April 19, 2019
# A (not so) Gentle Introduction to the PCP Theorem – Part 1

### Can a Proof be Checked Without Reading it?

### Introduction: The PCP Theorem and What’s to Come

## Table of Contents

- The Absurd Promise of the PCP Theorem
- Decision Problems and Complexity Classes
- The Complexity Classes **P** and **NP**
  - **P**
  - **NP**
  - NP-completeness
- The PCP Complexity *Classes*
  - PCP Parameters
  - PCP Verifiers, Random Coins and Relevant Bits of Proofs
  - Membership to a PCP class
  - Big O’s in PCP Classes
  - One Half of the PCP Theorem
- The Insanity of PCPs
- Mathematical Proofs are Fragile
- Why Naive PCP Provers Fail and the Power of PCP Proofs
- The Weak PCP Theorem
- Strong PCP vs Weak PCP
- What’s Next?

*This blog post was written by Olivier Bégassat, Applied Researcher at PegaSys Protocol Engineering. It is the first post of a two-part series on Probabilistically Checkable Proofs.*

Zero knowledge proof systems and/or computational integrity proof systems are pretty hot right now (and have been for a while). If you have an interest in blockchains and the cryptography/computer science that goes into them, chances are you’ve read into SNARKs, STARKs, Bulletproofs, Aurora and others, and know of their striking (present and future) applications both to privacy (Zcash) and scalability (Ignis, StarkDEX, scaling Ethereum…). In this series of blog posts, I set out to describe a foundational result that underpins all of them, the so-called PCP Theorem.

*PCPwut*? Never heard of PCPs? Here are some places you may have come across Probabilistically Checkable Proofs (PCPs):

- Both Vitalik Buterin’s second blog post on STARKs and his second blog post on SNARKs briefly hint at them.
- The STARK white paper which builds on “a strict generalization” of the PCP model called the Interactive Oracle Proof (IOP) model.
- More recently, PCPs were a major theme in Chiesa’s presentation on Aurora at the 2019 Stanford Blockchain Conference.

PCPs play a prominent role, both as a tool and conceptually. As the provocative subtitle suggests (we borrowed it straight from the abstract of Madhu Sudan’s paper Probabilistically checkable proofs), the PCP theorem does something magical, something that on the face of it is quite obviously impossible: to check a proof without reading it, in the sense that reading a random 15 characters from a book doesn’t constitute reading that book.

The PCP theorem says that there is a *proof format* (a way of writing proofs in support of an assertion) quite distinct from the usual proof format encountered in mathematics and elsewhere. This proof format allows a verifier to read a *constant* number of bits from *random spots* in a proof, do some simple computations, and make an informed decision as to the assertion’s *overall correctness*. What that means, and how mindbendingly awesome it is, isn’t all that obvious, and is worth learning about.

Now for the bad news: to correctly state and understand the PCP theorem, one needs to do a little groundwork. The PCP Theorem is a result in complexity theory. It is the assertion that two (not obviously related) complexity classes, **NP** and something called **PCP**, are in fact equal:

**NP** = **PCP**_{1,1/2}(O(ln(n)), O(1))

Don’t fret if this equality is neither meaningful to you, nor in the least bit impressive. That simply means that you are my target audience! And rest assured that before we dive into the weeds, I’ll provide an example of what PCPs can do. Indeed, the purpose of this series is three-fold:

- Explain, starting from scratch, what this theorem means (post 1),
- Convince you that this is an insane statement which shouldn’t be true (post 1),
- Give you an idea why, actually – as crazy as it sounds, there might be something to it! (post 2)

These goals are reflected in the structure of the posts. We start with a made-up story (in a fictional universe where rubber ducks are all the rage), which illustrates the absurd promise of the PCP theorem. We move on to a refresher on complexity theory and its vernacular – we can’t do this without some formalization.

If you’re already familiar with decision problems, complexity classes, and **P** and **NP**, you may want to skip ahead to where we discuss the PCP complexity classes and their parameters. I won’t suppose prior knowledge on the subject, although some familiarity won’t hurt. Once we know what NP decision problems are, and what it means for a decision problem to have PCPs, we will spend some time trying to illustrate why a naïve approach to constructing a PCP prover/verifier system fails, and why the promise of PCPs is so surreal; we do this in order to convince the reader that the PCP theorem truly is shocking.

Hopefully, you will emerge somewhat bewildered and sceptical as to the main claim. In a second blog post, we go on to prove a weak form of the PCP theorem, though even this weak statement is surprising.

- The Absurd Promise of the PCP Theorem
- Decision Problems and Complexity Classes
- The Insanity of PCPs
- The Weak PCP Theorem and Where We’re Headed

### The Absurd Promise of the PCP Theorem

Let me drive home the absurdity of what the PCP theorem claims is possible.

Imagine you are in the business of selling rubber ducks. These innocuous items have been known on rare occasions to explode, which hasn’t been good for business. Quality control, the manufacturer *SafeDuckie* insists, is one form of insurance: “every once in a while, we grab a random sample of a few hundred duckies fresh off the factory lines and have them undergo extensive tests”. This is a perfectly fine way to provide you and your customers with a great statistical guarantee that, say, 99.9% of the rubber ducks you are selling won’t explode in their users’ hot tubs. But it certainly would provide you, the retailer, with *no guarantee at all* that the batch of rubber ducks you’re about to stock your shelves with won’t contain *a single* defective rubber duck.

Are there better guarantees? The manufacturer could claim to individually test all the rubber ducks they ship out. But why should you trust them? Wouldn’t you need to run your own tests on all the items to make sure the manufacturer’s tests were done correctly? “There seems to be no way around it,” you think, “and with the number of rubber ducks I’m selling every day, I just don’t have the resources for it.” With costly lawsuits pouring in from the four corners of the galaxy blaming you in part for damages and a 0% tolerance policy seemingly too costly, you consider giving up on the lucrative rubber duck business, but in a last-ditch attempt, ask the manufacturer to find a fix.

Now, one day, as you are expecting one of the last scheduled rubber duck shipments (100 trillion rubber ducks, no less, as you are the main retailer of rubber ducks in the galaxy), a representative from *SafeDuckie*, someone you’ve known and dealt with for years and whom you trust, walks into your office all excited, hard drive in hand, and tells you to connect the device to your computer in order to grab 3 random bits from the one file it contains (a *huge* sprawling mess of a file), and to run a program from a thumb drive using these bits and their positions in the file as inputs.

Slightly taken aback you agree, grab some bits and their locations within the file, and let the program run for a few seconds. As the result appears on your screen, “PASS”, your friend pats you on your slimy back and says: “Perfect! This, my friend, is a 50% guarantee that out of the 100 trillion rubber duckies I’m delivering to you today *not a single one* is of the spontaneously exploding type. Do it again!” “PASS”! “That guarantee is now more than 75%! Do it *120 times*!” You would be forgiven for doubting these claims, but as the 120 checks pass, your friend continues: “The chance that even one duckie is bad is less than 1 in 2^{120}… less than one in a trillion trillion trillion”.

Surely your friend must be pulling your tentacle, and slightly annoyed you ask for an explanation: “I made you run a PCP verifier on my PCP proof for the decision problem **NoExplodingDuckie** on the shipping! Works like a charm, even for rubber duckies!”

Now if that business partner of yours is indeed making you run a legitimate PCP verifier on his PCP proof, then the above would be sound: you could indeed have some justifiable level of confidence (~50%) that *all* 100 trillion rubber ducks are fine, having only queried 3 bits or so from the file (the PCP proof). And if all 120 tests pass, you could have great confidence (>99.9999999999999999999999999999999999%) that *all* 100 trillion rubber ducks are fine (i.e. not a single one is defective), having queried only 360 random bits or so from the file, and done modest computations. Notice how **3 random queries somehow implicate the totality of 100 trillion rubber ducks**…
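The arithmetic behind the representative’s claims is just repeated halving: each independent run of a sound verifier lets a false proof slip through with probability at most 1/2. A quick sketch:

```python
def residual_doubt(k: int) -> float:
    """Probability that a flawed shipment survives k independent PASSes,
    assuming each run of the verifier has soundness error at most 1/2."""
    return 0.5 ** k

# One PASS leaves 50% doubt (i.e. a 50% guarantee), two PASSes leave 25%
# (a 75% guarantee), and 120 PASSes leave less than 1 in 10**36.
assert 1 - residual_doubt(1) == 0.5
assert 1 - residual_doubt(2) == 0.75
assert residual_doubt(120) < 1e-36
```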

Better still, you don’t have to trust your partner (and the program on his thumb drive) on this. You could design your own **NoExplodingDuckie** verifier. The great news for you is that designing a PCP verifier for the decision problem **NoExplodingDuckie** isn’t all that hard once the proof format is established. We will see in the next post that the job of the verifier is really simple; the hardest part is computing a few matrix products.

In this post, we will address the question of what a PCP verifier is and what properties it should have. It is a kind of machine (program) designed for a specific decision problem that takes a proof object as input, tosses random coins, computes and returns either 0 or 1: **perfect proofs satisfy the verifier 100% of the time, and there are no (statistically) convincing false proofs**. Its internal workings, the proof format and the reasons behind its satisfactory behaviour, will be given in the second post.

As was said earlier, we need to lay some groundwork. The PCP theorem is a result of complexity theory, and so we begin with a refresher on it.

### Decision Problems and Complexity Classes

The central question complexity theory addresses is that of quantifying the difficulty of doing certain computations, either in absolute terms (space, time complexity) or relative to other computations (polynomial reductions, NP-completeness and other forms of completeness…). Among all computational problems, we distinguish those whose output is either 1 or 0, YES or NO, i.e. computations meant to decide whether an input has a given property or not. These problems are dubbed decision problems.

To be precise, **decision problems** are collections of mathematical problems (usually of the same type). A problem that appears in such a collection of problems is called an **instance** of that decision problem. A simple decision problem is that of parity: decide whether an integer is even or odd. Typically decision problems can be couched in convenient informal language. Here are some examples of decision problems, as well as a description of their instances (we will mention **3-GRAPH** and **3-SAT** here and there, and spend some time on **QUADEQ** in the second blog post):

- **2-GRAPH**: “Decide whether a given graph is 2-colourable or not”. A graph is 2-colourable if there is a way to colour all of its vertices using only 2 colours so that no two adjacent vertices (i.e. vertices linked by an edge) are of the same colour. An instance of **2-GRAPH** will simply be a finite graph G: G is a “YES” instance of **2-GRAPH** if it is 2-colourable, and a “NO” instance otherwise.
- **3-GRAPH**: “Decide whether a graph is 3-colourable or not”. The question is the same as for **2-GRAPH** except that you now have three colours on your palette. Again, an instance of the decision problem **3-GRAPH** will be a finite graph G, with G being a “YES” instance if it is indeed 3-colourable, and a “NO” instance otherwise.
- **SAT**: “Decide whether a given boolean formula has a satisfying assignment or not”. An instance of **SAT** is a boolean formula φ(x_{1},…,x_{n}) in n boolean variables, say φ(x,y,z)=(x∨y∨¬z)∧(¬y∨z). In general, a boolean formula is constructed by combining the variables x_{1},…,x_{n} using logical operations such as AND (∧), OR (∨), NOT (¬), XOR (+),… and a satisfying assignment is an assignment of values v_{1},…,v_{n}∈{0,1} to the variables x_{1},…,x_{n} such that the expression evaluates to 1.
- **2-SAT**: “Decide whether a given boolean formula in 2-conjunctive normal form has a satisfying assignment”. This is the same problem as **SAT**, except that the boolean formulas have to come in a very special form: a conjunction of clauses each containing two variables (or their negations): (¬)x_{i}∨(¬)x_{j}.
- **3-SAT**: “Decide whether a given boolean formula in 3-conjunctive normal form has a satisfying assignment”. Like **2-SAT**, but now clauses contain three variables (or their negations): (¬)x_{i}∨(¬)x_{j}∨(¬)x_{k}.
- **QUADEQ**: which we will describe in detail in the second post.
- **COMPUTATIONAL INTEGRITY**: as described in chapter 3 of Evgenya Pergament’s master’s thesis “Algebraic RAM”.

For greater uniformity, it is standard to restate these decision problems in the framework of “languages recognized by a Turing Machine (maybe with some constraints on run time or memory)”. We will not bother with this complication, but it exists and provides a universal context within which to express decision problems.

It is important to have some **complexity measure** for instances of a decision problem ∆. A complexity measure assigns every instance δ of ∆ some positive integer n = n(δ) meant to quantify how large/complex an instance δ is. It will be in terms of this complexity measure that we gauge the hardness of a computation: can it be executed in polynomial time (relative to the complexity of instances), exponential time (relative to the complexity of instances), logarithmic memory (relative to the complexity of instances)…?

Complexity measures are usually very simple, such as: how many boolean variables appear in a boolean formula? How many edges and/or vertices does a graph have? However, there are often very different *encodings* that make sense for any given problem, and usually, some complexity measures are sensible for some encoding and not for others.

Consider the simple Parity problem already mentioned. An instance of Parity might be δ = “nineteen”. We may express δ in unary as a list of nineteen symbols “1”, 1111111111111111111 = 1^{19}, in binary as 10011, in decimal as 19, in hexadecimal as 13, etc. The way we encode the problem impacts the complexity we may want to attach to it. It seems natural, for instance, to define the complexity of the instance “nineteen” of the parity decision problem to be 19 in the unary encoding (19 “1”s are needed), 5 if we use binary encoding, as 5 bits are needed to express 19 in that base, and 2 in decimal and hexadecimal. Here the only distinction that really matters is between unary and all the other encodings: unary is always exponentially larger.
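The encoding sizes for the instance “nineteen” are easy to tabulate; a quick illustration:

```python
n = 19  # the instance "nineteen" of the Parity decision problem
encodings = {
    "unary":   "1" * n,         # nineteen symbols "1"
    "binary":  format(n, "b"),  # "10011"
    "decimal": str(n),          # "19"
    "hex":     format(n, "x"),  # "13"
}
# the complexity attached to the instance under each encoding
sizes = {name: len(s) for name, s in encodings.items()}
assert sizes == {"unary": 19, "binary": 5, "decimal": 2, "hex": 2}
```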

This example might seem a little contrived, but is less so than one might think: decision problems such as **COMPUTATIONAL INTEGRITY** (which we won’t explore) take as input some form of tuple (M,T,B,…), where M is a Turing Machine, B represents a set of “boundary constraints” imposed on the execution, and T is the number of clock cycles we are allowing, and ask whether there is an execution trace of T clock cycles of M in line with, among other things, the constraints B. Encoding T in binary makes the witness (the execution trace) exponentially large compared to the input.

### The Complexity Classes **P** and **NP**

Some of the Decision Problems listed above have straightforward solutions. Consider **2-GRAPH** for instance. Deciding whether a (connected, say) graph is 2-colourable is quite easy and has an efficient solution: pick a node at random, colour it red, colour all its immediate neighbours black, colour their immediate neighbours red and so on. This algorithm ends with you either colouring all the nodes and thus producing a 2-colouring, or with you finding yourself, at some stage, having to paint over a vertex you already coloured earlier using a different colour, in which case the graph isn’t 2-colourable. Runtime is that of exploring the graph.

This procedure is then extended to general finite graphs: first, you run a polynomial time algorithm to determine the connected components, Breadth First Search or Depth First Search for instance, and then repeat the above procedure once in every component. Solving **2-SAT** is less obvious, but there exists a very nice polynomial time algorithm deciding whether a boolean formula in 2-Conjunctive Normal Form is satisfiable; you can find a description of this algorithm here.
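The colouring procedure just described fits in a few lines of Python (a sketch using breadth-first search, with “red”/“black” replaced by 0/1):

```python
from collections import deque

def two_colour(adjacency):
    """Try to 2-colour a graph given as {vertex: set of neighbours}.
    Returns a colouring dict, or None if the graph is not 2-colourable."""
    colour = {}
    for start in adjacency:            # one traversal per connected component
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in adjacency[v]:
                if w not in colour:
                    colour[w] = 1 - colour[v]
                    queue.append(w)
                elif colour[w] == colour[v]:
                    return None        # forced to repaint an earlier vertex
    return colour

# A path on 3 vertices is 2-colourable; a triangle is not.
path = {0: {1}, 1: {0, 2}, 2: {1}}
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
assert two_colour(path) == {0: 0, 1: 1, 2: 0}
assert two_colour(triangle) is None
```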

The decision problems Δ for which the answer for any instance δ can be decided in polynomial time (in the complexity) without further input populate the Complexity Class **P**. **P** stands for “Polynomial time”; a decision problem Δ being in **P** means that there is a procedure deciding YES or NO such that the number of steps needed to do the computation for an instance δ∈Δ won’t exceed P(n), where P is some (usually low degree) polynomial depending on the decision problem Δ, and n is the complexity measure of the instance.

On the other hand, there are decision problems for which finding solutions may be very hard (or not), but which have the redeeming quality that **verifying a proposed solution is relatively easy**. This is the case of **SAT**, **2-SAT**, **3-SAT**, **2-GRAPH**, **3-GRAPH**, **QUADEQ** and many others. Consider an instance φ(x_{1},…,x_{n}) of **SAT**: it’s straightforward to compute the output of a boolean formula on a particular assignment of its variables, and thus to check whether a proposed assignment (ϵ_{1},…,ϵ_{n})∈{0,1}^{n} is indeed a solution of the equation φ(x_{1},…,x_{n})=1. However, deciding if φ *has* a satisfying assignment potentially requires an exhaustive search of exponential size. For instance, if φ is a boolean formula in 5 variables, there are 2^{5}=32 possible assignments of its variables to sift through.
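The contrast between checking and searching is easy to see in code. Sticking with the toy formula φ(x,y,z)=(x∨y∨¬z)∧(¬y∨z) from earlier, a sketch:

```python
from itertools import product

def phi(x, y, z):
    # φ(x, y, z) = (x ∨ y ∨ ¬z) ∧ (¬y ∨ z)
    return (x or y or (not z)) and ((not y) or z)

def is_satisfying(formula, assignment):
    """Verifying a proposed assignment: a single evaluation of the formula."""
    return formula(*assignment)

def has_satisfying_assignment(formula, n):
    """Deciding satisfiability naively: up to 2**n evaluations."""
    return any(formula(*bits) for bits in product((False, True), repeat=n))

assert is_satisfying(phi, (True, False, True))   # one cheap check
assert has_satisfying_assignment(phi, 3)         # exhausts 2**3 = 8 candidates
```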

Finding a satisfying assignment seems to require a trial and error exhaustive search.

The situation is analogous for **3-GRAPH**: it might take a computer millennia to exhaust all the 3-colourings of a somewhat large graph, but if someone comes along with a 3-colouring of said graph, it can be quickly checked: go through all the edges of the graph and compare the colours at its extremities, if any of the edges are monochromatic reject the solution, and accept otherwise.
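The edge-by-edge check just described is a one-liner; a sketch of such a colouring verifier:

```python
def check_colouring(edges, colour, k=3):
    """Verify a proposed k-colouring of a graph given by its edge list:
    one pass over the edges, rejecting on any monochromatic edge."""
    if any(c not in range(k) for c in colour.values()):
        return False
    return all(colour[u] != colour[v] for u, v in edges)

# A 5-cycle is 3-colourable; an improper colouring is caught immediately.
cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
assert check_colouring(cycle5, {0: 0, 1: 1, 2: 0, 3: 1, 4: 2})
assert not check_colouring(cycle5, {0: 0, 1: 1, 2: 0, 3: 1, 4: 0})
```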

This situation occurs very frequently: hard decision problems for which positive answers (“YES”) can be succinctly justified if one has access to special data. This extra piece of data (a satisfying assignment for a boolean formula or an explicit 3-colouring, say) that witnesses to the fact that δ is a YES instance of a decision problem Δ, is called **witness data** or a **witness** (for δ).

Note that there is absolutely no requirement for uniqueness of the witness data. For instance, a certain boolean formula in 5 variables might evaluate to 1 under exactly four assignments of its variables, giving four different witnesses: w_{1} = (0,0,0,1,1), w_{2} = (0,1,0,1,0), w_{3} = (0,1,0,1,1) and w_{4} = (1,1,0,0,1).

The ubiquity of these decision problems leads complexity theorists to define the complexity class **NP** which contains all such decision problems. To be precise, **NP** consists of all decision problems Δ for which there exists a deterministic Turing Machine V (a verifier) and a polynomial P_{Δ} such that

- V takes two inputs, an instance δ∈Δ and a string w (a witness), and outputs either 1 or 0,
- the YES instances of Δ are precisely those for which there exists witness data w such that V(δ,w)=1 within P_{Δ}(n) steps, where n is a complexity measure of the instance δ.

One usually requires w itself to be of polynomial size (in n). However, V can only parse polynomially many characters from w in P_{Δ}(n) time steps, and so this constraint can be lifted. This idea of “relevant bits” will appear again when we discuss “relevant bits of PCP ‘proofs’” for PCP verifiers.

Note that there is no requirement that NO instances be similarly detectable in polynomial time (or using a polynomial size witness). Of course, short witness data showing that a specific instance of some decision problem Δ is a NO instance might still exist. For instance for **3-GRAPH**, if a graph G contains as a subgraph a copy of the complete graph on 4 vertices K_{4}, or more generally any graph that we already know not to be 3 colourable, then this observation constitutes a witness for G being a NO instance of **3-GRAPH**.

However, not all graphs that fail to be 3-colourable do so because they contain a copy of K_{4}. This criterion is but one out of infinitely many such criteria we could come up with that provide definitive proof of non-3-colourability in some instances, but necessarily miss most of the NO instances. And there seems to be no efficiently verifiable criterion equivalent to the failure of 3-colourability. Witness data for non-3-colourability can still be constructed, but would presumably be immensely large: on the order of 3^{V}×ln(E), where V is the number of vertices and E the number of edges, if we were to list, for each and every 3-colouring of the vertices, one monochromatic edge.

Alternatively, **NP** can be thought of as the class of decision problems Δ solvable in polynomial time by a nondeterministic Turing Machine. Such a machine (program) potentially has multiple choices to choose from at each clock cycle. As it works ahead, it builds up a tree of branching execution paths (like a tree representing all the ways of assigning values to 5 boolean variables). A nondeterministic machine halts in time ≤T(n) for some time function T if its tree of execution paths contains a halting state at depth ≤T(n). The witness data (itself of polynomial size) helps it make the right choices and finish in polynomial time.

#### NP-completeness

We can’t avoid the topic of **NP-completeness** altogether. We shall only say that there are, in **NP**, some problems that are essentially *maximally hard*, in the sense that there are no substantially harder problems in **NP**. **SAT** is the original example, **3-SAT** and **3-GRAPH** are other examples. To make this precise one needs the notion of polynomial reduction. We will provide a very easy such reduction later, from **3-SAT** to **QUADEQ**, establishing the fact that **QUADEQ**, too, is NP-complete.

NP-completeness will be convenient in reducing the amount of work we have to do in the second post. Rather than constructing PCP verifiers for all NP problems Δ∈**NP**, the theory of NP-completeness allows us to do it once and for all for a (well-chosen) NP-complete problem.

We now turn from the somewhat familiar (**P** and **NP**) to the potentially unfamiliar.

### The PCP Complexity *Classes*

**PCP** complexity *classes* come in many flavours. One way of defining PCP complexity classes is in terms of four parameters: two real numbers 0≤α<β≤1 and two functions r:N→N and q:N→N, where N={0,1,2,…} is the set of nonnegative integers. The associated complexity class is written as PCP_{α,β}(r(n),q(n)). The paradigm resembles that encountered in our discussion of **NP**: there will be a **verifier**, a machine that checks an extra piece of data related to a witness. There will also be a **prover**, a machine that produces this extra information from witness data. Fittingly, this data is now called “**proof**”.

We shall describe the decision problems that belong to these PCP classes. But first, what do the parameters α,β,r(n),q(n) represent?

The first two parameters α and β represent upper and lower bounds on probabilities. They are measures of the *statistical performance of the verifier*: α quantifies how likely a verifier is to accept a correct proof (ideally α=1, i.e. correct proofs are always accepted), β quantifies how likely a verifier is to accept a false proof (ideally we want β close to 0, i.e. false proofs should only have a small chance of fooling the verifier).

The second two parameters are positive functions r(n) and q(n): they dictate the *behaviour of the verifier*. Specifically, the verifier will toss r(n) coins and do up to q(n) reads from a proof. The n that appears as the argument is a complexity measure of the instance.

As was the case for **NP** decision problems, membership of a decision problem Δ in a particular **PCP** class hinges on the existence of a special kind of “verifier” V. As in the case of NP, this verifier will rely on extra data in its execution and must run in polynomial time. In the case of NP, this extra data was a witness: a perfect proof of membership of an instance δ to the set of YES instances of Δ. In the case of PCP verifiers, this extra piece of data will be a so-called proof, a string of data π provided by a prover. There is, again, no inherent restriction on the size of π, but we will see below that the verifier’s specification places tangible constraints on the number of π’s bits that are actually relevant to the verifier.

We will assume that the verifier, when it has been given a proof π, has **oracle access** to its bits (i.e. it can read any one of its bits in constant time, regardless of its position within the proof). Such a verifier with oracle access to a given proof π is written V^{π}. Furthermore, the verifier is now a randomized Turing Machine, rather than a deterministic one: the verifier will have to toss a number of random coins and use the resulting randomness to decide where to read the proof.

As previously indicated, the values r(n) and q(n) directly constrain the way the verifier is allowed to behave. Indeed, the verifier V^{π}, with oracle access to a proof π, on an input δ of complexity n may only:

- toss r(n) random coins (i.e. it can create r(n) random bits),
- query the proof π at q(n) places,
- use these results to do a polynomial time (in n) computation and output either 1 or 0.

The verifier uses the r(n) coin tosses to determine *which* q(n) bits from the proof to query. We assume that once it’s tossed the coins, it decides where to look in π. There are thus only O(q(n)2^{r(n)}) bits of π relevant to the verifier. Providing larger proofs is thus pointless.

In the PCP verifier we will present in the next post, these polynomial time computations will mainly be matrix products and dot products.
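The constraints above can be sketched schematically. To be clear, this is only the *shape* of a PCP verifier, not an actual PCP construction: the way positions are derived from the coins and the `decide` computation are placeholders for the real thing.

```python
import random

def pcp_verifier(n, proof_oracle, r, q, decide, seed=None):
    """Schematic PCP verifier: toss r(n) coins, let the coins (and nothing
    else) determine q(n) query positions, read those bits through the
    oracle, then run a polynomial-time decision on what was read."""
    rng = random.Random(seed)
    coins = [rng.randrange(2) for _ in range(r(n))]
    # The positions depend only on the coin tosses, so at most
    # q(n) * 2**r(n) bits of the proof are ever relevant to the verifier.
    index = int("".join(map(str, coins)), 2) if coins else 0
    positions = [(index + i) % (2 ** r(n)) for i in range(q(n))]
    bits = [proof_oracle(p) for p in positions]
    return decide(coins, positions, bits)  # outputs 0 or 1

# Toy run: an all-ones "proof" and a decision rule that accepts all-ones reads.
accept_all_ones = lambda coins, pos, bits: int(all(bits))
assert pcp_verifier(4, lambda p: 1, lambda n: 3, lambda n: 2, accept_all_ones) == 1
assert pcp_verifier(4, lambda p: 0, lambda n: 3, lambda n: 2, accept_all_ones) == 0
```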

#### Membership to a PCP class

Let us now explain what it means for a decision problem Δ to belong to the complexity class PCP_{α,β}(r(n),q(n)). For Δ to belong to PCP_{α,β}(r(n),q(n)) **there must exist a verifier** V as above such that, for every instance δ∈Δ of complexity n:

- **Completeness:** if δ is a YES instance, there exists a proof π such that P[V^{π}(δ)=1]≥α;
- **Soundness:** if δ is a NO instance, then for every proof π, P[V^{π}(δ)=1]≤β.

Note the presence of **probabilities** P[⋯] in this statement. These probabilities are taken with respect to the r(n) random coin tosses the verifier does: there are 2^{r(n)} equally likely possible outcomes of r(n) coin tosses, and we average over these coin tosses.

We only concern ourselves with the special case α=1 and β=1/2, and define the following, larger PCP class: **PCP**_{1,1/2}(O(ln(n)), O(1)), the union of the classes PCP_{1,1/2}(Aln(n)+B, Q) over all constants A, B and Q.

This is, by definition, the complexity class of all decision problems Δ such that there exists a randomized Turing Machine V that reads, besides an instance δ, bits from a proof π and such that

- **Completeness:** if δ is a YES instance, there exists a proof π such that, whatever coins V tosses, V^{π} accepts δ,
- **Soundness:** if δ is a NO instance, then for any proof π, V^{π} rejects at least half of the time.

All the while V^{π} is only allowed to toss at most Aln(n)+B random coins and read a *constant* number Q of bits from the proof. We can now make sense of the PCP theorem **NP** = **PCP**_{1,1/2}(O(ln(n)), O(1)).

#### One Half of the PCP Theorem

One of the inclusions in the PCP theorem, **PCP**_{1,1/2}(O(ln(n)), O(1)) ⊂ **NP**, isn’t all that hard. The hard one is the opposite inclusion, **NP** ⊂ **PCP**_{1,1/2}(O(ln(n)), O(1)), that is, the existence of PCP verifiers (and provers) for NP decision problems.

So how do you prove the inclusion **PCP**_{1,1/2}(O(ln(n)), O(1)) ⊂ **NP**? Suppose you have a decision problem Δ for which there exists a PCP verifier V. Suppose V tosses at most Aln(n)+B random coins and makes at most Q queries to proofs, for some positive constants A, B and Q. There are, at most, 2^{Aln(n)+B}=O(n^{A}) possible outcomes to these coin tosses.

Now create the NP verifier W as follows: witness data is provided by a valid PCP proof π, and on input δ, W runs V^{π} for all O(n^{A}) possible coin-toss outcomes. Since V^{π} runs in polynomial time, each of these computations costs at most P(n) steps (for some polynomial P), and so the execution cost of W is at most O(n^{A}P(n)), i.e. W runs in polynomial time.

Now let W output **Accept** if all of these tests pass, and **Reject** if even one fails. By hypothesis on the decision problem Δ having **PCP**_{1,1/2}(O(ln(n)), O(1)) proofs, every YES instance of Δ has a witness (a PCP proof π) that leads W to accept 100% of the time, while no witness can be constructed for a NO instance that wouldn’t be rejected at least half of the time by V; in particular, W rejects every purported witness for a NO instance. Furthermore, since W runs in polynomial time, W works as an NP verifier for Δ.
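The construction of W fits in a few lines (here V is a stand-in for an arbitrary PCP verifier, and the complexity measure n is taken to be the input length purely for illustration):

```python
from itertools import product

def np_verifier_W(delta, pi, V, r):
    """Derandomize a PCP verifier V: enumerate all 2**r(n) coin-toss
    outcomes and accept only if every single run of V accepts."""
    n = len(delta)  # stand-in complexity measure of the instance delta
    return all(V(delta, pi, coins) for coins in product((0, 1), repeat=r(n)))

# Toy stand-in PCP verifier: accepts iff the first bit of the proof is 1.
V = lambda delta, pi, coins: pi[0] == 1
assert np_verifier_W("abc", [1], V, lambda n: 2) is True
assert np_verifier_W("abc", [0], V, lambda n: 2) is False
```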

Let us pause for a brief moment and talk about proofs.

### Mathematical Proofs are Fragile

The published literature of mathematical papers contains a fair share of false proofs. Mistakes like to hide in nasty computations, or in “simple”/“obvious” arguments mentioned in passing. Mishaps slip through the cracks despite papers passing many checks: scrutiny by colleagues, peer-reviews by anonymous referees, presentations at conferences…

Mathematical papers (written in the standard format: lemma, proposition, theorem, corollary and proofs) are very **fragile**: a single point of failure may prove fatal, and that singular mistake may go completely unnoticed and unchallenged for decades. Take a look at these Mathoverflow threads if you’d like to know more about widely accepted mathematical results that were later shown to be wrong, examples of common false beliefs in mathematics and examples of interesting false proofs. This 2014 lecture by Vladimir Voevodsky provides further examples from the mathematical work of a very prominent mathematician. This problem proved to be such a thorn in his side that it completely changed the course of his work.

As pointed out in Sudan’s account, this fragility is not an inherent feature of proofs, but rather a quirk of the way we write them. The PCP format is a proof-writing format that amplifies mistakes, to the point where mistakes propagate irreparably throughout the proof, contaminating it to such a degree that reading as little as 3 bits may prove sufficient to reject false proofs with non-negligible probability.

### Why Naive PCP Provers Fail and the Power of PCP Proofs

In our day to day lives, we often have to rely on probabilistic principles. When you’re buying a bag of clementines you might feel one or two of them, discard the bag if one is soft, and base your purchase on this small sample otherwise. Of course, there might still be a rotten one in the bag.

One might try to reproduce this probabilistic verification with a bounded number of queries for a proposed solution to a decision problem. However, random sampling of NP witness data is very different from how a PCP prover works. Take the example of **3-GRAPH**: let G be a large graph, and let us try the following “PCP-proof format”:

- **Proofs**: NP witnesses (say, a supposed 3-colouring of a large graph G),
- **Verification**: query the proof at a fixed number of random places and run checks (as in: choose 1000 random edges in the graph, inquire for every such edge the colour assignment of its endpoints, and verify that none of them are of the same colour).

This will work out fine for small graphs, but completely fails as a **PCP** verifier for large graphs. True 3-colourings always pass this test, but, as the SafeDuckie retailer understood all too well, there is no uniform lower bound on the probability of a false proof being found out.

Imagine a deceptive prover able to 3-colour a graph with one billion edges, except for a single monochromatic edge. Maybe the graph is 3-colourable, maybe it isn’t. Having expended so much work and having come so close, the prover decides to present this flawed proof of 3-colourability of G and hopes for the best. The verifier tosses some coins and decides where to perform its 1000 random edge queries. With overwhelming probability, it won’t stumble on the one monochromatic edge and will therefore accept the proof.

This behaviour only gets worse for graphs with a great number E of edges that are 3-colourable save for one edge: the probability of the test returning false is roughly 1000/E ≪ 1. And so, while such a graph G may not be 3-colourable, with overwhelming probability the above verifier will accept the false proof provided by the *almost* 3-colouring.
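The numbers in this scenario are easy to check: with E edges, a single monochromatic edge, and q edge queries sampled independently and uniformly (with replacement), the chance of never hitting the bad edge is (1 − 1/E)^q. A quick sketch (the function name is ours):

```python
def miss_probability(E, q):
    """Probability that q uniform, independent edge-queries (with
    replacement) all miss the single bad edge among E edges."""
    return (1 - 1 / E) ** q

# One bad edge among a billion, 1000 queries:
p = miss_probability(10**9, 1000)
print(f"{p:.6f}")  # 0.999999: the flawed proof is almost always accepted
```

This matches the 1000/E estimate above: the rejection probability is 1 − p ≈ q/E, which is about one in a million here.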

Thus, for such *almost yet not quite* 3-colourable graphs, a dishonest prover can create (false) proofs of 3-colourability that the verifier will accept with arbitrarily great probability, thus violating the second axiom.

This second axiom is thus the surprising and hard part. The first axiom, especially when α = 1, seems like a common-sense requirement: for every YES instance of a decision problem with PCPs, *there should exist some* proof that convinces the verifier. The second axiom is where the magic happens. *Whatever* proof a dishonest prover conjures up in support of a NO instance, no matter how close that NO instance is to being a YES instance (say, a large graph 3-colourable but for one monochromatic edge, or a large boolean formula satisfiable but for one clause), it’s all the same in the eyes of the verifier: while reading only a fixed number of bits from the proof, the verifier is fooled less than half of the time.

Note that this approach only fails for graphs that are very nearly 3-colourable. In other words, this method of probabilistic testing would work if we could somehow restrict ourselves to a class of graphs that either are 3-colourable or *spectacularly fail* to be 3-colourable, in the sense that *whatever* 3-colouring one proposes, at least 50% of all edges are monochromatic (or any other *fixed* lower bound). Such a reduction can in fact be carried out, and it is a strategy for proving the PCP theorem in full generality.
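The reason this restriction saves the day is purely quantitative: if every proposed colouring leaves at least a fixed fraction ε of the edges monochromatic, then q independent uniform edge queries miss all bad edges with probability at most (1 − ε)^q, which shrinks exponentially in q. A sketch with illustrative names:

```python
def fraction_monochromatic(edges, colouring):
    """Fraction of edges whose two endpoints share a colour."""
    bad = sum(1 for u, v in edges if colouring[u] == colouring[v])
    return bad / len(edges)

def miss_probability_with_gap(eps, q):
    """Upper bound on the chance that q uniform, independent edge-queries
    all avoid a >= eps fraction of monochromatic edges."""
    return (1 - eps) ** q

# With a 50% gap, even 20 queries make fooling the verifier hopeless:
print(miss_probability_with_gap(0.5, 20))  # 0.5**20, less than one in a million
```

Crucially, this bound depends only on ε and q, not on the size of the graph, which is exactly the uniform lower bound on rejection that was missing before.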

Unsurprisingly, the full PCP theorem as stated above is a difficult result. Its proof is sophisticated and falls well outside the scope of this short note. Fortunately for us though, a weaker form of it is within reach. This weaker statement, let’s call it the **weak PCP theorem**, states:

**NP** ⊂ **wPCP**

where

**wPCP** = **PCP**_{1,1/2}(O(poly(n)), O(1)).
In short: decision problems in **wPCP** may require exponential-sized proofs, |π| ≤ Q·exp(n^{c} + C) for positive constants c, C and Q, while decision problems in **PCP** manage with polynomial-sized proofs, |π| ≤ Q·(n^{a} + B).

Contrasting the statement of the weak PCP theorem, **NP** ⊂ **PCP**_{1,1/2}(O(poly(n)), O(1)), with the strong PCP theorem, **NP** = **PCP**_{1,1/2}(O(log(n)), O(1)), the only difference is the number r(n) of random coin tosses the verifier is allowed to perform. Recall that this has a direct impact on the number of bits of the proof relevant to the verifier. In the strong PCP theorem, the relevant part of the proof has polynomial size. In the weak PCP theorem, however, the number of relevant bits is far greater, on the order of 2^{O(poly(n))}. An important consequence of this disparity is that proofs in the weak PCP context will tend to be *much*, *much* longer. Looking ahead, PCP proofs for the particular NP-complete decision problem **QUADEQ** for an instance with n boolean variables will have size 2^{n²} + n, while NP witness data is of size n.
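The link between coin tosses and proof length is simple counting: a verifier that tosses r(n) coins and makes q queries per run can only ever touch q·2^{r(n)} distinct positions of the proof, so the coin budget governs how long a useful proof can be. A toy computation, with illustrative parameters, makes the gap between the two regimes vivid:

```python
import math

def relevant_bits(r, q):
    """Upper bound on proof positions a verifier can ever read:
    q queries for each of the 2**r possible coin-toss outcomes."""
    return q * 2 ** r

n = 64
q = 3  # a constant number of queries, as in both theorems

# Strong PCP: r = O(log n) coins -> polynomially many relevant bits.
strong = relevant_bits(math.ceil(math.log2(n)), q)  # 3 * 2**6 = 192

# Weak PCP: r = poly(n) coins -> exponentially many relevant bits.
weak = relevant_bits(n, q)                          # 3 * 2**64

print(strong, weak)
```

With logarithmically many coins, the relevant portion of the proof stays polynomial in n; with polynomially many coins it blows up exponentially, which is why weak-PCP proofs can afford to be exponentially long.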


In the second post of this series, we’ll aim to give you an idea of what goes into proving this weak PCP theorem. There is a lot of ground to cover, and proving it requires some math:

- A little familiarity with linear algebra
- Simple probability

However, the general gist of the proof isn’t all that complicated (though it is pretty ingenious).

*Interested in learning more about the PCP theorem? Stay tuned for Part 2 of this series.*

*PegaSys is a thought leader in the Enterprise Ethereum space. Want to join the team? Check out our **list of open roles**.*