Lecture 16
In-class notes: CS 505 Spring 2025 Lecture 16
Error Reduction for BPP
Recall that a language $L$ is in BPP if there exists a strict polynomial-time probabilistic Turing machine $M$ such that for any $x \in \{0,1\}^*$, $\Pr[M(x) = L(x)] \geq 2/3$. Equivalently stated, for a deterministic Turing machine $M$ running in polynomial time, it holds that $\Pr_{r \leftarrow \{0,1\}^{p(|x|)}}[M(x, r) = L(x)] \geq 2/3$ for all $x \in \{0,1\}^*$, where $p$ is some polynomial.
Here, the error probability $1/3$ is convenient, but arbitrary. We’ll see that the class BPP is equivalently defined for any error probability at most $1/2 - n^{-c}$ for a constant $c > 0$.
Definition. For any function $\epsilon : \mathbb{N} \to [0, 1/2)$, the class $\mathrm{BPP}_{\epsilon}$ is the set of all languages $L$ such that there exists a probabilistic Turing machine $M$ running in strict polynomial time such that for all $x \in \{0,1\}^*$, $\Pr[M(x) = L(x)] \geq 1 - \epsilon(|x|)$.
Under the above definition, we can set $\epsilon(n) = 1/2 - n^{-c}$ for a constant $c > 0$, or even $\epsilon(n) = 2^{-n^{c}}$. We’ll show that this BPP class with reduced error is equivalent to the standard BPP definition.
Theorem. For all constants $c, d > 0$, it holds that $\mathrm{BPP}_{1/2 - n^{-c}} = \mathrm{BPP}_{2^{-n^{d}}}$.
Proof. Clearly $\mathrm{BPP}_{2^{-n^{d}}} \subseteq \mathrm{BPP}_{1/2 - n^{-c}}$, since $2^{-n^{d}} \leq 1/2 - n^{-c}$ for all constants $c, d > 0$ and large enough $n$. Next, we show the other direction, namely $\mathrm{BPP}_{1/2 - n^{-c}} \subseteq \mathrm{BPP}_{2^{-n^{d}}}$. We show this via the following claim.
Claim. Suppose $L$ is decidable by a PTM $M$ (in polynomial time) such that for all $x$, $\Pr[M(x) = L(x)] \geq 1/2 + |x|^{-c}$ for some constant $c > 0$. Then, for any constant $d > 0$, there exists a PTM $M'$ running in polynomial time such that for all $x$, $\Pr[M'(x) = L(x)] \geq 1 - 2^{-|x|^{d}}$.
We won’t show the full proof, but just the main ideas. The idea is that the machine $M'$ will simply run $M$ some polynomial number of times, and then output the majority of the outputs. Let $k = |x|^{2c + d}$. Then, $M'$ will independently run $M(x)$ $k$ times. Let $y_1, \ldots, y_k$ be the output bits of the $k$ independent runs of $M$. Then, $M'$ outputs $y$, where $y = 1$ if at least $k/2$ of the bits $y_i$ are $1$; otherwise, $y = 0$. It can be shown using the Chernoff bound that if $k \geq |x|^{2c+d}$, then $\Pr[M'(x) \neq L(x)] \leq 2^{-|x|^{d}}$ under the parameters we’ve set. Notice also that $M'$ runs in polynomial time since it runs $M$ a polynomial number of times.
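To spell out the Chernoff-bound step (a sketch; any polynomial $k \geq |x|^{2c+d}$ works, and the constants below are just one convenient choice): let $X_i$ be the indicator that the $i$-th run of $M$ is correct, so $\Pr[X_i = 1] \geq 1/2 + n^{-c}$ where $n = |x|$. The majority vote is wrong only if $\frac{1}{k}\sum_i X_i \leq 1/2$, i.e., the empirical average falls at least $n^{-c}$ below its mean. By the Chernoff-Hoeffding bound,
$$\Pr[M'(x) \neq L(x)] \;\leq\; \Pr\!\left[\frac{1}{k}\sum_{i=1}^{k} X_i \leq \frac{1}{2}\right] \;\leq\; e^{-2 k n^{-2c}} \;=\; e^{-2 n^{d}} \;\leq\; 2^{-n^{d}},$$
using $k = n^{2c+d}$ in the last equality.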
With the claim, any $L \in \mathrm{BPP}_{1/2 - n^{-c}}$ is decided by some polynomial-time PTM with success probability at least $1/2 + n^{-c}$, so the claim gives a polynomial-time PTM deciding $L$ with error at most $2^{-n^{d}}$. This shows that $\mathrm{BPP}_{1/2 - n^{-c}} \subseteq \mathrm{BPP}_{2^{-n^{d}}}$ for all constants $c, d > 0$.
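A minimal Python sketch of this majority-vote amplification. The names `amplified_decider` and `noisy_parity` are hypothetical; the toy base decider merely stands in for a PTM with success probability $1/2 + \epsilon$.

```python
import random

def amplified_decider(base_decider, x, k):
    """Run a randomized decider k times independently and output the
    majority vote. If each run is correct with probability at least
    1/2 + eps, the Chernoff bound gives error at most exp(-2 * k * eps**2)."""
    ones = sum(base_decider(x) for _ in range(k))
    return 1 if ones > k / 2 else 0

# Hypothetical toy base decider: answers "is x even?" correctly with
# probability only 0.55 (error 0.45, far worse than 1/3).
def noisy_parity(x):
    correct = 1 if x % 2 == 0 else 0
    return correct if random.random() < 0.55 else 1 - correct

if __name__ == "__main__":
    # With k = 1000 and eps = 0.05, the error is at most exp(-5), below 1%.
    print(amplified_decider(noisy_parity, 42, k=1000))  # almost surely 1
    print(amplified_decider(noisy_parity, 7, k=1000))   # almost surely 0
```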
BPP vs. Other Classes
Now, we examine the relationship between BPP and other complexity classes.
BPP vs P
First, we naturally ask: what is the relationship between P and BPP? Clearly $\mathrm{P} \subseteq \mathrm{BPP}$, since any deterministic Turing machine is also probabilistic (you can either set $p(n) = 0$, or have $M$ ignore the random input $r$). Many complexity theorists actually believe that $\mathrm{P} = \mathrm{BPP}$; this question concerns the rich fields of derandomization and hardness amplification, which we will not examine in this course.
BPP vs PH
At first glance, it is not clear what the relationship between the polynomial hierarchy and BPP is. It turns out that BPP sits low in the polynomial hierarchy.
Theorem. $\mathrm{BPP} \subseteq \Sigma_2^p \cap \Pi_2^p$.
Proof. Note that since $\mathrm{BPP} = \mathrm{coBPP}$ (BPP is closed under complement), it suffices to show either $\mathrm{BPP} \subseteq \Sigma_2^p$ or $\mathrm{BPP} \subseteq \Pi_2^p$. We show that $\mathrm{BPP} \subseteq \Sigma_2^p$.
Let $L \in \mathrm{BPP}_{2^{-n}}$ (which is equivalent to BPP by the previous theorem and claim). Using the DTM definition of BPP, there exists a DTM $M$ running in polynomial time such that for all $x \in \{0,1\}^n$, it holds that $\Pr_{r \leftarrow \{0,1\}^{m}}[M(x, r) = L(x)] \geq 1 - 2^{-n}$, where $m = p(n)$ for some polynomial $p$.
Let $x \in \{0,1\}^n$ for some $n$. We define a set $S_x \subseteq \{0,1\}^m$ as the set of all good strings $r$ for $x$. That is, $S_x = \{r \in \{0,1\}^m : M(x, r) = 1\}$; i.e., the set of all $r$ on which $M$ accepts $x$. Otherwise, if $M(x, r) = 0$, we say that $r$ is bad for $x$.
- Notice that if $x \in L$, it holds that $|S_x| \geq (1 - 2^{-n}) \cdot 2^m$. This is because $L(x) = 1$ when $x \in L$, and thus there must be at least $(1 - 2^{-n}) \cdot 2^m$ strings $r$ such that $M(x, r) = 1$.
- If $x \notin L$, it holds that $|S_x| \leq 2^{-n} \cdot 2^m$. This is because if $x \notin L$, then $L(x) = 0$, so there is at most a $2^{-n}$ fraction of strings $r$ such that $M(x, r) = 1$.
Now, the goal is to encode the set $S_x$ (more precisely, the gap between the two cases above) as a $\Sigma_2$ statement. We’ll need the following tool. For any $S \subseteq \{0,1\}^m$ and any vector $u \in \{0,1\}^m$, define $S \oplus u = \{r \oplus u : r \in S\}$. Now set $k = \lceil m/n \rceil + 1$.
Claim 1. If $x \notin L$, then for all $u_1, \ldots, u_k \in \{0,1\}^m$, it holds that $\bigcup_{i=1}^{k} (S_x \oplus u_i) \neq \{0,1\}^m$.
Proof of Claim 1. Notice that for any $u_i$, we have $|S_x \oplus u_i| = |S_x| \leq 2^{m-n}$. Then by a simple union bound, we have $\left|\bigcup_{i=1}^{k} (S_x \oplus u_i)\right| \leq k \cdot 2^{m-n} < 2^m$ for large enough $n$ (since $k$ is polynomial in $n$), so the shifts cannot cover all of $\{0,1\}^m$.
Claim 2. If $x \in L$, then there exist $u_1, \ldots, u_k \in \{0,1\}^m$ such that $\bigcup_{i=1}^{k} (S_x \oplus u_i) = \{0,1\}^m$.
Proof of Claim 2. We use the probabilistic method. If we can show that $\Pr_{u_1, \ldots, u_k}\left[\bigcup_{i=1}^{k} (S_x \oplus u_i) = \{0,1\}^m\right] > 0$ for uniformly and independently sampled $u_1, \ldots, u_k \leftarrow \{0,1\}^m$, then there must exist vectors $u_1, \ldots, u_k$ such that $\bigcup_{i=1}^{k} (S_x \oplus u_i) = \{0,1\}^m$. For $r \in \{0,1\}^m$, let $B_r$ denote the “bad event” that $r \notin \bigcup_{i=1}^{k} (S_x \oplus u_i)$. We show that $\Pr\left[\bigcup_{r} B_r\right] < 1$.
Consider $B_r$ for any fixed $r \in \{0,1\}^m$. We show that $\Pr[B_r] < 2^{-m}$. Let $B_r^{i}$ denote the event that $r \notin S_x \oplus u_i$. Equivalently stated, $B_r^{i}$ is the event that $r \oplus u_i \notin S_x$. Notice that $B_r = \bigcap_{i=1}^{k} B_r^{i}$.
Now, since $u_i$ is uniformly sampled, we know that $r \oplus u_i$ is uniformly distributed in $\{0,1\}^m$. So we know that $\Pr[r \oplus u_i \in S_x] \geq 1 - 2^{-n}$, which implies that $\Pr[B_r^{i}] \leq 2^{-n}$. Finally, all the $B_r^{i}$ are independent, so we have $\Pr[B_r] = \prod_{i=1}^{k} \Pr[B_r^{i}] \leq 2^{-kn} < 2^{-m}$, where the last inequality follows since $kn > m$ (recall $k = \lceil m/n \rceil + 1$). This implies, again by the union bound, that $\Pr\left[\bigcup_{r} B_r\right] < 2^m \cdot 2^{-m} = 1$, which implies that $\Pr\left[\bigcup_{i=1}^{k} (S_x \oplus u_i) = \{0,1\}^m\right] > 0$, so there exist vectors $u_1, \ldots, u_k$ such that $\bigcup_{i=1}^{k} (S_x \oplus u_i) = \{0,1\}^m$.
Now, given the two claims above, we can decide $L$ using a $\Sigma_2$ machine as follows. For any $x \in \{0,1\}^n$ and $u_1, \ldots, u_k, r \in \{0,1\}^m$, define the machine $N(x, u_1, \ldots, u_k, r)$ which operates as follows: $N$ outputs $1$ if and only if $\bigvee_{i=1}^{k} M(x, r \oplus u_i) = 1$, where $k = \lceil m/n \rceil + 1$. Therefore, $x \in L$ if and only if $\exists u_1, \ldots, u_k \in \{0,1\}^m \; \forall r \in \{0,1\}^m : N(x, u_1, \ldots, u_k, r) = 1$. Thus, $L \in \Sigma_2^p$, so $\mathrm{BPP} \subseteq \Sigma_2^p$.
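A brute-force Python sketch of the $\Sigma_2$ condition above, for toy parameters. Here the accepting set $S_x$ is given explicitly as a set of integers rather than through a machine $M$, and all names (`sigma2_accepts`, `big_S`, `small_S`) are hypothetical; the point is only to illustrate the quantifier structure $\exists u_1, \ldots, u_k \, \forall r$.

```python
from itertools import product

def sigma2_accepts(accepting_set, m, k):
    """Brute-force check of the Sigma_2 condition from the proof:
    do there exist u_1, ..., u_k in {0,1}^m such that for every r in {0,1}^m
    some shift r XOR u_i lands in the accepting set S_x?
    Length-m bit strings are represented as integers 0 .. 2^m - 1, so XOR of
    strings is integer XOR. Runs in exponential time -- only for tiny m."""
    universe = range(2 ** m)
    for us in product(universe, repeat=k):                 # exists u_1, ..., u_k
        if all(any((r ^ u) in accepting_set for u in us)   # for all r, some i
               for r in universe):
            return True
    return False

if __name__ == "__main__":
    m, k = 4, 2
    # "x in L": almost all of {0,1}^m accepts, so k shifts can cover (Claim 2).
    big_S = set(range(2 ** m)) - {3}
    # "x not in L": only a tiny fraction accepts, so k shifts of a small set
    # can never cover all 2^m strings (Claim 1).
    small_S = {0}
    print(sigma2_accepts(big_S, m, k))    # True
    print(sigma2_accepts(small_S, m, k))  # False
```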
Randomized Reductions
We can define a slightly weaker notion of reduction than the polynomial time reductions we’ve seen before. We’ll see randomized reductions now.
Definition. For languages $A, B \subseteq \{0,1\}^*$, we say that $A$ is randomized polynomial-time reducible to $B$, denoted by $A \leq_r B$, if there exists a polynomial-time probabilistic Turing machine $M$ such that for all $x \in \{0,1\}^*$, we have $\Pr[B(M(x)) = A(x)] \geq 2/3$.
Note that randomized reductions are not transitive! That is, if $A \leq_r B$ and $B \leq_r C$, it is not necessarily the case that $A \leq_r C$. However, randomized reductions are still useful. One can show that if $A \leq_r B$ and $B \in \mathrm{BPP}$, then $A \in \mathrm{BPP}$ (run the reduction, then run an error-reduced BPP machine for $B$ on its output; the two small error probabilities add up by a union bound).
NP under Randomized Reductions?
We can define an NP-like class for NP under randomized reductions. This is the class $\mathrm{BP} \cdot \mathrm{NP}$. Note that we can equivalently define $\mathrm{BP} \cdot \mathrm{NP}$ as $\{L : L \leq_r \mathrm{3SAT}\}$.
Generally speaking, complexity theorists believe that $\mathrm{NP} \not\subseteq \mathrm{BPP}$. They also do not believe that $\overline{\mathrm{3SAT}} \leq_r \mathrm{3SAT}$ (i.e., that $\mathrm{coNP} \subseteq \mathrm{BP} \cdot \mathrm{NP}$) because of the following lemma.
Lemma. If $\overline{\mathrm{3SAT}} \leq_r \mathrm{3SAT}$, then $\mathrm{PH} = \Sigma_3^p$.
Randomized Space-bounded Computations
We can also examine space-bounded computations through the lens of probabilistic Turing machines. The most interesting space-bounded randomized computations are those which only use logarithmic space.
Definition. The class $\mathrm{BPL}$ is the set of all languages $L$ such that there exists a strict polynomial-time PTM $M$ using at most $O(\log n)$ additional space for inputs of length at most $n$ such that $\Pr[M(x) = L(x)] \geq 2/3$ for all $x$.
BPL is the log-space equivalent of BPP, and we can similarly define log-space equivalents of RP, coRP, and ZPP, denoted RL, coRL, and ZPL.
Theorem.
- $\mathrm{RL} \subseteq \mathrm{NL} \subseteq \mathrm{P}$.
- $\mathrm{BPL} \subseteq \mathrm{P}$.
Boolean Circuits
We will now turn our attention to Boolean circuits, or just circuits. Circuits are inherently a non-uniform model of computation; that is, a circuit has a fixed input length, rather than operating over infinitely many input lengths. For example, Turing machines are a uniform computation model, where a single Turing machine takes all inputs $x \in \{0,1\}^*$ (infinitely many of them).
Circuits, on the other hand, can only operate over a fixed input length. For example, a circuit $C$ computes some function over inputs $x \in \{0,1\}^n$ for a fixed $n$, and every input length $n$ has its own circuit.
Definition. A Boolean circuit of size $s$ with $n$-bit inputs is a directed acyclic graph on $s$ vertices with the following syntax.
- The $n$ input vertices have in-degree $0$ and unlimited out-degree.
- The remaining non-input nodes, which we call gates, are each labeled AND, OR, or NOT (corresponding to the Boolean functions AND, OR, NOT) and operate as follows.
- AND and OR gates both have in-degree 2 (fan-in 2) and out-degree 1 (fan-out 1).
- NOT gates have in-degree 1 and out-degree 1.
- There is a single output gate with out-degree 0 (note this gate can be an input node/gate, or any internal node).
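A small Python sketch of evaluating a circuit presented as a DAG. The dictionary representation and the fixed output-gate name `"out"` are conventions invented for this example, not part of the definition above.

```python
def eval_circuit(circuit, inputs):
    """Evaluate a Boolean circuit given as a DAG.
    `circuit` maps a gate name to (op, children), where op is one of
    "IN", "AND", "OR", "NOT" and children are the names of the gates feeding
    into it; `inputs` maps each input-gate name to 0 or 1. Values are
    memoized, so each of the s gates is evaluated exactly once."""
    memo = {}

    def value(g):
        if g in memo:
            return memo[g]
        op, children = circuit[g]
        if op == "IN":
            v = inputs[g]
        elif op == "NOT":
            v = 1 - value(children[0])
        elif op == "AND":
            v = value(children[0]) & value(children[1])
        else:  # "OR"
            v = value(children[0]) | value(children[1])
        memo[g] = v
        return v

    return value("out")

# Example: a 6-vertex circuit computing XOR(x1, x2) as (x1 OR x2) AND NOT(x1 AND x2).
xor_circuit = {
    "x1": ("IN", []), "x2": ("IN", []),
    "or": ("OR", ["x1", "x2"]),
    "and": ("AND", ["x1", "x2"]),
    "nand": ("NOT", ["and"]),
    "out": ("AND", ["or", "nand"]),
}
print(eval_circuit(xor_circuit, {"x1": 1, "x2": 0}))  # 1
print(eval_circuit(xor_circuit, {"x1": 1, "x2": 1}))  # 0
```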
Circuit Families
Since circuits are non-uniform, one circuit cannot decide an entire language $L$ (unless $L$ only contains strings of one fixed length). Thus, we need to define circuit families to handle variable-length inputs.
Definition. Let $s : \mathbb{N} \to \mathbb{N}$ be a function. An $s(n)$-sized circuit family is a sequence of circuits $\{C_n\}_{n \in \mathbb{N}}$ such that $C_n$ has size at most $s(n)$ and has $n$ input gates, for all $n$. We say that a language $L$ is in the class $\mathrm{SIZE}(s(n))$ if there exists an $s(n)$-sized circuit family $\{C_n\}_{n \in \mathbb{N}}$ which decides $L$; that is, for all $n$ and all $x \in \{0,1\}^n$, $x \in L$ if and only if $C_n(x) = 1$.
Examples.
- The unary language $\{1^n : n \in \mathbb{N}\}$ lies within $\mathrm{SIZE}(O(n))$; that is, $\{1^n : n \in \mathbb{N}\} \in \mathrm{SIZE}(O(n))$. Moreover, any unary language is in $\mathrm{SIZE}(O(n))$ (see the sketch after these examples).
- For any language $L \subseteq \{0,1\}^*$, we have $L \in \mathrm{SIZE}(O(n \cdot 2^n))$: write the set of length-$n$ strings in $L$ as a DNF formula with at most $2^n$ terms of $n$ literals each, and convert it into a circuit.
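As promised above, a minimal Python sketch of the linear-size circuit family for a unary language, using the same DAG format as the `eval_circuit` example; `unary_circuit` and its parameters are hypothetical names chosen for illustration.

```python
def unary_circuit(n, n_in_language):
    """Build the n-th circuit of an O(n)-size family deciding a unary language.
    If 1^n is in the language, C_n is an AND of all n input bits, built from
    n - 1 fan-in-2 AND gates; otherwise C_n is the constant-0 circuit
    x1 AND NOT x1. Either way C_n has O(n) vertices."""
    gates = {f"x{i}": ("IN", []) for i in range(1, n + 1)}
    if not n_in_language:
        gates["neg"] = ("NOT", ["x1"])
        gates["out"] = ("AND", ["x1", "neg"])   # always evaluates to 0
        return gates
    if n == 1:
        gates["out"] = ("AND", ["x1", "x1"])    # just passes x1 through
        return gates
    prev = "x1"
    for i in range(2, n + 1):
        name = "out" if i == n else f"and{i}"
        gates[name] = ("AND", [prev, f"x{i}"])
        prev = name
    return gates

# C_8 when 1^8 is in the language: 8 inputs + 7 AND gates = 15 vertices.
print(len(unary_circuit(8, n_in_language=True)))   # 15
# C_8 when 1^8 is not in the language: the constant-0 circuit, 10 vertices.
print(len(unary_circuit(8, n_in_language=False)))  # 10
```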