
Lecture 15

In-class notes: CS 505 Spring 2025 Lecture 15

Zero-Sided Error

Last time, we discussed BPP, RP, and coRP, which are three (worst-case) probabilistic complexity classes.

  • BPP is the set of all languages $L$ decidable in strict polynomial time by a PTM $M$ such that $\Pr[M(x) = L(x)] \geq 2/3$ for all $x$. BPP is a two-sided error class.
  • RP is the set of all languages $L$ decidable in (again) strict polynomial time by a PTM $M$ such that $x \in L$ implies $\Pr[M(x) = 1] \geq 2/3$ and $x \notin L$ implies $\Pr[M(x) = 1] = 0$. RP is a one-sided error class, where it never has false positives (i.e., never outputs $1$ when the answer is $0$).
  • coRP is the set of all languages $L$ decidable in (again) strict polynomial time by a PTM $M$ such that $x \in L$ implies $\Pr[M(x) = 1] = 1$ and $x \notin L$ implies $\Pr[M(x) = 0] \geq 2/3$. coRP is a one-sided error class, where it never has false negatives (i.e., never outputs $0$ when the answer is $1$).
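
For quick reference, the three error regimes can be summarized side by side. The $2/3$ thresholds follow the convention used above; any constant bounded away from $1/2$ yields the same classes after standard amplification.

```latex
% Error regimes (rows: class, columns: x in L / x not in L)
\begin{array}{lcc}
              & x \in L             & x \notin L          \\ \hline
\mathsf{BPP}  & \Pr[M(x)=1] \ge 2/3 & \Pr[M(x)=0] \ge 2/3 \\
\mathsf{RP}   & \Pr[M(x)=1] \ge 2/3 & \Pr[M(x)=1] = 0     \\
\mathsf{coRP} & \Pr[M(x)=1] = 1     & \Pr[M(x)=0] \ge 2/3
\end{array}
```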

Now we turn to zero-sided error. Intuitively, zero-sided means that a PTM always outputs correctly; that is, $\Pr[M(x) = L(x)] = 1$ for all $x$. You would be correct in thinking that if this happens in strict polynomial time, then this class would just be P. So, in order to not have the same class, the probabilistic class of languages decidable with zero-sided error is relaxed to have PTMs that run in expected polynomial time. This is the class ZPP.

Definition. The class $\mathsf{ZPTIME}(T(n))$ is the set of all languages $L$ decidable on a PTM $M$ running in expected time $O(T(n))$ such that $\Pr[M(x) = L(x)] = 1$ for all $x$. In particular, if $T_{M,x}$ is the random variable for the runtime of $M$ on input $x$, then $L \in \mathsf{ZPTIME}(T(n))$ if and only if $\mathbb{E}[T_{M,x}] = O(T(|x|))$ and $\Pr[M(x) = L(x)] = 1$ for all $x \in \{0,1\}^*$. The class ZPP is the set of all languages decidable in expected polynomial time with zero-sided error; i.e., $\mathsf{ZPP} = \bigcup_{c \geq 1} \mathsf{ZPTIME}(n^c)$.
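
As a concrete (if toy) illustration of zero-sided error with random running time, here is a minimal Python sketch of a Las Vegas procedure; it is not from the lecture, and the function name and input promise are assumptions made for illustration.

```python
import random

def find_one_index(bits):
    """Toy Las Vegas search: return an index i with bits[i] == 1.

    Promise (assumed for this illustration): at least half of the entries
    of `bits` are 1.  Whenever the function returns, its answer is correct
    (zero-sided error).  Each random probe hits a 1 with probability >= 1/2,
    so the expected number of iterations is at most 2, even though the
    worst-case number of iterations is unbounded -- only the running time
    is random, never the answer.
    """
    n = len(bits)
    while True:
        i = random.randrange(n)
        if bits[i] == 1:
            return i

# Example usage (input satisfying the promise):
# find_one_index([0, 1, 0, 1, 1, 0, 1, 1])
```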

Note

ZPP vs RP and coRP

Since we have deviated from strict polynomial time to expected polynomial time, one may wonder how ZPP relates to RP and coRP. The following theorem exactly captures this relationship.

Theorem. $\mathsf{ZPP} = \mathsf{RP} \cap \mathsf{coRP}$.

Proof. We show both directions. First, we show that $\mathsf{RP} \cap \mathsf{coRP} \subseteq \mathsf{ZPP}$. Let $L \in \mathsf{RP} \cap \mathsf{coRP}$. Let $M_1$ be the RP machine for $L$ and $M_2$ be the coRP machine for $L$. In particular, the following hold.

  • If $x \in L$, then $\Pr[M_1(x) = 1] \geq 2/3$ and $\Pr[M_2(x) = 1] = 1$.
  • If $x \notin L$, then $\Pr[M_1(x) = 1] = 0$ and $\Pr[M_2(x) = 0] \geq 2/3$.

We construct a new PTM $N$ which will decide $L$ with zero-sided error in expected polynomial time. $N$ on input $x$ does the following.

    • While true:
      • Run $M_1(x)$ and $M_2(x)$ to completion.
      • If they both output $1$ (accept), then output $1$ (accept).
      • If they both output $0$ (reject), then output $0$ (reject).
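
To make the loop concrete, here is a minimal Python sketch of $N$, assuming hypothetical callables `M1` and `M2` that simulate the two machines and draw fresh, independent randomness on every call (the names and interface are illustrative assumptions, not part of the lecture).

```python
def N(x, M1, M2):
    """Zero-sided-error decider built from an RP machine M1 and a coRP machine M2.

    M1(x) and M2(x) are assumed to return 0 or 1 using fresh independent
    randomness on each call.  N only answers when the two machines agree,
    so any answer it gives is correct: M1 never accepts when x is not in L,
    and M2 never rejects when x is in L.  Disagreement simply triggers
    another independent iteration, so only the running time is random.
    """
    while True:
        a, b = M1(x), M2(x)   # run both machines to completion
        if a == 1 and b == 1:
            return 1          # accept
        if a == 0 and b == 0:
            return 0          # reject
        # otherwise the machines disagree; try again with fresh randomness
```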

First, we show that if $N$ halts, then $N(x) = L(x)$. We also need that $N$ eventually halts; notice that $N$ will always halt (with probability $1$). This can be seen as follows.

  • For $x \in L$:
    • $\Pr[M_2(x) = 1] = 1$ and $\Pr[M_1(x) = 1] \geq 2/3$. In particular, we will never have $M_1(x) = M_2(x) = 0$ in this case. $N$ will run until $M_1(x) = M_2(x) = 1$, which happens with probability at least $2/3$ in each iteration. In this case $N(x) = 1 = L(x)$. Since this probability is not zero, there is a series of random choices that $N$ can make that will make it output $1$; moreover, since each iteration succeeds independently with probability at least $2/3$, $N$ halts and outputs $1$ with probability $1$.
  • If $x \notin L$:
    • $\Pr[M_1(x) = 1] = 0$ and $\Pr[M_2(x) = 0] \geq 2/3$. So we have the reverse of the above: $M_1(x) = M_2(x) = 1$ will never happen. So $N$ can only output $0$ in this case, and it will do so eventually since $\Pr[M_1(x) = M_2(x) = 0] \geq 2/3 > 0$ in this case.

Now, we argue that $N$ runs in expected polynomial time. Let $q(n)$ be a polynomial such that $M_1$ and $M_2$ both run in time at most $q(n)$ for inputs of length $n$. We analyze the expected running time of $N$ on input $x$. Let $T_N(x)$ denote the random variable for the runtime of $N$ on input $x$. First, notice that for every iteration of the loop, $N$ runs in time at most $2q(n)$ to run both $M_1$ and $M_2$ to completion, assuming that $|x| = n$. So if we are in the $k$th iteration of the loop, at the end of the loop, $N$ will have run for $2k \cdot q(n)$ steps.1

So to analyze the expected runtime of $N$, we have

$$\mathbb{E}[T_N(x)] \leq \sum_{k=1}^{\infty} \Pr[N \text{ halts in iteration } k] \cdot 2k \cdot q(n).$$

Now, we analyze $\Pr[N \text{ halts in iteration } k]$. This probability is identical to the probability that $N$ halts after the $k$th loop. We analyze this probability, starting with $k = 1$.

  • $k = 1$. In this case, $N$ halts after one execution of $M_1$ and $M_2$. This means after one execution, they are in agreement. For both $x \in L$ and $x \notin L$, this happens with probability at least $2/3$. Without loss of generality, through the remainder of the proof we assume that the probability this happens is exactly $2/3$ (a larger agreement probability only makes $N$ halt sooner).

  • $k = 2$. In this case, the first execution of the loop resulted in $M_1(x) \neq M_2(x)$, and the second execution has $M_1(x) = M_2(x)$. Implicitly, we have assumed that $N$ does not reuse randomness in subsequent executions of the loop, so each run of $M_1$ and $M_2$ is independent of previous runs. Now, the probability that $M_1(x) \neq M_2(x)$ in any single iteration is (at most) $1/3$. So in this case, the probability that $N$ halts after $2$ loops is equal to $\frac{1}{3} \cdot \frac{2}{3}$.

Extending the above analysis to any $k \geq 1$ gives us

$$\Pr[N \text{ halts in iteration } k] = \left(\frac{1}{3}\right)^{k-1} \cdot \frac{2}{3}.$$

This then tells us

$$\mathbb{E}[T_N(x)] \leq \sum_{k=1}^{\infty} 2k \, q(n) \cdot \left(\frac{1}{3}\right)^{k-1} \cdot \frac{2}{3} = \frac{4}{3}\, q(n) \sum_{k=1}^{\infty} k \left(\frac{1}{3}\right)^{k-1} = 3q(n),$$

where the last equality can be shown using infinite sum tricks. So, we have shown that $N$ runs in expected polynomial time $O(q(n))$. Thus, $L \in \mathsf{ZPP}$.
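
For completeness, the "infinite sum trick" is just differentiating the geometric series; the following supplementary derivation is not spelled out in the original notes.

```latex
\begin{align*}
\sum_{k=0}^{\infty} r^k &= \frac{1}{1-r} \qquad (|r| < 1)\\[2pt]
\Longrightarrow\quad \sum_{k=1}^{\infty} k r^{k-1}
  &= \frac{d}{dr}\left(\frac{1}{1-r}\right) = \frac{1}{(1-r)^2}\\[2pt]
\Longrightarrow\quad \sum_{k=1}^{\infty} k \left(\tfrac{1}{3}\right)^{k-1}
  &= \frac{1}{\left(1-\tfrac{1}{3}\right)^2} = \frac{9}{4},
\qquad\text{so}\qquad
\mathbb{E}[T_N(x)] \le \frac{4}{3}\,q(n)\cdot\frac{9}{4} = 3q(n).
\end{align*}
```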

Now for the other (easier) direction, we show that $\mathsf{ZPP} \subseteq \mathsf{RP} \cap \mathsf{coRP}$. For this, we will need a result known as Markov’s Inequality.

Markov’s Inequality states that if you have a non-negative random variable $X$, then for any $a > 0$, it holds that

$$\Pr[X \geq a] \leq \frac{\mathbb{E}[X]}{a}.$$

We’ll use this inequality to show $\mathsf{ZPP} \subseteq \mathsf{RP} \cap \mathsf{coRP}$.
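
As a supplementary aside (not in the original notes), Markov’s Inequality has a one-line proof using the pointwise bound $X \geq a \cdot \mathbf{1}[X \geq a]$, which holds because $X$ is non-negative:

```latex
\mathbb{E}[X] \;\ge\; \mathbb{E}\big[\,a \cdot \mathbf{1}[X \ge a]\,\big] \;=\; a \cdot \Pr[X \ge a]
\qquad\Longrightarrow\qquad
\Pr[X \ge a] \;\le\; \frac{\mathbb{E}[X]}{a}.
```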

Let $M$ be the ZPP PTM that decides $L$. Suppose that $M$ decides $L$ in expected polynomial time $q(n)$ for inputs of length $n$. In particular, $M(x) = 1$ if and only if $x \in L$, and $M$ runs in expected time at most $q(|x|)$ for any $x$.

We construct a new PTM $M'$ which, on input $x$, does the following.

    • Compute $t = 3 \cdot q(|x|)$.
    • Run $M(x)$ for at most $t$ steps.
      • If $M$ halts within $t$ steps, output whatever $M$ outputs.
    • Output $0$ (reject).
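
A minimal Python sketch of $M'$, assuming a hypothetical helper `run_with_budget(M, x, t)` that simulates $M$ on $x$ for at most $t$ steps and returns $M$'s output, or `None` if $M$ has not halted within the budget; the helper's name and interface are assumptions for illustration.

```python
def M_prime(x, M, q, run_with_budget):
    """One-sided-error (RP-style) decider built from a ZPP machine M.

    q(n) bounds M's *expected* running time on inputs of length n.  We cut
    the simulation off at t = 3*q(|x|) steps; by Markov's inequality the
    budget is exceeded with probability at most 1/3.  Whenever M does halt,
    its answer is correct (zero-sided error), so M_prime never accepts an
    x outside L, and accepts an x in L with probability at least 2/3.
    """
    t = 3 * q(len(x))                  # time budget from Markov's inequality
    out = run_with_budget(M, x, t)     # None if M did not halt within t steps
    if out is not None:
        return out                     # M halted, so its output is correct
    return 0                           # timed out: conservatively reject
```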

We show that $M'$ is the RP machine deciding $L$. First, we show that if $x \notin L$, then $\Pr[M'(x) = 1] = 0$. Notice that if $M$ halts within $3q(|x|)$ steps, then $M(x) = 0$ by definition of ZPP. Then, if $M$ does not halt within $3q(|x|)$ steps, the machine outputs $0$. So in either case, $M'(x) = 0$ and thus $\Pr[M'(x) = 1] = 0$.

Now assume that $x \in L$. We need to show that $\Pr[M'(x) = 1] \geq 2/3$. By definition of $M'$, $M'(x) = 1$ if and only if $M$ halts within $3q(|x|)$ steps. In particular, we know that $M(x) = 1$ since $x \in L$, so we must show that $M$ halts within $3q(|x|)$ steps with probability at least $2/3$.

Let $T_M(x)$ denote the random variable for the runtime of $M$ on input $x$. By definition, $\mathbb{E}[T_M(x)] \leq q(|x|)$. Applying Markov’s inequality, set $a = 3q(|x|)$. Then, we have

$$\Pr[T_M(x) \geq 3q(|x|)] \leq \frac{\mathbb{E}[T_M(x)]}{3q(|x|)} \leq \frac{q(|x|)}{3q(|x|)} = \frac{1}{3}.$$

This tells us $\Pr[T_M(x) < 3q(|x|)] \geq 2/3$, showing that $\Pr[M'(x) = 1] \geq 2/3$ and hence $L \in \mathsf{RP}$.

Now, to show that $L \in \mathsf{coRP}$, we construct a PTM $M''$ identically to the PTM $M'$, except the machine outputs $1$ (accept) if $M$ does not halt within $3q(|x|)$ steps. The analysis is identical to the above analysis. Therefore, $L \in \mathsf{RP} \cap \mathsf{coRP}$ and thus $\mathsf{ZPP} \subseteq \mathsf{RP} \cap \mathsf{coRP}$.


  1. Technically speaking, to run $M_1$ and $M_2$, $N$ needs additional time for universal simulation, but we can simply upper bound this by another polynomial and the analysis remains the same.