Lecture 10
In-class notes: CS 505 Spring 2025 Lecture 10
TQBF is PSPACE-hard
Last time, we established that TQBF ∈ PSPACE. Now, to show TQBF is PSPACE-complete, we show that it is PSPACE-hard. That is, for every L ∈ PSPACE, we show L ≤_p TQBF.
Let L be any language in PSPACE, and let M be the decider for L. Suppose that for any input x of length n, M uses at most s(n) space, where s(n) = O(n^k) for a constant k. Recall the configuration graph G_{M,x} (see Lecture 9) of a Turing machine M on input x. We know that the configuration graph has at most 2^{O(s(n))} nodes, and each configuration requires O(s(n)) bits. By the facts we established last lecture about the configuration graph G_{M,x}, we know that x ∈ L if and only if there is a path from the starting configuration C_start to an accepting configuration C_acc in the graph G_{M,x}. Moreover, there exists an O(s(n))-sized formula φ_{M,x}(C, C') such that φ_{M,x}(C, C') is true if and only if C and C' are valid configurations of M on x and C' follows from C under the transition function of M.
For our reduction, our goal will be to take x and transform it into a QBF ψ such that ψ is true if and only if x ∈ L. By our above discussion, we will utilize the configuration graph. The idea will be to construct a QBF that is true if and only if there exists a path from the starting configuration C_start to the accepting configuration C_acc in the configuration graph G_{M,x}.
First Attempt. Let G be any directed graph. Suppose we consider two vertices u and v in G such that there is a path from u to v of length at most 2^i for some i ≥ 1. Then, there must exist another vertex w such that there is a path from u to w of length at most 2^{i-1}, and a path from w to v of length at most 2^{i-1}. If this weren't true, that is, if there did not exist such a vertex w, then any path from u to v would need to be of length greater than 2^i.
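Stated symbolically, writing d(u, v) for the length of a shortest path from u to v, the fact we will use is:

    d(u, v) ≤ 2^i  ⟺  there exists a vertex w such that d(u, w) ≤ 2^{i-1} and d(w, v) ≤ 2^{i-1}.

(For the forward direction, take w to be the midpoint of a path of length at most 2^i; for the backward direction, concatenate the two sub-paths.)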
Let's try to build a QBF recursively to take advantage of the above ideas. Let ψ_0(C, C') = φ_{M,x}(C, C'); i.e., the formula for testing adjacent configurations. Our goal will be to construct ψ_m (the final QBF), where m = O(s(n)); that is, the log of the number of nodes in the configuration graph G_{M,x}. In particular, we want ψ_m to have the property that ψ_m(C_start, C_acc) is true if and only if there exists a path from the starting configuration C_start to an accepting configuration C_acc.
There is actually a simple way to define ψ_i using our fact about paths between vertices of length at most 2^i. For any two configurations C and C', define the formula ψ_i(C, C') = ∃C'' [ψ_{i-1}(C, C'') ∧ ψ_{i-1}(C'', C')]. Here, the formula ψ_i(C, C') is true if and only if there is a path from C to C' of length at most 2^i. We build this formula recursively by saying ψ_i(C, C') is true if and only if there exists a vertex/configuration C'' such that there is a path from C to C'' of length at most 2^{i-1} and a path from C'' to C' of length at most 2^{i-1}. Recursively, the formula ψ_{i-1} checks this statement. Finally, when the recursion bottoms out, ψ_0 reduces to checking if two configurations are adjacent.
If we analyze the size of these formulas, notice that |ψ_0| = O(s(n)) by construction. Then, |ψ_i| = 2|ψ_{i-1}| + O(s(n)). Recursively, we have |ψ_i| = 2^i · O(s(n)). This gives our final formula size at least |ψ_m| = 2^m · O(s(n)) = 2^{O(s(n))}. So the formula is too big! It requires exponential space just to write down.
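Explicitly, unrolling the recurrence:

    |ψ_m| = 2|ψ_{m-1}| + O(s(n)) = 4|ψ_{m-2}| + 3 · O(s(n)) = ⋯ = 2^m |ψ_0| + (2^m - 1) · O(s(n)) = 2^{O(s(n))} · O(s(n)).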
Insight: Define an Equivalent QBF. Our final formula had exponential size because we were recursively checking two sub-formulas. This doubled the formula length at each recursion. However, we can take advantage of Boolean logic to define a formula that is equivalent to ψ_i but only requires a single recursive call to the formula ψ_{i-1}. We define the formula first and then explain what each component is doing:

    ψ_i(C, C') = ∃C'' ∀D ∀D' [((D = C ∧ D' = C'') ∨ (D = C'' ∧ D' = C')) → ψ_{i-1}(D, D')].
First, C'' is still the target midpoint vertex we want to check. That is, we still want to check if there is a path from C to C'' of length at most 2^{i-1} and a path from C'' to C' of length at most 2^{i-1}. Now, instead of calling ψ_{i-1} twice and taking the AND of the results, we introduce the universally quantified pair of configurations D and D'. What are these doing exactly?
Consider the expression (D = C ∧ D' = C'') ∨ (D = C'' ∧ D' = C') appearing in the square brackets (the antecedent of the implication). This expression evaluates to true if and only if D = C and D' = C'', or D = C'' and D' = C'. In the first case, when D = C and D' = C'', we check if ψ_{i-1}(C, C'') is true. Great! This is exactly one of the checks we want to perform. Then, in the second case, when D = C'' and D' = C', we again perform the check we want: ψ_{i-1}(C'', C').
Now, what about all other values of D and D'? Well, recall that for the Boolean implication "a → b," we know that False → b always evaluates to True, no matter what b is. So whenever (D, D') is not one of the target pairs of vertices, the implication trivially evaluates to true. This is fine since we are not checking the distance between these arbitrary configurations. However, for the pairs we explicitly want to check, the implication will be true if and only if ψ_{i-1}(D, D') evaluates to true. This gives us the formula we want!
To wrap up the proof, we let ψ = ψ_m(C_start, C_acc), where C_acc is an accepting configuration (we may assume without loss of generality that there is a single accepting configuration) and C_start is the unique starting configuration. Then, ψ is a QBF which evaluates to True if and only if there is a path from C_start to C_acc of length at most 2^m; since 2^m is the number of nodes in the configuration graph, this captures paths of any length. This happens if and only if x ∈ L.
Finally, we analyze the size of the formula ψ. Notice that |ψ_i| = |ψ_{i-1}| + O(s(n)). Since |ψ_0| = O(s(n)) and m = O(s(n)), we have that |ψ| = |ψ_m| = O(s(n)^2), which is polynomial in n since s(n) is a polynomial in n.
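Unrolling this recurrence step by step:

    |ψ_m| = |ψ_{m-1}| + O(s(n)) = |ψ_{m-2}| + 2 · O(s(n)) = ⋯ = |ψ_0| + m · O(s(n)) = O(s(n)) + O(s(n)) · O(s(n)) = O(s(n)^2).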
Notice that in the proof we actually didn't use the fact that M was a deterministic Turing machine. In fact, the above proof holds even if M is a non-deterministic Turing machine. Thus, we have actually shown that TQBF is NPSPACE-hard. And since TQBF is also a language in NPSPACE (as PSPACE ⊆ NPSPACE), we actually showed that TQBF is NPSPACE-complete. In particular, every language in NPSPACE polynomial-time reduces to TQBF ∈ PSPACE, so NPSPACE ⊆ PSPACE, and the reverse containment is immediate. This shows the two classes are equal.
Theorem. PSPACE = NPSPACE.
This is a somewhat surprising result since we do not believe the same is true for polynomial-time computations; i.e., we do not believe that P equals NP.
Savitch's Theorem
We can actually show something more fine-grained about deterministic space versus non-deterministic space. The following result would equally show that PSPACE = NPSPACE.
Theorem (Savitch's Theorem). For all space-constructible functions s(n) ≥ log n, we have NSPACE(s(n)) ⊆ SPACE(s(n)^2).
Proof. We will again take advantage of the configuration graph of a Turing machine that we have been using. Let L ∈ NSPACE(s(n)) with corresponding NTM N using at most s(n) additional space on its worktapes. Let G_{N,x} be its corresponding configuration graph with at most 2^{O(s(n))} nodes for any x of length n. Recall also that x ∈ L if and only if there is a path in G_{N,x} from the start configuration C_start to some accepting configuration.
Our goal will be to construct a deterministic Turing machine M which decides L using O(s(n)^2) space. The machine M will operate as follows.
M(x):
- Simulate N by traversing the graph G_{N,x}.
- The traversal will utilize a recursive procedure REACH(u, v, i) which returns True if and only if there is a path from node u to node v of length at most 2^i (a Python sketch of this procedure is given after the proof).
- The recursion utilizes the same fact we had in the proof that TQBF is PSPACE-complete. In particular, REACH(u, v, i) = True if and only if there exists a node w such that REACH(u, w, i - 1) = True and REACH(w, v, i - 1) = True.
- Suppose G_{N,x} has at most 2^m nodes for m = O(s(n)).
  For all accepting configurations C_acc, run REACH(C_start, C_acc, m).
  If any of these calls outputs True, output 1.
  If none of the calls outputs True, output 0.
- To run this procedure, each recursive call REACH(u, v, i) simply enumerates over all nodes w (this requires O(s(n)) bits) and checks if REACH(u, w, i - 1) and REACH(w, v, i - 1).
- REACH(u, v, 0) = True if and only if u = v or the edge (u, v) is in G_{N,x}.
Notice that the recursive procedure bottoms out after m = O(s(n)) levels of recursion. During each call, we store O(s(n)) bits for the current vertex being enumerated over. The machine M is effectively performing a depth-first traversal of this recursion, reusing the space of each recursive call once it returns. At the bottom of the recursion, O(s(n)) space is used. Therefore, M uses at most O(s(n)) · O(s(n)) = O(s(n)^2) space.
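As a concrete illustration, here is a minimal Python sketch of the recursive procedure (the function name reach, the integer node labels, and the edge-test callback are our own illustrative choices; the real machine works over configurations of N encoded in O(s(n)) bits):

    def reach(num_nodes, is_edge, u, v, i):
        # True iff there is a path from u to v of length at most 2**i.
        if i == 0:
            return u == v or is_edge(u, v)
        # Enumerate all candidate midpoints w. Only the current (u, v, i, w)
        # is stored at this level, so the total space is
        # (recursion depth) * (bits per frame) = O(s(n)) * O(s(n)).
        for w in range(num_nodes):
            if reach(num_nodes, is_edge, u, w, i - 1) and reach(num_nodes, is_edge, w, v, i - 1):
                return True
        return False

    # Toy example: the path graph 0 -> 1 -> 2 -> 3.
    edges = {(0, 1), (1, 2), (2, 3)}
    print(reach(4, lambda a, b: (a, b) in edges, 0, 3, 2))  # True, since 3 <= 2**2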
Alternate Proof of PSPACE = NPSPACE
We can use Savitch's Theorem as an alternate proof of this result. Recall that PSPACE = ⋃_{k ≥ 1} SPACE(n^k) and NPSPACE = ⋃_{k ≥ 1} NSPACE(n^k). Notice that for any space-constructible s(n), we have SPACE(s(n)) ⊆ NSPACE(s(n)) since all deterministic computations are also non-deterministic. By Savitch's Theorem, we have NSPACE(s(n)) ⊆ SPACE(s(n)^2). Finally, all polynomial functions are space constructible. So this implies NSPACE(n^k) ⊆ SPACE(n^{2k}) ⊆ PSPACE for all k ≥ 1. This shows PSPACE = NPSPACE.
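In symbols, the whole argument is the chain of containments:

    PSPACE ⊆ NPSPACE = ⋃_{k ≥ 1} NSPACE(n^k) ⊆ ⋃_{k ≥ 1} SPACE(n^{2k}) ⊆ PSPACE.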
NL-Completeness
Recall that NL = NSPACE(log n) is the set of all languages decidable on an NTM using at most O(log n) additional space for inputs of length n. We showed last lecture that PATH ∈ NL. Today, we'll see that PATH is the essence of NL. That is, PATH is NL-complete, where PATH = {(G, s, t) : G is a directed graph containing a path from s to t}. Note however that we do not know if PATH ∈ L, though we believe this is not the case, as otherwise L = NL.
Before we show that PATH is NL-complete, we need a new notion of reducibility that is not polynomial-time. Let's see why this is the case. Suppose A, B ∈ NL, where B is neither empty nor all of {0,1}*. Then, A ≤_p B. That is, any two (non-trivial) languages in NL are polynomial-time reducible to each other. Intuitively, this is because NL ⊆ P, so the reduction trivially has more power than problems in L or NL, since it is only limited to polynomial time and is not restricted in space. So, because we can decide A using an NTM with at most O(log n) space, the reduction can simply compute the configuration graph of the NTM deciding A on input x, decide if x ∈ A, and then produce a fixed yes-instance or no-instance of B accordingly, all in polynomial time.
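For intuition, here is a hedged sketch of this degenerate reduction (the names decide_A, y_yes, and y_no are illustrative; decide_A stands for any polynomial-time decider for A, which exists since NL ⊆ P, and y_yes ∈ B and y_no ∉ B are any two fixed instances):

    def degenerate_reduction(x, decide_A, y_yes, y_no):
        # A polynomial-time "reduction" that simply solves A outright
        # (e.g., by building the configuration graph of the NL machine for A
        # and running BFS from the start configuration), then outputs a
        # canned instance of B with the right answer.
        return y_yes if decide_A(x) else y_no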
Therefore, we need to restrict the power of the reductions for completeness in L and NL. This leads us to logspace reductions.
Definition. Let f : {0,1}* → {0,1}*. We say that f is (implicitly) logspace computable if there exists a constant c such that |f(x)| ≤ |x|^c for all x, and the following languages are in L: L_f = {(x, i) : the i-th bit of f(x) is 1} and L'_f = {(x, i) : i ≤ |f(x)|}.
We can now define logspace reducibility.
Definition. Let A and B be any languages. We say that A is logspace reducible to B, denoted A ≤_L B, if there exists an implicitly logspace computable function f such that x ∈ A if and only if f(x) ∈ B.
This finally lets us define L-completeness and NL-completeness.
Definition. We say that a language B is NL-complete (respectively, L-complete) if
- B ∈ NL (resp., B ∈ L); and
- for all A ∈ NL (resp., A ∈ L), we have A ≤_L B.
As with polynomial-time reductions, we have a “transitive” property of logspace reductions.
Theorem.
- (1) If A ≤_L B and B ≤_L C, then A ≤_L C.
- (2) If A ≤_L B and B ∈ L, then A ∈ L.
Part (2) of the above theorem tells us that if B is NL-complete and B ∈ L, then L = NL.
PATH is NL-complete
We can finally prove that PATH is the "essence" of NL.
Theorem. PATH is NL-complete.
Proof. We have already shown that PATH ∈ NL. We now have to show that it is NL-hard. That is, we show that A ≤_L PATH for any A ∈ NL. Our good friend the configuration graph will help us yet again.
Let A ∈ NL and let N be the non-deterministic logspace decider for A. This means that for any x, we have x ∈ A if and only if N accepts x using at most O(log |x|) space.
Our logspace reduction will simply construct the configuration graph G_{N,x}. That is, on input x, the reduction will output the tuple (G_{N,x}, C_start, C_acc), where C_acc is an accepting configuration (we may assume without loss of generality that N has a unique accepting configuration, e.g., by having it clear its worktape before accepting). Note that this output has size polynomial in |x| for any x.
Recall that, by definition of our configuration graph, we know that x ∈ A if and only if there is a path from C_start to C_acc in G_{N,x}. This is precisely a problem instance of PATH. Now, we can represent G_{N,x} as an adjacency matrix of size 2^{O(log n)} × 2^{O(log n)} = poly(n) × poly(n). Entry (C, C') is 1 if and only if there is an edge from C to C' in G_{N,x}.
Now, we can compute each such entry in logspace. Given (C, C'), there exists a deterministic machine to check if C' follows from C according to N's transition function. We can do this in space O(|C| + |C'|). By our previous discussions on the configuration graph, these configurations need at most O(log n) space to represent. Thus, we can do this check in O(log n) space. Therefore, the reduction is implicitly logspace computable. So this is a valid logspace reduction.
This completes the proof, as we have encoded the question of whether x ∈ A into the PATH problem on the instance (G_{N,x}, C_start, C_acc).
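To make the "implicitly computable" aspect concrete, here is a hedged Python sketch of how one entry of the adjacency matrix might be produced on demand, rather than writing the whole graph down at once. The helpers decode_configuration and follows_in_one_step are hypothetical placeholders for N's configuration encoding and transition relation; the point is only that each output bit depends on two O(log n)-bit configurations:

    def adjacency_bit(x, row, col, decode_configuration, follows_in_one_step):
        # Entry (row, col) of the adjacency matrix of the configuration graph
        # of N on input x. Each configuration takes O(log |x|) bits, so this
        # check fits in logarithmic workspace.
        C = decode_configuration(x, row)
        C_prime = decode_configuration(x, col)
        if C is None or C_prime is None:
            return 0  # index does not encode a valid configuration
        return 1 if follows_in_one_step(x, C, C_prime) else 0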