Abstracts - 2007
Verifying and Decoding in Constant Depth
Dan Gutfreund, Alexander Healy, Tali Kaufman & Guy N. Rothblum
We introduce a general approach for improving the efficiency of a computationally bounded receiver interacting with a powerful and possibly malicious sender. The key idea is to delegate some of the receiver's computation to the (potentially malicious) sender, using tools from cryptography that allow this to be done safely even when the sender is malicious.
A classic example of such a sender-receiver setting is interactive proof systems. By taking the sender to be a (potentially malicious) prover and the receiver to be a verifier, we show that (p-prover) interactive proofs with k rounds of interaction are equivalent to (p-prover) interactive proofs with k+O(1) rounds, where the verifier is in NC0. That is, each round of the verifier's computation can be implemented in constant parallel time. As a corollary, we obtain interactive proof systems, with (optimally) constant soundness, for languages in AM and NEXP, where the verifier runs in constant parallel-time.
Another, less immediate, sender-receiver setting arises when considering error correcting codes. By taking the sender to be a (potentially corrupted) codeword and the receiver to be a decoder, we obtain explicit families of codes that are locally (list-)decodable by constant-depth circuits of size polylogarithmic in the length of the codeword. Using the tight connection between locally list-decodable codes and average-case complexity, we obtain a new, more efficient, worst-case to average-case reduction for languages in EXP.
In this work we consider settings in which a computationally powerful but possibly malicious sender interacts with a weak and honest receiver. We explore the approach of improving the receiver's efficiency by delegating most of its computational work to the sender. It is not immediately apparent how to do this, as the sender may be malicious, and thus we introduce tools for reliably delegating computation to a potentially malicious party. We apply our approach to the settings of proof verification and error correcting codes.
Proof Verification. Proof verification is a central concept in theoretical computer science. In this setting, a computationally powerful and possibly malicious party, the prover, interacts with a weak and honest party, the verifier. The prover makes a claim (e.g., that a given graph is 3-colorable), and tries to convince the verifier that this claim is valid. The goal is to design a (possibly randomized and/or interactive) proof system such that the verifier is only convinced when the prover's claim is in fact correct, even when the prover behaves maliciously. Proof verification is perhaps the most natural example in complexity theory of a weak receiver interacting with a potentially malicious powerful sender. Here, the prover plays the part of the sender and the verifier plays the role of the weak receiver.
Seminal works in complexity theory have uncovered the striking power of interactive and randomized proof systems, as introduced by Babai and by Goldwasser, Micali and Rackoff. The works of Shamir, and of Babai, Fortnow and Lund, give efficient proof systems for every language that could conceivably have them. The works of Babai and Moran, of Goldwasser and Sipser, of Feige and Lovasz, and the PCP literature, show that interactive proof systems remain very powerful even when various limitations are placed on the verifier or on the protocol.
In this work we continue this line of research, exploring the power of proof systems with verifiers that have very limited computational resources. We show that proof systems remain powerful even when the verifier is severely weakened computationally to run in NC0 (constant parallel time).
Error-Correcting Codes. Error correcting codes are highly useful combinatorial objects. In this setting, a given message is encoded with some redundancy to obtain a codeword (longer than the original message) with the property that the original message can be recovered from any word that is close (in Hamming distance) to the codeword. The process of recovering the message from a (possibly) corrupted codeword is called decoding. Several variations on the standard notion of decoding have been considered in the literature. In some cases the codeword is corrupted in so many locations that recovering the original message becomes impossible. However, it might still be possible to recover a (short) list of candidate messages that is guaranteed to contain the original message -- this is known as list decoding. One can also consider the task of recovering only a single bit of the message (in some given location) by accessing only a small number of locations in the codeword. Codes with such local decoding properties are called locally decodable codes and locally list-decodable codes. See the survey of Trevisan for a more detailed discussion of these notions.
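To make local decoding concrete, here is a toy sketch (illustrative only, not a construction from this work) of the classic two-query local decoder for the Hadamard code: the i-th message bit satisfies m_i = <m, r> XOR <m, r XOR e_i> for every index r, so it can be recovered from just two positions of the received word, with repetition and a majority vote to tolerate corruptions. All function names below are illustrative.

```python
import random

def hadamard_encode(msg_bits):
    """Hadamard code: position a holds the inner product <msg, a> mod 2."""
    n = len(msg_bits)
    return [sum(m & ((a >> i) & 1) for i, m in enumerate(msg_bits)) % 2
            for a in range(2 ** n)]

def local_decode_bit(word, n, i, trials=45):
    """Recover message bit i with 2 queries per trial: on uncorrupted
    positions, word[r] XOR word[r XOR e_i] equals m_i for every r."""
    votes = sum(word[r] ^ word[r ^ (1 << i)]
                for r in (random.randrange(2 ** n) for _ in range(trials)))
    return int(votes > trials / 2)
```

If a delta fraction of positions is corrupted, each trial errs with probability at most 2*delta, so the majority vote succeeds with high probability whenever delta < 1/4. (The Hadamard code has exponentially small rate; the codes constructed in this work achieve far better parameters.)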
In this work we focus on the efficiency of the decoder in locally decodable and locally list-decodable codes. We show explicit families of codes that are locally decodable and locally list-decodable by constant-depth decoders (circuits) of size polylogarithmic in the length of the codeword.
Proof Verification. We consider proof systems with extremely weak verifiers: i.e., verifiers in NC0 (or constant parallel time). By that we mean that the verifier's strategy at each interaction round can be computed in NC0 when given access to the input, the randomness, and the messages exchanged in the previous rounds. We show that such proof systems are surprisingly powerful: essentially, anything that is provable in k communication rounds with a polynomial-time verifier is also provable in k + O(1) communication rounds with an NC0 verifier. In particular, we obtain the following characterizations:
1. A language is in AM (the class of languages that have a proof system with a constant number of communication rounds) if and only if it has a single-prover, two-round, constant-soundness interactive proof that can be verified in constant parallel time. In particular, every language in NP has such a proof system.
2. A language is in NEXP if and only if it has a two-prover, five-round, constant-soundness interactive proof that can be verified in constant parallel time.
We complement our positive results with two negative results. First, we show that constant-round proof systems with an NC0 verifier cannot have sub-constant soundness (unless the language itself is in NC0). Second, we show that there is no public-coin proof system with an NC0 verifier (again, unless the language itself is in NC0). This result sheds light on private-coin versus public-coin proof systems, and in particular on our protocols (which, naturally, use private coins). Taken together, these results show that both interaction and private randomness are provably essential for non-trivial NC0 verification.
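For intuition on why private coins matter, consider the classic private-coin protocol for graph non-isomorphism (a standard textbook example, not a protocol from this work): the verifier secretly picks one of the two graphs, sends a random relabeling of it, and accepts iff the prover identifies which graph was chosen. A powerful prover can always do so when the graphs are non-isomorphic, but can only guess the hidden coin when they are isomorphic. A toy simulation, with illustrative names:

```python
import itertools
import random

def permute(graph, perm):
    """Relabel the vertices of a graph, given as a set of frozenset edges."""
    return {frozenset({perm[u], perm[v]}) for u, v in map(sorted, graph)}

def isomorphic(g0, g1, n):
    """Brute-force isomorphism test; fine for tiny n."""
    return any(permute(g0, p) == g1 for p in itertools.permutations(range(n)))

def round_accepts(g0, g1, n, prover):
    b = random.randrange(2)                 # the verifier's PRIVATE coin
    perm = list(range(n))
    random.shuffle(perm)
    challenge = permute((g0, g1)[b], perm)  # random relabeling of graph b
    return prover(g0, g1, n, challenge) == b

def honest_prover(g0, g1, n, challenge):
    # A computationally unbounded prover recovers b -- but only if g0 and g1
    # are non-isomorphic; otherwise the challenge hides b from any prover.
    return 0 if isomorphic(g0, challenge, n) else 1
```

Revealing the coin b (i.e., making the protocol public-coin) would destroy soundness, since a cheating prover could simply echo b; this is the phenomenon behind the public-coin limitation above.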
Error-Correcting Codes. The application of our methodology to the setting of error-correcting codes is a novel approach to the well-studied problem of efficient decoding. The sender embeds information in the codeword that helps speed up the decoder's computation. In our new codes, the decoder uses the received word not only for reconstructing information about the original message, but also as a computational resource. This approach allows us to transform locally (list-)decodable codes with NC1 decoders into codes that have AC0 decoders (with similar or slightly worse parameters). We then construct explicit locally-decodable and locally list-decodable codes with NC1 decoders; applying our general transformations to these explicit codes, we obtain the following results:
1. An explicit binary code with polynomial rate and (probabilistic) AC0 local decoding from a word that is corrupted in a constant fraction of locations. This code has roughly the same parameters as the canonical example of a locally-decodable binary code with polynomial rate.
2. An explicit family of (non-binary) codes that are locally list-decodable from agreement ε with list size poly(1/ε) by probabilistic AC0 circuits of size poly(log M/ε), where M is the length of the message. The alphabet size and codeword length of these codes match the recent construction of Impagliazzo, Jaiswal and Kabanets. (Indeed, our construction is based on their approximately list-decodable codes.)
This research was supported by NSF grant CNS-0430450 and NSF grant CFF-0635297.
L. Babai. Trading Group Theory for Randomness. STOC 1985: 421-429.
L. Babai, L. Fortnow and C. Lund. Non-Deterministic Exponential Time has Two-Prover Interactive Protocols. FOCS 1990: 16-25.
L. Babai and S. Moran. Arthur-Merlin Games: A Randomized Proof System, and a Hierarchy of Complexity Classes. Journal of Computer and System Sciences 36:254-276, 1988.
U. Feige and L. Lovasz. Two-Prover One-Round Proof Systems: Their Power and Their Problems. STOC 1992: 733-744.
S. Goldwasser, S. Micali and C. Rackoff. The Knowledge Complexity of Interactive Proof-Systems. SIAM Journal on Computing 18(1):186-208, 1989.
S. Goldwasser and M. Sipser. Private Coins versus Public Coins in Interactive Proof Systems. Advances in Computing Research 5:73-90, 1989.
R. Impagliazzo, R. Jaiswal and V. Kabanets. Approximately List-Decoding Direct Product Codes and Uniform Hardness Amplification. FOCS 2006: 187-196.
A. Shamir. IP = PSPACE. FOCS 1990: 11-15.
M. Sudan, L. Trevisan and S. Vadhan. Pseudorandom Generators without the XOR Lemma. STOC 1999: 537-546.
L. Trevisan. Some Applications of Coding Theory in Computational Complexity. Quaderni di Matematica 13:347-424, 2004.