Author(s): Holger Dell (U Wisconsin, Madison), Valentine Kabanets (Simon Fraser U.), Dieter van Melkebeek (U Wisconsin, Madison), Osamu Watanabe (Tokyo Institute of Technology).
Abstract: The Valiant-Vazirani Isolation Lemma [TCS, vol. 47, pp. 85-93, 1986] provides an efficient procedure for isolating a satisfying assignment of a given satisfiable circuit: given a Boolean circuit C on n input variables, the procedure outputs a new circuit C' on the same n input variables such that the set of satisfying assignments of C' is a subset of those of C, and moreover, if C is satisfiable then C' has exactly one satisfying assignment. The Valiant-Vazirani procedure is randomized, and it produces a uniquely satisfiable circuit C' with probability \Omega(1/n). Is it possible to have an efficient deterministic witness-isolating procedure? Or, at least, is it possible to improve the success probability of a randomized procedure to \Omega(1)? We argue that the answer is likely `No'. More precisely, we prove that there exists a non-uniform randomized polynomial-time witness-isolating procedure with success probability greater than 2/3 if and only if NP \subseteq P/poly. Thus, an improved witness-isolating procedure would imply the collapse of the polynomial-time hierarchy. We establish similar results for other variants of witness isolation.
Finally, we consider a black-box setting of witness isolation (generalizing the setting of the Valiant-Vazirani Isolation Lemma), and give the upper bound O(1/n) on the success probability for a natural class of randomized witness-isolating procedures.
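For intuition, the classical Valiant-Vazirani procedure can be sketched in a few lines of Python, treating the circuit as a black-box predicate (a toy illustration; all names and the demo are ours, not the paper's):

    import random
    from itertools import product

    def random_xor(n):
        # a uniformly random GF(2) affine constraint a.x + b = 0;
        # such constraints form a pairwise independent hash family
        a = [random.randrange(2) for _ in range(n)]
        b = random.randrange(2)
        return lambda x: (sum(ai * xi for ai, xi in zip(a, x)) + b) % 2 == 0

    def valiant_vazirani(C, n):
        # Output C' = C AND (k random XOR constraints) for a random k.
        # Satisfying assignments of C' are a subset of those of C, and
        # if C is satisfiable, C' has exactly one satisfying assignment
        # with probability Omega(1/n) over these random choices.
        k = random.randrange(n + 2)
        constraints = [random_xor(n) for _ in range(k)]
        return lambda x: C(x) and all(h(x) for h in constraints)

    # toy demo on n = 6 variables: C accepts iff the first two bits agree
    n = 6
    C = lambda x: x[0] == x[1]
    Cp = valiant_vazirani(C, n)
    print(sum(1 for x in product((0, 1), repeat=n) if Cp(x)))  # 1, w.p. Omega(1/n)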
Author(s): Yuichi Yoshida (Kyoto University and Preferred Infrastructure, Inc.)
Abstract: Let H be an undirected graph. In the List H-Homomorphism Problem, given an undirected graph G with a list constraint L(v) \subseteq V(H) for each vertex v \in V(G), the objective is to find a list H-homomorphism f: V(G) \to V(H), that is, f(v) \in L(v) for every v \in V(G) and (f(u), f(v)) \in E(H) whenever (u,v) \in E(G). We consider testing list H-homomorphisms: given oracle access to a map f: V(G) \to V(H), the objective is to decide with high probability whether f is a list H-homomorphism or far from every list H-homomorphism. The efficiency of an algorithm is measured by the number of accesses to f. In this paper, we classify graphs H with respect to the query complexity of testing list H-homomorphisms and show that the following trichotomy holds: (i) list H-homomorphisms are testable with a constant number of queries if and only if H is a reflexive complete graph or an irreflexive complete bipartite graph; (ii) list H-homomorphisms are testable with a sublinear number of queries if and only if H is a bi-arc graph; (iii) testing list H-homomorphisms requires a linear number of queries if H is not a bi-arc graph.
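The property under test can be pinned down by an exhaustive checker (a toy sketch with our own graph encoding; the testers in the paper, by contrast, read only a few values of f):

    def is_list_homomorphism(f, G_edges, H_edges, L):
        # f maps V(G) -> V(H); an undirected edge is a frozenset (a loop
        # on v is frozenset({v})); L maps each vertex of G to its list.
        if any(f[v] not in L[v] for v in f):
            return False  # some list constraint is violated
        def image(e):
            ends = tuple(e)
            u, v = ends[0], ends[-1]
            return frozenset({f[u], f[v]})
        return all(image(e) in H_edges for e in G_edges)

    # toy demo: G is a single edge {0,1}; H is the path a-b with a loop on a
    G_edges = {frozenset({0, 1})}
    H_edges = {frozenset({'a', 'b'}), frozenset({'a'})}
    L = {0: {'a'}, 1: {'a', 'b'}}
    print(is_list_homomorphism({0: 'a', 1: 'b'}, G_edges, H_edges, L))  # True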
Author(s): Sangxia Huang (KTH Royal Institute of Technology), Pinyan Lu (Microsoft Research Asia)
Abstract: Holant is a framework of counting problems characterized by local constraints. It is closely related to other well-studied frameworks such as #CSP and Graph Homomorphism. An effective dichotomy for such a framework settles simultaneously the complexity of all combinatorial problems expressible in it. Both #CSP and Graph Homomorphism can be viewed as sub-families of Holant under the additional assumption that equality constraints are always available. Other sub-families of Holant, such as Holant^* and Holant^c problems, in which specific sets of constraints are assumed to be freely available, have also been studied. The Holant framework becomes more expressive and contains more interesting tractable cases with fewer or no freely available constraint functions, while, on the other hand, it also becomes more challenging to obtain a complete characterization of its time complexity. Recently, complexity dichotomies were proved for a variety of sub-families of Holant, such as #CSP, Graph Homomorphism, Holant^* and Holant^c. The dichotomy for the general Holant framework, which is the most desirable, still remains open. In this paper, we prove a dichotomy for the general Holant framework where all the constraints are real symmetric functions. This setting already captures most of the interesting combinatorial problems defined by local constraints, such as (perfect) matching, independent set, vertex cover and so on. This is the first time a dichotomy has been obtained for general Holant problems without any auxiliary functions. One benefit of working with the Holant framework is access to powerful new reduction techniques such as holographic reductions. In the course of proving our dichotomy, we introduce a new reduction technique, namely realizing a constraint function by approximating it. This technique is employed in a situation where all previous reduction techniques seem to fail, so this new idea of reduction might also be of independent interest. Besides proving the dichotomy and developing the new technique, we also obtain some interesting by-products. We prove a dichotomy for #CSP restricted to instances where each variable appears a multiple of d times, for any d. We also prove that counting the number of Eulerian orientations on 2k-regular graphs is #P-hard for any k >= 2.
Author(s): Or Meir (Stanford University)
Abstract: The PCP theorem (Arora et al., J. ACM 45(1,3)) asserts the existence of proofs that can be verified by reading a very small part of the proof. Since the discovery of the theorem, there has been considerable work on improving the theorem in terms of the length of the proofs, culminating in the construction of PCPs of quasi-linear length by Ben-Sasson and Sudan (SICOMP 38(2)) and Dinur (J. ACM 54(3)). One common theme in the aforementioned PCP constructions is that they all rely heavily on sophisticated algebraic machinery. The work of Dinur (J. ACM 54(3)) suggested an alternative approach for constructing PCPs, which gives a simpler and arguably more intuitive proof of the PCP theorem using combinatorial techniques. However, this combinatorial construction only yields PCPs of polynomial length, and is therefore inferior to the algebraic constructions in this respect. This gives rise to the natural question of whether the proof length of the algebraic constructions can be matched using the combinatorial approach.
In this work, we provide a combinatorial construction of PCPs of length n * (log n)^{O(log log n)}, coming very close to the state-of-the-art algebraic constructions, whose proof length is n * polylog(n). To this end, we develop a few generic PCP techniques which may be interesting in their own right.
It should be mentioned that our construction does use low degree polynomials at one point. However, our use of polynomials is confined to the construction of error correcting codes with a certain simple multiplication property, and it is conceivable that such codes can be constructed without the use of polynomials.
Author(s): Chi-Jen Lu (Academia Sinica)
Abstract: We consider the problem of constructing hitting set generators for sparse multivariate polynomials over any finite field. Hitting set generators, just like pseudorandom generators, play a fundamental role in the study of derandomization. Pseudorandom generators with a logarithmic seed length are only known for polynomials of constant degree. On the other hand, hitting set generators with a logarithmic seed length are known for polynomials of larger degree, but only over fields that are much larger than the degree. Our main result is the construction of a hitting set generator with a seed length of $O(\log s)$ that works for $s$-term polynomials of any degree over any finite field of constant size. This gives the first optimal hitting set generator which allows the field to be smaller than the degree of the polynomials. For larger fields, of non-constant size, we provide another hitting set generator with a seed length of $O(\log (sd))$, which works for $s$-term polynomials of any degree $d$, as long as $d$ is slightly smaller than the field size.
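(For reference, the standard definition being used, in our own words: a map $G : \{0,1\}^\ell \to F^n$ is a hitting set generator for a class $C$ of polynomials if every nonzero $p \in C$ satisfies $p(G(y)) \neq 0$ for at least one seed $y \in \{0,1\}^\ell$; a seed length of $O(\log s)$ therefore yields a hitting set of size poly(s) that can be enumerated deterministically in polynomial time.)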
Author(s): Ran Raz (Weizmann Institute), Ricky Rosen (Princeton University)
Abstract: The parallel repetition theorem states that for any two-prover game with value at most 1 - epsilon (for epsilon < 1/2), the value of the game repeated n times in parallel is at most (1-epsilon^3)^(Omega(n/s)), where s is the length of the answers of the two provers [Raz STOC '95, Holenstein STOC '07]. For projection games, the bound on the value of the game repeated n times in parallel was improved to (1-epsilon^2)^(Omega(n)) [Rao STOC '08], and this bound was shown to be tight [Raz FOCS '08]. In this paper we study the case where the underlying distribution, according to which the questions for the two provers are generated, is uniform over the edges of a (bipartite) expander graph.
We show that if lambda is the (normalized) spectral gap of the underlying graph, the value of the repeated game is at most (1-epsilon^2)^(Omega(c(lambda) n/ s)), where c(lambda) = poly(lambda); and if in addition the game is a projection game, we obtain a bound of (1-epsilon)^(Omega( c(lambda) n)), where c(lambda) = poly(lambda), that is, a strong parallel repetition theorem (when lambda is constant).
This gives a strong parallel repetition theorem for a large class of two prover games.
Author(s): Elena Grigorescu and Chris Peikert (Georgia Institute of Technology)
Abstract: The question of *list decoding* error-correcting codes over finite fields (under the Hamming metric) has been widely studied in recent years. Motivated by the similar discrete structure of linear codes and *point lattices* in R^N, and their many shared applications across complexity theory, cryptography, and coding theory, we initiate the study of list decoding for lattices. Namely: for a lattice L in R^N, given a target vector r in R^N and a distance parameter d, output the set of all lattice points w in L that are within distance d of r. In this work we focus on combinatorial and algorithmic questions related to list decoding for the well-studied family of *Barnes-Wall* lattices. Our main contributions are twofold:
1. We give tight (up to polynomials) combinatorial bounds on the worst-case list size, showing it to be polynomial in the lattice dimension for any error radius bounded away from the lattice's minimum distance (in the Euclidean norm).
2. Building on the *unique* decoding algorithm of Micciancio and Nicolosi (ISIT '08), we give a list-decoding algorithm that runs in time polynomial in the lattice dimension and worst-case list size, for any error radius. Moreover, our algorithm is highly parallelizable, and with sufficiently many processors can run in parallel time only *poly-logarithmic* in the lattice dimension.
In particular, our results imply a polynomial-time list-decoding algorithm for any error radius bounded away from the minimum distance, thus beating a typical barrier for error correcting codes posed by the Johnson radius.
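As a toy illustration of the list-decoding problem defined above, here is a brute-force sketch for the integer lattice Z^N (our own code; Barnes-Wall lattices require the actual algorithm of the paper):

    import itertools, math

    def list_decode_integer_lattice(r, d):
        # all lattice points w in Z^N within Euclidean distance d of the
        # target r, found by scanning a bounding box around r
        ranges = [range(math.ceil(ri - d), math.floor(ri + d) + 1) for ri in r]
        return [w for w in itertools.product(*ranges)
                if sum((wi - ri) ** 2 for wi, ri in zip(w, r)) <= d * d]

    print(list_decode_integer_lattice((0.4, 0.6), 1.0))
    # [(0, 0), (0, 1), (1, 0), (1, 1)] -- the list grows with the radius d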
Author(s): Markus Bläser (Saarland University), Bekhan Chokaev (Moscow State University)
Abstract: We prove that an associative algebra A has minimal rank if and only if the Alder-Strassen bound is also tight for the multiplicative complexity of A, that is, the multiplicative complexity of A is 2 dim A - t_A, where t_A denotes the number of maximal two-sided ideals of A. This generalizes a result of E. Feig, who proved this for division algebras. Furthermore, we show that if A is local or superbasic, then every optimal quadratic computation for A is almost bilinear.
Author(s): Gus Gutoski (University of Waterloo), Xiaodi Wu (University of Michigan)
Abstract: This paper presents an efficient parallel algorithm for a new class of min-max problems, based on the matrix multiplicative weights update method. Our algorithm can be used to find near-optimal strategies for competitive two-player classical or quantum games in which a referee exchanges any number of messages with one player followed by any number of additional messages with the other. This considerably extends the class of games which admit parallel solutions, demonstrating for the first time the existence of a parallel algorithm for a game in which one player reacts adaptively to the other. As a consequence, we prove that several competing-provers complexity classes, such as QRG(2), SQG and two new classes called DIP and DQIP, collapse to PSPACE. A special case of our result is a parallel approximation scheme for a new class of semidefinite programs whose feasible region consists of n-tuples of semidefinite matrices that satisfy a "transcript-like" consistency condition. Applied to this special case, our algorithm yields a direct polynomial-space simulation of multi-message quantum interactive proofs, resulting in a first-principles proof of QIP = PSPACE.
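The engine behind the algorithm, the matrix multiplicative weights update rule itself, can be sketched in a few lines of Python/NumPy (a minimal illustration of the generic update only, not the paper's full min-max procedure; all names are ours):

    import numpy as np

    def matrix_multiplicative_weights(loss_matrices, eta):
        # Maintains a density matrix rho_t proportional to
        # exp(-eta * sum of losses seen so far); a standard regret bound
        # for this update drives the min-max value estimate.
        n = loss_matrices[0].shape[0]
        total = np.zeros((n, n))
        states = []
        for M in loss_matrices:
            w, V = np.linalg.eigh(-eta * total)   # matrix exponential of a
            rho = (V * np.exp(w)) @ V.conj().T    # Hermitian matrix, via eigh
            states.append(rho / np.trace(rho).real)
            total = total + M
        return states

    # toy demo: two alternating 2x2 Hermitian losses
    A = np.array([[1.0, 0.0], [0.0, 0.0]])
    B = np.array([[0.0, 0.0], [0.0, 1.0]])
    for rho in matrix_multiplicative_weights([A, B] * 3, eta=0.5):
        print(np.diag(rho).real)  # weights shift away from recent losses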
Author(s): Troy Lee (Centre for Quantum Technologies), Jérémie Roland (Université Libre de Bruxelles)
Abstract: We show that quantum query complexity satisfies a strong direct product theorem. This means that computing k copies of a function with less than k times the quantum queries needed to compute one copy of the function implies that the overall success probability will be exponentially small in k. For a Boolean function f we also show an XOR lemma---computing the parity of k copies of f with less than k times the queries needed for one copy implies that the advantage over random guessing will be exponentially small. We do this by showing that the multiplicative adversary method, which inherently satisfies a strong direct product theorem, is always at least as large as the additive adversary method, which is known to characterize quantum query complexity.
Author(s): André Chailloux (UC Berkeley, California), Or Sattath (Hebrew University of Jerusalem)
Abstract: We study variants of the canonical Local-Hamiltonian problem in which, in addition, the witness is promised to be separable. We define two such variants. The input for the Separable-Local-Hamiltonian problem is the same as for the Local-Hamiltonian problem, i.e., a local Hamiltonian and two energies a and b, but the question is somewhat different: the answer is YES if there is a separable quantum state with energy at most a, and the answer is NO if all separable quantum states have energy at least b. The Separable-Sparse-Hamiltonian problem is defined similarly, except that the Hamiltonian is not necessarily local, but rather sparse. We show that the Separable-Sparse-Hamiltonian problem is QMA(2)-Complete, while Separable-Local-Hamiltonian is in QMA. This should be compared with the Local-Hamiltonian and Sparse-Hamiltonian problems, which are both QMA-Complete. To the best of our knowledge, Separable-Sparse-Hamiltonian is the first non-trivial problem shown to be QMA(2)-Complete.
Author(s): Gil Cohen and Ran Raz (Weizmann Institute), Gil Segev (MSR Silicon Valley)
Abstract: Motivated by the classical problem of privacy amplification, Dodis and Wichs [STOC09] introduced the notion of a non-malleable extractor, significantly strengthening the notion of a strong extractor. A non-malleable extractor is a function $\nmExt : \{0,1\}^n \times \{0,1\}^d \rightarrow \{0,1\}^m$ that takes two inputs: a weak source $W$ and a uniform (independent) seed $S$, and outputs a string $\nmExt(W,S)$ that is nearly uniform given $S$ as well as $\nmExt(W, S')$ for any seed $S' \neq S$ that is determined as an arbitrary function of $S$.
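(In symbols, paraphrasing the standard definition: for every function $\mathcal{A}$ with $\mathcal{A}(s) \neq s$ for all seeds $s$, non-malleability with error $\eps$ requires that the joint distributions $(\nmExt(W,S), \nmExt(W,\mathcal{A}(S)), S)$ and $(U_m, \nmExt(W,\mathcal{A}(S)), S)$ be $\eps$-close in statistical distance, where $U_m$ is uniform over $\{0,1\}^m$ and independent of the rest.)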
The first explicit construction of a non-malleable extractor was recently provided by Dodis, Li, Wooley and Zuckerman [FOCS11]. Their extractor works for any weak source with min-entropy rate $1/2 + \delta$, where $\delta > 0$ is an arbitrary constant, and outputs up to a linear number of bits, but suffers from two drawbacks. First, the length of its seed is linear in the length of the weak source (which leads to privacy amplification protocols with high communication complexity). Second, the construction is conditional: when outputting more than a logarithmic number of bits (as required for privacy amplification protocols) its efficiency relies on a longstanding conjecture on the distribution of prime numbers.
In this paper we present an unconditional construction of a non-malleable extractor with short seeds. For any integers $n$ and $d$ such that $2.01 \cdot \log{n} \le d \le n$, we present an explicit construction of a non-malleable extractor $\nmExt \colon \{0,1\}^n \times \{0,1\}^d \rightarrow \{0,1\}^m$, with $m=\Omega(d)$, and error exponentially small in $m$. The extractor works for any weak source with min-entropy rate $1/2 + \delta$, where $\delta > 0$ is an arbitrary constant.
Moreover, our extractor in fact satisfies an even more general notion of non-malleability: its output $\nmExt(W,S)$ is nearly uniform given the seed $S$ as well as the values $\nmExt(W, S_1), \ldots, \nmExt(W, S_t)$ for several seeds $S_1, \ldots, S_t$ that may be determined as an arbitrary function of $S$, as long as $S \notin \{S_1, \ldots, S_t\}$.
By instantiating the framework of Dodis and Wichs with our non-malleable extractor, we obtain the first 2-round privacy amplification protocol for min-entropy rate $1/2 + \delta$ with asymptotically optimal entropy loss and poly-logarithmic communication complexity. This improves on the previously known 2-round privacy amplification protocols: that of Dodis and Wichs, whose entropy loss is not asymptotically optimal, and that of Dodis, Li, Wooley and Zuckerman, whose communication complexity is linear.
Author(s): Noga Alon (Tel Aviv University), Amir Shpilka (Technion), Christopher Umans (Caltech)
Abstract: We present several variants of the sunflower conjecture of Erdos and Rado [ER60] and discuss the relations among them. We then show that two of these conjectures (if true) imply negative answers to questions of Coppersmith and Winograd [CW90] and Cohn et al. [CKSU05] regarding possible approaches for obtaining fast matrix multiplication algorithms. Specifically, we show that the Erdos-Rado sunflower conjecture (if true) implies a negative answer to the "no three disjoint equivoluminous subsets" question of Coppersmith and Winograd [CW90]; we also formulate a "multicolored" sunflower conjecture in Z_3^n and show that (if true) it implies a negative answer to the "strong USP" conjecture of [CKSU05] (although it does not seem to impact a second conjecture in [CKSU05] or the viability of the general group-theoretic approach). A surprising consequence of our results is that the Coppersmith-Winograd conjecture actually implies the Cohn et al. conjecture.
The multicolored sunflower conjecture in Z_3^n is a strengthening of the well-known (ordinary) sunflower conjecture in Z_3^n, and we show via our connection that a construction from [CKSU05] yields a lower bound of (2.51...)^n on the size of the largest {\em multicolored} 3-sunflower-free set, which beats the current best known lower bound of (2.21...)^n [Edel04] on the size of the largest 3-sunflower-free set in Z_3^n.
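(For reference, our restatement of the classical notions: a $k$-sunflower is a family of $k$ sets $S_1, \ldots, S_k$ whose pairwise intersections all coincide, i.e. $S_i \cap S_j = Y$ for all $i \neq j$; Erdos and Rado proved that any family of more than $w! \cdot (k-1)^w$ sets, each of size $w$, contains a $k$-sunflower, and their conjecture asserts that $c_k^w$ sets already suffice, for some constant $c_k$ depending only on $k$.)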
Author(s): Amnon Ta-Shma (Tel-Aviv University), Christopher Umans (Caltech)
Abstract: We give a new construction of condensers based on Parvaresh-Vardy codes [PV05]. Our condensers have entropy rate (1 - \alpha) for subconstant \alpha (in contrast to [GUV09], which required constant \alpha) and suffer only sublinear entropy loss. Known extractors can be applied to the output to extract all but a subconstant fraction of the min-entropy. The resulting (k, \eps) extractor E: {0,1}^n x {0,1}^d -> {0,1}^m has output length m = (1 - \alpha)k with \alpha = 1/polylog(n), and seed length d = O(log n), when \eps > 1/2^{log^\beta n} for any constant \beta < 1. Thus we achieve the same "world-record" extractor parameters as [DKSS09], with a more direct construction.
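(For reference, the standard definitions being instantiated, in our own words: $E : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m$ is a $(k, \eps)$ extractor if for every source $X$ with min-entropy at least $k$, the output $E(X, U_d)$ is $\eps$-close in statistical distance to the uniform distribution on $\{0,1\}^m$; a condenser is the relaxation in which the output need only be close to some distribution of high min-entropy rather than to uniform.)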
Author(s): Baris Aydinlioglu (University of Wisconsin-Madison), Dieter van Melkebeek (University of Wisconsin-Madison)
Abstract: Hardness against nondeterministic circuits is known to suffice for derandomizing Arthur-Merlin games. We show a result in the other direction -- that hardness against nondeterministic circuits is *necessary* for derandomizing Arthur-Merlin games. In fact, we obtain an equivalence for a mild notion of derandomization: Arthur-Merlin games can be simulated in Sigma2-SUBEXP (the subexponential version of Sigma2-P) with linear advice on infinitely many input lengths if and only if Sigma2-E (the linear-exponential version of Sigma2-P) requires nondeterministic circuits of superpolynomial size on infinitely many input lengths. Our equivalence result represents a full analogue of a similar result by Impagliazzo et al. in the deterministic setting: randomized polynomial-time decision procedures can be simulated in NSUBEXP (the subexponential version of NP) with linear advice on infinitely many input lengths if and only if NE (the linear-exponential version of NP) requires deterministic circuits of superpolynomial size on infinitely many input lengths.
A key ingredient in our proofs is improved Karp-Lipton style collapse results for nondeterministic circuits. The following are two instantiations that may be of independent interest: Assuming that Arthur-Merlin games can be derandomized in Sigma2-P, we show that (i) if coNP is contained in NP/poly then PH collapses to P^{Sigma2-P}, and (ii) if PSPACE is contained in NP/poly then PSPACE collapses to Sigma2-P.
Author(s): Samuel R. Buss (University of California, San Diego), Ryan Williams (Stanford University)
Abstract: This paper characterizes alternation-trading proofs that the satisfiability problem is not in the time- and space-bounded class DTISP(n^c, n^\epsilon), for various values c < 2 and epsilon < 1. We characterize exactly what can be proved with currently known methods in the case epsilon = o(1), and prove the conjecture of Williams that the best known lower bound exponent c = 2 cos(pi/7) ≈ 1.801 is optimal for alternation-trading proofs. For general time-space tradeoff lower bounds on satisfiability, we give a theoretical and computational analysis of the alternation-trading proofs for 0 < epsilon < 1.
Author(s): Joshua A. Grochow (The University of Chicago)
Abstract: We study the problem of matrix Lie algebra isomorphism. Lie algebras arise centrally in areas as diverse as differential equations, particle physics, group theory, and the Mulmuley--Sohoni Geometric Complexity Theory program. A matrix Lie algebra is a sub-vector space L of matrices such that A, B in L implies AB - BA is in L. Two matrix Lie algebras L and L' are isomorphic if there is an invertible matrix M such that M L M^(-1) = L'. We show that certain cases of matrix Lie algebra isomorphism are equivalent to graph isomorphism. On the other hand, we give polynomial-time algorithms for other cases of matrix Lie algebra isomorphism, which allow us to mostly derandomize a recent result of Kayal on affine equivalence of polynomials. Specifically, we show:
- Abelian matrix Lie algebra isomorphism is as hard as graph isomorphism. A matrix Lie algebra is abelian if all of its matrices pairwise commute.
- Abelian diagonalizable matrix Lie algebra isomorphism of n x n matrices can be solved in poly(n) time when the Lie algebras have constant dimension. The dimension of a Lie algebra is the maximum number of linearly independent matrices it contains. A matrix Lie algebra is diagonalizable if it is isomorphic to a set of diagonal matrices.
- Semisimple matrix Lie algebra isomorphism is equivalent to graph isomorphism. A Lie algebra is semisimple if it is a direct sum of simple Lie algebras.
- Semisimple matrix Lie algebra isomorphism can be solved in polynomial time when the Lie algebras consist of only O(log n) direct summands.
- Isomorphism of completely reducible matrix Lie algebras---that is, of a direct sum of a diagonalizable and a semisimple Lie algebra---can be solved in polynomial time when the diagonalizable part is constant-dimensional and the semisimple part has O(log n) simple direct summands.
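The defining closure condition is easy to check numerically from a basis, since bilinearity of the bracket reduces it to basis pairs; here is a toy NumPy sketch (ours; the paper's algorithms work exactly, not in floating point):

    import numpy as np

    def is_matrix_lie_algebra(basis, tol=1e-9):
        # L = span(basis) is a matrix Lie algebra iff [A,B] = AB - BA
        # stays in L for all A, B in L; checking all basis pairs suffices.
        F = np.stack([B.ravel() for B in basis])        # the span, as rows
        for A in basis:
            for B in basis:
                c = (A @ B - B @ A).ravel()
                coeff, *_ = np.linalg.lstsq(F.T, c, rcond=None)
                if np.linalg.norm(F.T @ coeff - c) > tol:
                    return False                        # bracket left the span
        return True

    # sl_2, the traceless 2x2 matrices: a 3-dimensional matrix Lie algebra
    e = np.array([[0., 1.], [0., 0.]])
    f = np.array([[0., 0.], [1., 0.]])
    h = np.array([[1., 0.], [0., -1.]])
    print(is_matrix_lie_algebra([e, f, h]))  # True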
Author(s): Marek Cygan (University of Lugano), Holger Dell (University of Wisconsin-Madison), Daniel Lokshtanov (University of California, San Diego), Daniel Marx (Hungarian Academy of Sciences), Jesper Nederlof (Utrecht University), Yoshio Okamoto (Japan Advanced Institute of Science and Technology), Ramamohan Paturi (University of California, San Diego), Saket Saurabh (Institute of Mathematical Sciences, India), Magnus Wahlstrom (Max-Planck-Institut fur Informatik)
Abstract: Exact exponential time algorithms for NP-hard problems have thrived over the last decade. Nontrivial exponential time algorithms have been found for a myriad of problems, including GRAPH COLORING, HAMILTONIAN PATH, DOMINATING SET and 3-CNF-SAT, that is, satisfiability of 3-CNF formulas. For some basic problems, however, there has been no progress over their trivial solution. For others, non-trivial solutions have been found, but improving these algorithms further seems to be out of reach. The CNF-SAT problem is the canonical example of a problem for which the brute-force algorithm runs in time O(2^n). While there exist non-trivial algorithms for CNF-SAT that run in time o(2^n), no algorithm has been able to improve the constant 2 to a smaller constant, and hence it is natural to conjecture that 2 is the best constant. The strong exponential time hypothesis (SETH) of Impagliazzo and Paturi goes a little bit further and asserts that, for each eps < 1, there is a (large) integer k such that k-CNF-SAT cannot be computed in time O(2^(eps*n)). In this paper we reveal connections between well-studied problems, and show that improving over the currently best known algorithms for several of them would violate SETH. Specifically, we show that, for every eps < 1, an O(2^(eps*n)) time algorithm for HITTING SET, SET SPLITTING, or NAE-SAT would violate SETH. Here n is the number of elements (or variables) in the input set system (or formula). We conjecture that an O(2^(eps*n)) time algorithm for SET COVER would violate SETH as well, and prove, under this assumption, that improving the fastest known algorithms for STEINER TREE, CONNECTED VERTEX COVER, SET PARTITIONING or the pseudo-polynomial time algorithm for SUBSET SUM would violate SETH. Finally, we justify our assumption about the hardness of SET COVER. In particular, we prove that, assuming SETH, there does not exist an O(2^(eps*n)) time algorithm that counts the number of set covers in a set system modulo two.
Author(s): Dmitry Gavinsky (NEC Laboratories America, Inc.)
Abstract: We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries; this property is not directly related to the possibility of classical verification, but none of the earlier quantum money constructions is known to possess it.
Author(s): Kazuhisa Seto and Suguru Tamaki (Kyoto University)
Abstract: We present a moderately exponential time algorithm for the satisfiability of Boolean formulas over the full binary basis. For formulas of size at most $cn$, our algorithm runs in time $2^{(1-\mu_c)n}$ for some constant $\mu_c>0$. As a byproduct of the running time analysis of our algorithm, we get strong average-case hardness of affine extractors for linear-sized formulas over the full binary basis.
Author(s): Andrew Drucker (Massachusetts Institute of Technology)
Abstract: We study the circuit complexity of Boolean operators, i.e., collections of Boolean functions defined over a common input. Our focus is the well-studied model in which arbitrary Boolean functions are allowed as gates, and in which a circuit’s complexity is measured by its depth and number of wires. We show sharp limitations of several existing lower-bound methods for this model. First, we study an information-theoretic lower-bound method due to Cherukhin that yields bounds of the form Omega_d(n * lambda_{d-1}(n)) on the number of wires needed to compute cyclic convolutions in depth d >= 2. This was the first improvement over the lower bounds provided by the well-known superconcentrator technique (for d = 2, 3 and for even d >= 4). Cherukhin’s method was formalized by Jukna as a general lower-bound criterion for Boolean operators, the “Strong Multiscale Entropy” (SME) property. It seemed plausible that this property could yield significantly better lower bounds via an improved analysis. However, we show that this is not the case, by exhibiting an explicit operator with the SME property that is computable in depth d with O(n * lambda_{d-1}(n)) wires, for d = 2, 3 and for even d >= 6.
Next, we show limitations of two simpler lower-bound criteria given by Jukna: the “entropy method” for general operators, and the “pairwise-distance method” for linear operators. We show that neither method gives super-linear lower bounds for depth 3. In the process, we obtain the first known polynomial separation between the depth-2 and depth-3 wire complexities for an explicit operator. We also continue the study (initiated by Jukna) of the complexity of “representing” a linear operator by bounded-depth circuits, a weaker notion than computing the operator.
Author(s): Amos Beimel (Ben-Gurion University), Yuval Ishai (Technion), Eyal Kushilevitz (Technion), Ilan Orlov (Ben-Gurion University)
Abstract: An information-theoretic private information retrieval (PIR) protocol allows a client to retrieve the $i$-th bit of a database, held by two or more servers, without revealing information about $i$ to any individual server. Information-theoretic PIR protocols are closely related to locally decodable codes (LDCs), which are error correcting codes that can simultaneously offer a high level of resilience and sublinear-time decoding of each bit of the encoded message. Recent breakthrough results of Yekhanin (STOC 2007) and Efremenko (STOC 2009) have led to a dramatic improvement in the asymptotic complexity of PIR and LDC. We suggest a new "cryptographic" perspective on these recent constructions, which is based on a general notion of share conversion in secret-sharing schemes that may be of independent interest.
Our new perspective gives rise to a clean framework which unifies previous constructions and generalizes them in several directions. In a nutshell, our general framework for PIR uses the following two-step approach: (1) apply share conversion to get a low-communication secure multiparty computation protocol $P$ for a nontrivial class $F$ of low-depth circuits; (2) use a lower bound on the VC dimension of $F$ to get a good PIR protocol from $P$. Our framework reduces the task of designing good PIR protocols to that of finding powerful forms of share conversion which support circuit classes of a high VC dimension.
Motivated by this framework, we study the general power of share conversion and obtain both positive and negative results. Our positive results improve the concrete complexity of PIR even for very feasible real-life parameters. They also lead to some improvements in the asymptotic complexity of the best previous PIR and LDC constructions. For 3-server PIR, we improve the asymptotic communication complexity from $O(2^{146\sqrt{\log n\log\log n}})$ to $O(2^{6\sqrt{\log n\log\log n}})$ bits, where $n$ is the database size. Our negative results on share conversion establish some limitations on the power of our approach.
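For readers new to information-theoretic PIR, the folklore 2-server scheme below illustrates how two non-colluding servers make privacy possible at all (it has linear communication, exponentially worse than the protocols discussed above, and is not one of the paper's constructions):

    import random

    def two_server_pir(x, i):
        # The client sends a uniformly random index set to server 1 and the
        # same set with i toggled to server 2; each answer is the XOR of the
        # queried bits. Each server alone sees a uniformly random set,
        # independent of i, so it learns nothing about the query.
        n = len(x)
        S1 = {j for j in range(n) if random.randrange(2)}
        S2 = S1 ^ {i}                       # toggle membership of i
        a1 = sum(x[j] for j in S1) % 2      # computed by server 1
        a2 = sum(x[j] for j in S2) % 2      # computed by server 2
        return (a1 + a2) % 2                # the sets differ only at i

    database = [1, 0, 1, 1, 0, 1]
    print(two_server_pir(database, 3))      # 1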
Author(s): Derrick Stolee and N. V. Vinodchandran (University of Nebraska--Lincoln)
Abstract: This work presents a log-space reduction which compresses an $n$-vertex directed acyclic graph with $m(n)$ sources embedded on a surface of genus $g(n)$, to a graph with $O(m(n)+g(n))$ vertices while preserving reachability between a given pair of vertices. Applying existing algorithms to this reduced graph yields new {\em deterministic} algorithms with improved space bounds as well as improved simultaneous time-space bounds for the reachability problem over a large class of directed acyclic graphs. Specifically, it significantly extends the class of surface-embedded graphs with log-space reachability algorithms: from planar graphs with $O(\log n)$ sources, to graphs with $2^{O(\sqrt{\log n})}$ sources embedded on a surface of genus $2^{O(\sqrt{\log n})}$. Additionally, it yields an $O(n^{1-\epsilon})$ space algorithm with polynomial running time for reachability over graphs with $O(n^{1-\epsilon})$ sources embedded on surfaces of genus $O(n^{1-\epsilon})$.
Author(s): Prasad Raghavendra (Georgia Institute of Technology), David Steurer (Microsoft Research New England), Madhur Tulsiani (Toyota Technological Institute)
Abstract: The Small-Set Expansion Hypothesis (Raghavendra, Steurer, STOC 2010) is a natural hardness assumption concerning the problem of approximating the edge expansion of small sets in graphs. This hardness assumption is closely connected to the Unique Games Conjecture (Khot, STOC 2002). In particular, the Small-Set Expansion Hypothesis implies the Unique Games Conjecture (Raghavendra, Steurer, STOC 2010). Our main result is that the Small-Set Expansion Hypothesis is in fact equivalent to a variant of the Unique Games Conjecture. More precisely, the hypothesis is equivalent to the Unique Games Conjecture restricted to instances with a fairly mild condition on the expansion of small sets. Along the way, we obtain the first strong hardness of approximation results for the Balanced Separator and Minimum Linear Arrangement problems. Previously, no such hardness was known for these problems even assuming the Unique Games Conjecture.
These results establish the Small-Set Expansion Hypothesis as a natural unifying hypothesis that implies the Unique Games Conjecture, all its consequences, and, in addition, hardness results for other problems such as Balanced Separator and Minimum Linear Arrangement; they also show that the Small-Set Expansion problem lies at the combinatorial heart of the Unique Games Conjecture.
The key technical ingredient is a new way of exploiting the structure of the Unique Games instances obtained from the Small-Set Expansion Hypothesis via (Raghavendra, Steurer, 2010). This additional structure allows us to modify standard reductions in a way that essentially destroys their local-gadget nature. Using this modification, we can argue about the expansion in the graphs produced by the reduction without relying on expansion properties of the underlying Unique Games instance (which would be impossible for a local-gadget reduction).
Author(s): Per Austrin (Toronto), Johan Håstad (KTH)
Abstract: Motivated by the pervasiveness of strong inapproximability results for Max-CSPs, we introduce a relaxed notion of an approximate solution of a Max-CSP. In this relaxed version, loosely speaking, the algorithm is allowed to replace the constraints of an instance by some other (possibly real-valued) constraints, and then only needs to satisfy as many of the new constraints as possible. To be more precise, we introduce the following notion of a predicate $P$ being \emph{useful} for a (real-valued) objective $Q$: given an almost satisfiable Max-$P$ instance, there is an algorithm that beats a random assignment on the corresponding Max-$Q$ instance applied to the same sets of literals. The standard notion of a nontrivial approximation algorithm for a Max-CSP with predicate $P$ is exactly the same as saying that $P$ is useful for $P$ itself.
We say that $P$ is useless if it is not useful for any $Q$. Under the Unique Games Conjecture, we can give a complete and simple characterization of useful Max-CSPs defined by a predicate: such a Max-CSP is useless if and only if there is a pairwise independent distribution supported on the satisfying assignments of the predicate. It is natural to also consider the case when no negations are allowed in the CSP instance, and we derive a similar complete characterization (under the UGC) there as well.
Finally, we also include some results and examples shedding additional light on the approximability of certain Max-CSPs.
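The characterization is easy to explore by brute force. The sketch below (our own toy code) checks the special case in which the uniform distribution on the satisfying assignments is balanced pairwise independent; this is sufficient, though not necessary, for the existence of a pairwise independent supported distribution:

    from itertools import product

    def uniform_on_sats_is_pairwise_independent(P, k):
        # Is the uniform distribution on P^{-1}(1) balanced pairwise
        # independent, i.e. every pair of coordinates uniform on {0,1}^2?
        sats = [x for x in product((0, 1), repeat=k) if P(x)]
        return bool(sats) and all(
            4 * sum(1 for x in sats if (x[i], x[j]) == (a, b)) == len(sats)
            for i in range(k) for j in range(i + 1, k)
            for a in (0, 1) for b in (0, 1))

    # 3-XOR passes, so it is useless under the UGC by the characterization;
    # 2-XOR fails (and supports no balanced pairwise independent distribution
    # at all), consistent with Max-Cut being approximable beyond random.
    print(uniform_on_sats_is_pairwise_independent(lambda x: sum(x) % 2 == 1, 3))  # True
    print(uniform_on_sats_is_pairwise_independent(lambda x: sum(x) % 2 == 1, 2))  # False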
Author(s): Dmitry Gavinsky (NEC Laboratories America, Inc.), Shachar Lovett (Institute of Advanced Study), Srikanth Srinivasan (DIMACS, Rutgers University)
Abstract: We consider the problem of constructing pseudorandom generators for read-once circuits. We give an explicit construction of a pseudorandom generator for the class of read-once $ACC^0$ circuits: constant depth circuits with unbounded fan-in AND, OR, NOT and generalized $MOD_m$ gates, where m is an arbitrary fixed constant. The seed length of our generator is poly-logarithmic in the number of variables and the error.
Author(s): Paul Beame (University of Washington), Russell Impagliazzo (Institute for Advanced Study and University of California, San Diego), Srikanth Srinivasan (DIMACS, Rutgers University)
Abstract: We show how to approximate any function in AC^0 by decision trees of much smaller height than the number of variables. More precisely, we show that any function in n variables computable by an unbounded fan-in circuit of AND, OR, and NOT gates that has size S and depth d can be approximated by a decision tree of height $n - \beta n$ to within error $\exp(-\beta n)$, where $\beta = \beta(S,d) = 2^{-O(d\log^{4/5}S)}$. Our proof is constructive, and we use its constructivity to derive a \emph{deterministic} algorithm for #AC^0SAT with multiplicative factor savings over the naive $2^n S$ algorithm of $2^{-\Omega(\beta n)}$, when applied to any n-input AC^0 circuit of size S and depth d. Indeed, in the same running time we can deterministically construct a decision tree of size at most $2^{n-\beta n}$ that exactly computes the function given by such a circuit. Recently, Impagliazzo, Matthews, and Paturi derived an algorithm for #AC^0SAT with greater savings over the naive algorithm, but their algorithm is only randomized rather than deterministic. The main technical result we prove to show the above is that for every family F of k-DNF formulas in n variables and every $1
Author(s): Anil Ada (McGill University), Arkadev Chattopadhyay (University of Toronto), Stephen Cook (University of Toronto), Michal Koucký (Institute of Mathematics and Aarhus University), Lila Fontes (University of Toronto), Toniann Pitassi (University of Toronto).
Abstract: In 1989, Kushilevitz [Kus89] initiated the study of information-theoretic privacy within the context of communication complexity. Unfortunately, it has been shown that most interesting functions are not privately computable [Kus89, BS08]. The unattainability of perfect privacy for many functions motivated the study of approximate privacy. Feigenbaum et al. [FJS10a, FJS10b] define notions of worst-case as well as average-case approximate privacy, and present several interesting upper bounds as well as some open problems for further study. In this paper, we obtain asymptotically tight bounds on the trade-offs between both the worst-case and average-case approximate privacy of protocols and their communication cost for Vickrey auctions. Further, we relate the notion of average-case approximate privacy to other measures based on information cost of protocols. This enables us to prove exponential lower bounds on the subjective approximate privacy of protocols for computing the Intersection function, independent of its communication cost. This proves a conjecture of Feigenbaum et al.
Author(s): Parikshit Gopalan (Microsoft Research, SVC), Raghu Meka (Institute for Advanced Study, Princeton), Omer Reingold (Microsoft Research, SVC).
Abstract: We give a faster deterministic algorithm for approximately counting the number of satisfying solutions to a DNF or CNF. Given a DNF (or CNF) f on n variables, we give a deterministic n^{\tilde{O}((log log n)^2)} time algorithm that computes an (additive) epsilon approximation to the fraction of satisfying assignments of f for epsilon = 1/poly(log n). The previous best algorithm, due to Luby and Velickovic from nearly two decades ago, had a run-time of n^{exp(O(sqrt{log log n}))}. A crucial ingredient in our algorithm is a structural result which allows us to sparsify any small-width DNF formula. It says that any width-w DNF (irrespective of the number of terms) can be epsilon-approximated by a width-w DNF with at most (w log(1/epsilon))^{O(w)} terms. Further, our approximating DNFs have an additional ``sandwiching'' property which is crucial for applications to derandomization.
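To fix what is being computed, the quantity approximated above is simply the fraction of satisfying assignments; a brute-force baseline (our own toy encoding of literals) is:

    from itertools import product

    def dnf_satisfied_fraction(terms, n):
        # A term is a set of nonzero integers: +i means x_i, -i means
        # NOT x_i (1-indexed). Returns the exact fraction of the 2^n
        # assignments satisfying the DNF, in time 2^n * |formula|.
        def sat(x):
            return any(all((x[abs(l) - 1] == 1) == (l > 0) for l in t)
                       for t in terms)
        return sum(map(sat, product((0, 1), repeat=n))) / 2 ** n

    # (x1 AND NOT x2) OR x3 on 3 variables: 5 of 8 assignments satisfy it
    print(dnf_satisfied_fraction([{1, -2}, {3}], 3))  # 0.625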
Author(s): Guy Kindler (Hebrew University of Jerusalem), Ryan O'Donnell (Carnegie Mellon University)
Abstract: We observe a subadditivity property for the noise sensitivity of subsets of Gaussian space. For subsets of volume 1/2, this leads to an almost trivial proof of Borell’s Isoperimetric Inequality for rho = cos(pi/(2L)), L a natural number. In turn, this can be used to obtain the Gaussian Isoperimetric Inequality for volume-1/2 sets and also 0.8787-factor UG-hardness for Max-Cut (within 10^{-4} of the optimum). As another corollary we show the Hermite tail bound ||f^{>k}||_2^2 >= Omega(Var[f])/sqrt(k) for f : R^n -> {-1,1}. Combining this with the Invariance Principle shows the same Fourier tail bound for any Boolean f : {-1,1}^n -> {-1,1} with all its noisy-influences small. This improves on a result of Bourgain in the Boolean setting, which only had 1/k^{1/2+o(1)}. Without using Invariance, we also show how to simplify and improve Bourgain’s proof to obtain the bound 1/(sqrt(k) log^{1.5}(k)).
Author(s): Sourav Chakraborty (Chennai Mathematical Institute), Eldar Fischer (Technion), David García-Soriano (CWI, Amsterdam), Arie Matsliah (IBM Research, Haifa)
Abstract: We take a step towards characterizing the Boolean functions to which isomorphism can be efficiently tested. Specifically, we prove that isomorphism to any Boolean function on $\{0,1\}^n$ with a polynomial number of distinct permutations can be tested with a number of queries that is independent of $n$. We also show some partial results in the converse direction, and discuss related problems: testing isomorphism up to linear transformations, and testing isomorphism against a uniform (hyper)graph that is given in advance. Our results regarding the latter topic generalize a theorem of Fischer (SICOMP 2005), and in the process we also provide a simpler proof of his original result which avoids the use of Szemeredi's regularity lemma.
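To make the notion concrete, here is the exhaustive isomorphism check that the testers above avoid (our own sketch; it tries all n! permutations and reads both functions everywhere, whereas the theorem gives testers whose query count is independent of n):

    from itertools import permutations, product

    def are_isomorphic(f, g, n):
        # f and g are isomorphic if g equals f composed with some
        # permutation of the n input variables.
        points = list(product((0, 1), repeat=n))
        return any(
            all(g(x) == f(tuple(x[p[i]] for i in range(n))) for x in points)
            for p in permutations(range(n)))

    f = lambda x: x[0] and not x[1]
    g = lambda x: x[1] and not x[0]
    print(are_isomorphic(f, g, 2))  # True: swap the two variables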
Author(s): Guy Moshkovitz (Tel Aviv University)
Abstract: In this paper we present a combinatorial approach for proving complexity lower bounds; we mainly focus on the following instantiation of it. Consider a pair of properties of $m$-edge regular hypergraphs. Suppose they are "indistinguishable" with respect to hypergraphs with $m-t$ edges, in the sense that every such hypergraph has the same number of super-hypergraphs satisfying each property. Roughly speaking, we show that finding a pair of distinct such properties implies an $m/(t-1)$ lower bound on the rank of explicit tensors. In particular, showing the above for $3$-hypergraphs implies a lower bound of $\Omega(m/(t-1))$ for general arithmetic circuits.We also show, albeit non-explicitly, that near-optimal rank lower bounds can be obtained in this manner. Furthermore, we consider the $t=2$ case and prove that it already implies non-trivial lower bounds. In particular, we derive a (tight) lower bound of $3n/2$ on the rank of $n\times n\times n$ tensors naturally associated with hypergraph trees (which apparently was not known before; in fact, our bound also applies to the so-called border rank, and as such, essentially matches the best lower bounds known).
Author(s): Richard J. Lipton (Georgia Institute of Technology), Ryan Williams (Stanford University)
Abstract: We give a self-reduction for the Circuit Evaluation problem, and prove the following consequences.
-- Amplifying Size-Depth Lower Bounds. If CircEval is in SIZEDEPTH[n^k, n^{1-delta}] for some k and delta > 0, then for every eps > 0 there is a delta' > 0 such that CircEval is in SIZEDEPTH[n^{1+eps}, n^{1-delta'}]. Moreover, the resulting circuits require only O(n^{eps}) bits of non-uniformity. As a consequence, strong enough depth lower bounds for Circuit Evaluation imply a full separation of P and NC (even with a weak size lower bound).
-- Uniform Lower Bounds for QBF. Let c, d > 1 and e < 1 satisfy c < (1-e + d)/d. Either the quantified Boolean formula problem (QBF) is not solvable in n^c time, or the Circuit Evaluation problem cannot be solved with circuits of n^d size and n^e depth. This implies unconditional polynomial-time uniform NC lower bounds for solving QBF.
Author(s): Yuval Filmus (University of Toronto), Massimo Lauria (Sapienza - University of Rome), Jakob Nordström (KTH Royal Institute of Technology), Neil Thapen (Institute of Mathematics, AS CR), Noga Zewi (Technion -- Israel Institute of Technology)
Abstract: During the last decade, an active line of research in proof complexity has been to study space complexity and time-space trade-offs for proofs. Besides being a natural complexity measure of intrinsic interest, space is also an important issue in SAT solving, and so research has mostly focused on weak systems that are used by SAT solvers. There has been a relatively long sequence of papers on space in resolution and resolution-based proof systems, and it is probably fair to say that resolution is reasonably well understood from this point of view. For other natural candidates to study, however, such as polynomial calculus or cutting planes, very little has been known. We are not aware of any nontrivial space lower bounds for cutting planes, and for polynomial calculus the only lower bound has been for CNF formulas of unbounded width in [Alekhnovich et al.'02], where the space lower bound is smaller than the initial width of the clauses in the formulas. Thus, in particular, it has been consistent with current knowledge that polynomial calculus could be able to refute any k-CNF formula in constant space.
In this paper, we prove several new results on space in polynomial calculus (PC), and in the extended proof system polynomial calculus resolution (PCR) studied in [Alekhnovich et al.'02]:
1. We prove an Omega(n) space lower bound in PC for the canonical 3-CNF version of the pigeonhole principle formulas PHP(m,n) with m pigeons and n holes, and show that this is tight (a standard CNF encoding of the principle is sketched after this list).
2. For PCR, we prove an Omega(n) space lower bound for a bitwise encoding of the functional pigeonhole principle with m pigeons and n holes. These formulas have width O(log(n)), and so this is an exponential improvement over [Alekhnovich et al.'02] measured in the width of the formulas.
3. We then present another encoding of a version of the pigeonhole principle that has constant width, and prove an Omega(n) space lower bound in PCR for these formulas as well.
4. Finally, we prove that any k-CNF formula can be refuted in PC in simultaneous exponential size and linear space (which holds for resolution and thus for PCR, but was not obviously the case for PC). We also characterize a natural class of CNF formulas for which the space complexity in resolution and PCR does not change when the formula is transformed into a 3-CNF in the canonical way, something that we believe can be useful when proving PCR space lower bounds for other well-studied formula families in proof complexity.
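As mentioned in item 1, the pigeonhole principle has a standard CNF encoding; a small generator sketch (our own code, with DIMACS-style signed literals; the paper's 3-CNF and bitwise variants re-encode these constraints) makes the formula family concrete:

    from itertools import combinations

    def php_clauses(m, n):
        # Pigeonhole principle PHP(m, n): variable p(i, j) = i*n + j + 1
        # says pigeon i sits in hole j. Unsatisfiable whenever m > n.
        p = lambda i, j: i * n + j + 1
        clauses = [[p(i, j) for j in range(n)] for i in range(m)]   # every pigeon gets a hole
        clauses += [[-p(i1, j), -p(i2, j)]                          # no hole hosts two pigeons
                    for j in range(n)
                    for i1, i2 in combinations(range(m), 2)]
        return clauses

    print(len(php_clauses(3, 2)))  # 3 pigeon clauses + 6 conflict clauses = 9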