Abstract: We study the complexity of representing polynomials by arithmetic circuits in both the commutative and the non-commutative settings. Our approach goes through a precise understanding of the more restricted setting where multiplication is not associative, meaning that we distinguish (xy)z from x(yz). Our first and main conceptual result is a characterization result: we show that the size of the smallest circuit computing a given non-associative polynomial is exactly the rank of a matrix constructed from the polynomial, called its Hankel matrix. This result applies to the class of all circuits in both the commutative and non-commutative settings, and can be seen as an extension of the seminal result of Nisan giving a similar characterization for non-commutative algebraic branching programs. The study of the Hankel matrix provides a unifying approach for proving lower bounds for polynomials in the (classical) associative setting. Our key technical contribution is to provide generic lower bound theorems based on analyzing and decomposing the Hankel matrix. We obtain significant improvements on lower bounds for circuits with many parse trees, in both the (associative) commutative and non-commutative settings, as well as alternative proofs of recent superpolynomial and exponential lower bounds for different classes of circuits, obtained as corollaries of our characterization and decomposition results. PubDate: 2021-10-08 DOI: 10.1007/s00037-021-00214-1

Abstract: This paper is motivated by seeking the exact complexity of resolution refutations of Tseitin formulas. We prove that the size of any regular resolution refutation of a Tseitin formula \({\rm T}(G, c)\) based on a connected graph \(G =(V, E)\) is at least \(2^{\Omega({\rm tw}(G)/\log|V|)}\), where \({\rm tw}(G)\) denotes the treewidth of a graph G. For constant-degree graphs, there is a known upper bound \(2^{\mathcal{O}({\rm tw}(G))}{\rm poly}(|V|)\) (Alekhnovich & Razborov, Comput. Compl. 2011; Galesi, Talebanfard & Torán, ACM Trans. Comput. Theory 2020), so our lower bound is tight up to a logarithmic factor in the exponent. Our proof consists of two steps. First, we show that any regular resolution refutation of an unsatisfiable Tseitin formula \({\rm T}(G, c)\) of size S can be converted to a read-once branching program of size \(S^{\mathcal{O}(\log|V|)}\) computing a satisfiable Tseitin formula \({\rm T}(G,c')\), and this bound is tight. Second, we give an exact characterization of the nondeterministic read-once branching program (1-NBP) complexity of satisfiable Tseitin formulas in terms of structural properties of the underlying graphs. Namely, we introduce a new graph measure, the component width (\({\rm compw}\)), and show that the size of a minimal \(1\text{-}\mathrm{NBP}\) computing a satisfiable Tseitin formula \({\rm T}(G,c')\) based on a graph \(G = (V, E)\) equals \(2^{{\rm compw}(G)}\) up to a polynomial factor. Then we show that \(\Omega({\rm tw}(G)) \le {\rm compw}(G) \le \mathcal{O}({\rm tw}(G)\log|V|)\) and both of these bounds are tight. The lower bound improves the recent result by Glinskih & Itsykson (Theory Comput. Syst. 2021). PubDate: 2021-08-27 DOI: 10.1007/s00037-021-00213-2
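
A Tseitin formula \({\rm T}(G, c)\) has one Boolean variable per edge of G and, for each vertex v, a parity constraint requiring the incident edge variables to sum to the charge c(v) mod 2; it is satisfiable exactly when the charges sum to zero mod 2 on every connected component. A minimal brute-force sketch of this criterion (the triangle graph and charges below are our own illustrative example, not from the paper):

```python
from itertools import product

def tseitin_satisfiable(vertices, edges, charge):
    """Brute-force satisfiability of the Tseitin formula T(G, c):
    one Boolean variable per edge; for each vertex v, the parity of its
    incident edge variables must equal charge[v]."""
    for assignment in product([0, 1], repeat=len(edges)):
        if all(
            sum(assignment[i] for i, e in enumerate(edges) if v in e) % 2
            == charge[v]
            for v in vertices
        ):
            return True
    return False

# Triangle graph: satisfiable iff the total charge is even.
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
print(tseitin_satisfiable(V, E, {0: 0, 1: 1, 2: 1}))  # True  (charges sum to 0 mod 2)
print(tseitin_satisfiable(V, E, {0: 1, 1: 1, 2: 1}))  # False (odd total charge)
```

The second call is unsatisfiable because each edge variable contributes to exactly two vertex parities, so the vertex parities always sum to 0 mod 2, while the charges sum to 1.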

Abstract: Interactive proofs of proximity allow a sublinear-time verifier to check that a given input is close to the language, using a small amount of communication with a powerful (but untrusted) prover. In this work, we consider two natural minimally interactive variants of such proof systems, in which the prover only sends a single message, referred to as the proof. The first variant, known as MA-proofs of Proximity (MAP), is fully non-interactive, meaning that the proof is a function of the input only. The second variant, known as AM-proofs of Proximity (AMP), allows the proof to additionally depend on the verifier's (entire) random string. The complexity of both MAPs and AMPs is the total number of bits that the verifier observes, namely the sum of the proof length and the query complexity. Our main result is an exponential separation between the power of MAPs and AMPs. Specifically, we exhibit an explicit and natural property \(\Pi\) that admits an AMP with complexity \(O(\log n)\), whereas any MAP for \(\Pi\) has complexity \(\tilde{\Omega}(n^{1/4})\), where n denotes the length of the input in bits. Our MAP lower bound also yields an alternate proof, which is more general and arguably much simpler, of a recent result of Fischer et al. (ITCS, 2014). Also, Aaronson (Quantum Information & Computation 2012) has shown an \(\Omega(n^{1/6})\) QMA lower bound for the same property \(\Pi\). Lastly, we consider the notion of oblivious proofs of proximity, in which the verifier's queries are oblivious to the proof. In this setting, we show that AMPs can only be quadratically stronger than MAPs. As an application of this result, we show an exponential separation between the power of public-coin and private-coin oblivious interactive proofs of proximity. PubDate: 2021-08-18 DOI: 10.1007/s00037-021-00212-3

Abstract: We study the approximation of halfspaces \(h:\{0,1\}^n\to\{0,1\}\) in the infinity norm by polynomials and rational functions of any given degree. Our main result is an explicit construction of the “hardest” halfspace, for which we prove polynomial and rational approximation lower bounds that match the trivial upper bounds achievable for all halfspaces. This completes a lengthy line of work started by Myhill and Kautz (1961). As an application, we construct a communication problem that achieves essentially the largest possible separation, of O(n) versus \(2^{-\Omega(n)}\), between the sign-rank and discrepancy. Equivalently, our problem exhibits a gap of log n versus \(\Omega(n)\) between the communication complexity with unbounded versus weakly unbounded error, improving quadratically on previous constructions and completing a line of work started by Babai, Frankl, and Simon (FOCS 1986). Our results further generalize to the k-party number-on-the-forehead model, where we obtain an explicit separation of log n versus \(\Omega(n/4^{n})\) for communication with unbounded versus weakly unbounded error. PubDate: 2021-08-03 DOI: 10.1007/s00037-021-00211-4

Abstract: We investigate the power of randomness in two-party communication complexity. In particular, we study the model where the parties can make a constant number of queries to a function that has an efficient one-sided-error randomized protocol. The complexity classes defined by this model comprise the Randomized Boolean Hierarchy, which is analogous to the Boolean Hierarchy but defined with one-sided-error randomness instead of nondeterminism. Our techniques connect the Nondeterministic and Randomized Boolean Hierarchies, and we provide a complete picture of the relationships among complexity classes within and across these two hierarchies. In particular, we prove that the Randomized Boolean Hierarchy does not collapse, and we prove a query-to-communication lifting theorem for all levels of the Nondeterministic Boolean Hierarchy, which we use to resolve an open problem stated in the paper by Halstenberg and Reischuk (CCC 1988) that initiated the study of this hierarchy. PubDate: 2021-07-02 DOI: 10.1007/s00037-021-00210-5

Abstract: The authors would like to correct an error in their publication. PubDate: 2021-06-10 DOI: 10.1007/s00037-021-00208-z

Abstract: We look at the problem of blackbox polynomial identity testing (PIT) for the model of read-once oblivious algebraic branching programs (ROABPs), where the number of variables is logarithmic in the input size of the ROABP. We restrict the width of the ROABP to a constant and study the more general sum-of-ROABPs model. This model is nontrivial due to the arbitrary individual degrees. We give the first \({\rm poly}(s)\)-time blackbox PIT for sums of constantly many, size-\(s\), \(O(\log s)\)-variate, constant-width ROABPs. The previous best for this model was quasi-polynomial time (Gurjar et al., CCC'15; Computational Complexity'17), which is comparable to brute force in the log-variate setting. We also show that we can work with unboundedly many such ROABPs if each ROABP computes a homogeneous polynomial (or, more generally, for degree-preserving sums). We also give poly-time PIT for the border. We introduce two new techniques, both of which also work for the border versions of the stated models. (1) The leading-degree part of an ROABP can be made syntactically homogeneous in the same width. (2) There is a direct reduction from PIT of sums of ROABPs to PIT of a single ROABP (over any field). Our methods improve the time complexity for PIT of sums of ROABPs in the log-variate regime. PubDate: 2021-06-10 DOI: 10.1007/s00037-021-00209-y
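
For background, the randomized blackbox PIT baseline that derandomization results of this kind improve on is the Schwartz–Zippel test: query the unknown polynomial at random points, since a nonzero polynomial of total degree d vanishes at a point drawn uniformly from a set S with probability at most d/|S|. A hedged sketch of that baseline, not of the paper's deterministic algorithm (the example polynomials are our own):

```python
import random

def randomized_pit(poly, num_vars, degree, field_size, trials=20):
    """Schwartz-Zippel blackbox PIT: evaluate the polynomial, given only
    as a black box, at random points modulo a large prime.  A nonzero
    polynomial of total degree d vanishes at a uniformly random point
    with probability at most d / field_size."""
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(num_vars)]
        if poly(point) % field_size != 0:
            return "nonzero"
    return "probably zero"

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero.
zero_poly = lambda p: (p[0] + p[1])**2 - (p[0]**2 + 2*p[0]*p[1] + p[1]**2)
print(randomized_pit(zero_poly, 2, 2, 2**31 - 1))  # probably zero
```

The error probability per trial is at most degree/field_size, so with a prime of size \(2^{31}-1\) and 20 trials a nonzero polynomial is misclassified with only astronomically small probability.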

Abstract: We show a new connection between the clause space measure in tree-like resolution and the reversible pebble game on graphs. Using this connection, we provide several formula classes for which there is a logarithmic-factor separation between the clause space complexity measure in tree-like and general resolution. We also provide upper bounds for tree-like resolution clause space in terms of general resolution clause and variable space. In particular, we show that for any formula F, its tree-like resolution clause space is upper bounded by \({\rm space}(\pi)\cdot\log({\rm time}(\pi))\), where \(\pi\) is any general resolution refutation of F. This holds taking \({\rm space}(\pi)\) to be either the clause space or the variable space of the refutation. For the concrete case of Tseitin formulas, we are able to improve this bound to the optimal bound \({\rm space}(\pi)\cdot\log n\), where n is the number of vertices of the corresponding graph. PubDate: 2021-05-01 DOI: 10.1007/s00037-021-00206-1

Abstract: We study the problem of constructing explicit families of matrices that cannot be expressed as a product of a few sparse matrices. In addition to being a natural mathematical question in its own right, this problem appears in various incarnations in computer science, most significantly in the context of lower bounds for algebraic circuits computing linear transformations, matrix rigidity, and data structure lower bounds. We first show, for every constant d, a deterministic construction in time \({\rm exp}(n^{1-\Omega(1/d)})\) of a family \(\{M_n\}\) of \(n \times n\) matrices that cannot be expressed as a product \(M_n = A_1 \cdots A_d\) where the total sparsity of \(A_1,\ldots,A_d\) is less than \(n^{1+1/(2d)}\). In other words, any depth-d linear circuit computing the linear transformation \(M_n\cdot {\bf x}\) has size at least \(n^{1+\Omega(1/d)}\). The prior best lower bounds for this problem were barely super-linear and were obtained by a long line of research based on the study of super-concentrators. We improve these lower bounds at the cost of a blow-up in the time required to construct these matrices; previously, such constructions were not known even in time \(2^{O(n)}\) with the aid of an NP oracle. We then outline an approach for proving improved lower bounds through a certain derandomization problem, and use this approach to prove asymptotically optimal quadratic lower bounds for natural special cases, which generalize many of the common matrix decompositions. PubDate: 2021-04-02 DOI: 10.1007/s00037-021-00205-2

Abstract: For any finite Galois field extension K/F with Galois group \(G = {\rm Gal}(K/F)\), there exists an element \(\alpha \in K\) whose orbit \(G\cdot\alpha\) forms an F-basis of K. Such an \(\alpha\) is called a normal element, and \(G\cdot\alpha\) is a normal basis. We introduce a probabilistic algorithm for testing whether a given \(\alpha \in K\) is normal, when G is either a finite abelian or a metacyclic group. The algorithm is based on the fact that deciding whether \(\alpha\) is normal can be reduced to deciding whether \(\sum_{g \in G} g(\alpha)g \in K[G]\) is invertible; it requires a slightly subquadratic number of operations. Once we know that \(\alpha\) is normal, we show how to perform conversions between the power basis of K/F and the normal basis with the same asymptotic cost. PubDate: 2021-03-02 DOI: 10.1007/s00037-020-00204-9
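
To make the definitions concrete, here is a tiny worked example in GF(8)/GF(2), whose Galois group is generated by the Frobenius map \(\alpha \mapsto \alpha^2\): an element is normal exactly when its Frobenius orbit is linearly independent over GF(2). This direct rank test only illustrates the definition; it is not the paper's subquadratic group-algebra algorithm:

```python
MOD = 0b1011  # GF(2^3) modulus x^3 + x + 1; bit i of an element is its x^i coefficient

def gf_mul(a, b):
    """Multiply two GF(8) elements (carry-less multiply, reduced mod x^3 + x + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= MOD
    return r

def rank_gf2(rows):
    """Rank of a list of 3-bit row vectors over GF(2), by Gaussian elimination."""
    rows, rank = list(rows), 0
    for bit in (4, 2, 1):
        idx = next((i for i, r in enumerate(rows) if r & bit), None)
        if idx is None:
            continue
        pivot = rows.pop(idx)
        rows = [r ^ pivot if r & bit else r for r in rows]
        rank += 1
    return rank

def is_normal(alpha):
    """alpha is normal iff its Frobenius orbit {a, a^2, a^4} spans GF(8) over GF(2)."""
    orbit = [alpha]
    for _ in range(2):
        orbit.append(gf_mul(orbit[-1], orbit[-1]))  # Frobenius = squaring
    return rank_gf2(orbit) == 3

print([a for a in range(8) if is_normal(a)])  # the three normal elements of GF(8)
```

Exactly three of the eight elements turn out to be normal here; they form a single Frobenius orbit, as squaring a normal element yields another normal element.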

Abstract: We establish an exactly tight relation between reversible pebblings of graphs and Nullstellensatz refutations of pebbling formulas, showing that a graph G can be reversibly pebbled in time t and space s if and only if there is a Nullstellensatz refutation of the pebbling formula over G in size t + 1 and degree s (independently of the field in which the Nullstellensatz refutation is made). We use this correspondence to prove a number of strong size-degree trade-offs for Nullstellensatz, which to the best of our knowledge are the first such results for this proof system. PubDate: 2021-02-12 DOI: 10.1007/s00037-020-00201-y
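
In the reversible pebble game referenced above, a pebble may be placed on or removed from a vertex of a DAG only while all of its immediate predecessors carry pebbles; time is the number of moves and space is the maximum number of pebbles held simultaneously. A small illustrative simulator (the path graph and move sequence are our own example, not from the paper):

```python
def run_reversible_pebbling(preds, moves):
    """Simulate a reversible pebbling.  Each move toggles the pebble on one
    vertex, and is legal only while all predecessors of that vertex are
    pebbled.  Returns (time, space) = (number of moves, max simultaneous
    pebbles), raising ValueError on an illegal move."""
    pebbled, space = set(), 0
    for v in moves:
        if not all(p in pebbled for p in preds.get(v, [])):
            raise ValueError(f"illegal move on {v}")
        pebbled.symmetric_difference_update({v})  # place or remove
        space = max(space, len(pebbled))
    return len(moves), space

# Path 1 -> 2 -> 3: pebble the sink 3, then clean up the earlier pebbles
# (removal needs the same predecessors pebbled as placement did).
preds = {2: [1], 3: [2]}
moves = [1, 2, 1, 3, 1, 2, 1]
print(run_reversible_pebbling(preds, moves))  # (7, 3)
```

The sequence ends with only the sink pebbled, using 7 moves and 3 pebbles; by the correspondence in the abstract, it would yield a Nullstellensatz refutation of the path's pebbling formula of size 8 and degree 3.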

Abstract: A stochastic code is a pair of encoding and decoding procedures (Enc, Dec) where \({{\rm Enc} : \{0, 1\}^{k} \times \{0, 1\}^{d} \rightarrow \{0, 1\}^{n}}\). The code is (p, L)-list decodable against a class \(\mathcal{C}\) of “channel functions” \(C : \{0,1\}^{n} \rightarrow \{0,1\}^{n}\) if for every message \(m \in \{0,1\}^{k}\) and every channel \(C \in \mathcal{C}\) that induces at most pn errors, applying Dec to the “received word” C(Enc(m,S)) produces a list of at most L messages that contains m with high probability over the choice of a uniform \(S \leftarrow \{0, 1\}^{d}\). Note that neither the channel C nor the decoding algorithm Dec receives the random variable S when attempting to decode. The rate of a code is \(R = k/n\), and a code is explicit if Enc and Dec run in time poly(n). Guruswami and Smith (Journal of the ACM, 2016) showed that for all constants \(0 < p < \frac{1}{2}\), \(\epsilon > 0\) and \(c > 1\) there exist a constant L and a Monte Carlo explicit construction of stochastic codes with rate \(R \geq 1-H(p) - \epsilon\) that are (p, L)-list decodable for size-\(n^c\) channels. Here, Monte Carlo means that the encoding and decoding need to share a public uniformly chosen \({\rm poly}(n^c)\)-bit string Y, and the constructed stochastic code is (p, L)-list decodable with high probability over the choice of Y. Guruswami and Smith posed the open problem of giving fully explicit (that is, not Monte Carlo) codes with the same parameters, under hardness assumptions. In this paper, we resolve this open problem using a minimal assumption: the existence of poly-time computable pseudorandom generators for small circuits, which follows from standard complexity assumptions by Impagliazzo and Wigderson (STOC 97). Guruswami and Smith also asked for fully explicit unconditional constructions with the same parameters against \(O(\log n)\)-space online channels. (These are channels that have space \(O(\log n)\) and are allowed to read the input codeword in one pass.) We also resolve this open problem. Finally, we consider a tighter notion of explicitness, in which the running time of the encoding and list-decoding algorithms does not increase when the complexity of the channel increases. We give explicit constructions (with rate approaching \(1 - H(p)\) for every \(p \leq p_{0}\), for some \(p_{0} > 0\)) for channels that are circuits of size \(2^{n^{\Omega(1/d)}}\) and depth d. Here, the running time of encoding and decoding is a polynomial that does not depend on the dept... PubDate: 2021-01-20 DOI: 10.1007/s00037-020-00203-w

Abstract: Resolution over linear equations is a natural extension of the popular resolution refutation system, augmented with the ability to carry out basic counting. Denoted \({\rm Res}({\rm lin}_R)\), this refutation system operates with disjunctions of linear equations with Boolean variables over a ring R, to refute unsatisfiable sets of such disjunctions. Beginning with the work of Raz & Tzameret (2008), and through the work of Itsykson & Sokolov (2020), which focused on tree-like lower bounds, this refutation system was shown to be fairly strong. Subsequent work (cf. Garlik & Kołodziejczyk 2018; Itsykson & Sokolov 2020; Krajícek 2017; Krajícek & Oliveira 2018) made it evident that establishing lower bounds against general \({\rm Res}({\rm lin}_R)\) refutations is a challenging and interesting task, since the system captures a ``minimal'' extension of resolution with counting gates for which no super-polynomial lower bounds are known to date. We provide the first super-polynomial size lower bounds against general (dag-like) resolution over linear equations refutations in the large characteristic regime. In particular, we prove that the subset-sum principle \(1+\sum\nolimits_{i=1}^{n}2^i x_i = 0\) requires refutations of exponential size over \(\mathbb{Q}\). We use a novel lower bound technique: we show that under certain conditions every refutation of a subset-sum instance \(f=0\) must pass through a fat clause consisting of the equations \(f=\alpha\) for every \(\alpha\) in the image of f under Boolean assignments, or can be efficiently reduced to a proof containing such a clause. We then modify this approach to prove exponential lower bounds against tree-like refutations of any subset-sum instance that depends on n variables, hence also separating tree-like from dag-like refutations over the rationals.
We then turn to the finite fields regime, showing that the work of Itsykson & Sokolov (2020), where tree-like lower bounds over \(\mathbb{F}_2\) were obtained, can be carried over and extended to every finite field. We establish new lower bounds and separations as follows: (i) for every pair of distinct primes \(p,q\), there exist CNF formulas with short tree-like refutations in \({\rm Res}({\rm lin}_{\mathbb{F}_p})\) that require exponential-size tree-like \({\rm Res}({\rm lin}_{\mathbb{F}_q})\) refutations; (ii) random k-CNF formulas require exponential-size tree-like \({\rm Res}({\rm lin}_{\mathbb{F}_p})\) refutations, for every prime p and constant k; and (iii) exponential-size lower bounds for tree-like \({\rm Res}({\rm lin}_{\mathbb{F}})\) refutations of the pigeonhole principle, for every field \(\mathbb{F}\). PubDate: 2021-01-08 DOI: 10.1007/s00037-020-00202-x
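
The unsatisfiability of the subset-sum principle itself is elementary: every term \(2^i x_i\) is nonnegative and the constant is 1, so the left-hand side is always at least 1. That is what makes the exponential lower bound on refuting it striking. A brute-force check of this fact for small n:

```python
from itertools import product

def subset_sum_unsat(n):
    """Verify by brute force that 1 + sum_{i=1}^n 2^i * x_i = 0 has no
    Boolean solution: the left-hand side is always at least 1."""
    return all(
        1 + sum(2**i * x for i, x in enumerate(assignment, start=1)) != 0
        for assignment in product([0, 1], repeat=n)
    )

print(subset_sum_unsat(6))  # True
```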

Abstract: Probabilistically checkable proofs (PCPs) can be verified based only on a constant number of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the given alleged proof). We consider two possible features of PCPs: (i) a PCP is strong if it rejects an alleged proof of a correct claim with probability proportional to its distance from some correct proof of that claim; (ii) a PCP is smooth if each location in a proof is queried with equal probability. We prove that all sets in \(\mathcal{NP}\) have PCPs that are both smooth and strong, are of polynomial length, and can be verified based on a constant number of queries. This is achieved by following the proof of the PCP theorem of Arora et al. (JACM 45(3):501–555, 1998), providing a stronger analysis of the Hadamard and Reed–Muller based PCPs and a refined PCP composition theorem. In fact, we show that any set in \(\mathcal{NP}\) has a smooth strong canonical PCP of Proximity (PCPP), meaning that there is an efficiently computable bijection of \(\mathcal{NP}\) witnesses to correct proofs. This improves on the recent construction of Dinur et al. (in: Blum (ed) 10th Innovations in Theoretical Computer Science Conference, ITCS, San Diego, 2019) of PCPPs that are strong canonical but inherently non-smooth. Our result implies the hardness of approximating the satisfiability of “stable” 3CNF formulae with bounded variable occurrence, where stable means that the number of clauses violated by an assignment is proportional to its distance from a satisfying assignment (in the relative Hamming metric). This proves a hypothesis used in the work of Friggstad, Khodamoradi and Salavatipour (in: Chan (ed) Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, San Diego, 2019), suggesting a connection between the hardness of these instances and other stable optimization problems.
PubDate: 2021-01-06 DOI: 10.1007/s00037-020-00199-3

Abstract: Given two matroids on the same ground set, the matroid intersection problem asks to find a common independent set of maximum size. In the case of linear matroids, the problem had a randomized parallel algorithm but no deterministic one. We give an almost complete derandomization of this algorithm, which implies that the linear matroid intersection problem is in quasi-NC. That is, it has uniform circuits of quasi-polynomial size \(n^{O(\log n)}\) and O(polylog(n)) depth. Moreover, the depth of the circuit can be reduced to \(O(\log^2 n)\) over fields of characteristic zero. This generalizes a similar result for the bipartite perfect matching problem. Our main technical contribution is to derandomize the isolation lemma for the family of common bases of two matroids. We use our isolation result to give a quasi-polynomial-time blackbox algorithm for a special case of Edmonds' problem, i.e., singularity testing of a symbolic matrix, when the given matrix is of the form \(A_{0} + A_{1}x_{1} + \cdots + A_{m} x_{m}\), for an arbitrary matrix \(A_{0}\) and rank-1 matrices \(A_{1}, A_{2}, \dots, A_{m}\). This can also be viewed as a blackbox polynomial identity testing algorithm for the corresponding determinant polynomial. Another consequence of this result is a deterministic solution to the maximum rank matrix completion problem. Finally, we use our result to find a deterministic representation for the union of linear matroids in quasi-NC. PubDate: 2020-11-19 DOI: 10.1007/s00037-020-00200-z

Abstract: In two papers, Bürgisser and Ikenmeyer (STOC 2011, STOC 2013) used an adaptation of the geometric complexity theory (GCT) approach of Mulmuley and Sohoni (SIAM J Comput 2001, 2008) to prove lower bounds on the border rank of the matrix multiplication tensor. A key ingredient was information about certain Kronecker coefficients. While tensors are an interesting test bed for GCT ideas, the far-away goal is the separation of algebraic complexity classes. The role of the Kronecker coefficients in that setting is taken by the so-called plethysm coefficients: these are the multiplicities in the coordinate rings of spaces of polynomials. Even though several hardness results for Kronecker coefficients are known, there are almost no results about the complexity of computing the plethysm coefficients or even deciding their positivity. In this paper, we show that deciding positivity of plethysm coefficients is NP-hard and that computing plethysm coefficients is #P-hard. In fact, both problems remain hard even if the inner parameter of the plethysm coefficient is fixed. In this way, we obtain an inner versus outer contrast: if the outer parameter of the plethysm coefficient is fixed, then the plethysm coefficient can be computed in polynomial time. Moreover, we derive new lower and upper bounds, and in special cases even combinatorial descriptions, for plethysm coefficients, which we consider to be of independent interest. Our technique uses discrete tomography in a more refined way than the recent work on Kronecker coefficients by Ikenmeyer, Mulmuley, and Walter (Comput Compl 2017). This makes our work the first to apply techniques from discrete tomography to the study of plethysm coefficients. Quite surprisingly, this interpretation also leads to new equalities between certain plethysm coefficients and Kronecker coefficients. PubDate: 2020-11-04 DOI: 10.1007/s00037-020-00198-4

Abstract: We show that the counting class LWPP remains unchanged even if one allows a polynomial number of gap values rather than one. On the other hand, we show that it is impossible to improve this from polynomially many gap values to a superpolynomial number of gap values by relativizable proof techniques. The first of these results implies that the Legitimate Deck Problem (from the study of graph reconstruction) is in LWPP (and thus low for PP, i.e., \({\rm PP}^{\rm Legitimate\,Deck} = {\rm PP}\)) if the weakened version of the Reconstruction Conjecture holds in which the number of nonisomorphic preimages is assumed merely to be polynomially bounded. This strengthens the 1992 result of Köbler, Schöning & Torán that the Legitimate Deck Problem is in LWPP if the Reconstruction Conjecture holds, and provides strengthened evidence that the Legitimate Deck Problem is not NP-hard. We additionally show, on the one hand, that our LWPP robustness result also holds for WPP, and holds even when one allows both the rejection and acceptance gap-value targets to simultaneously be polynomial-sized lists; yet on the other hand, we show that for the \(\#{\rm P}\)-based analogue of LWPP the behavior differs greatly in that, in some relativized worlds, even two target values already yield a richer class than one value does. Despite this nonrobustness result for a \(\#{\rm P}\)-based class, we show that the \(\#{\rm P}\)-based “exact counting” class \({\rm C}_{=}{\rm P}\) remains unchanged even if one allows a polynomial number of target values for the number of accepting paths of the machine. PubDate: 2020-10-29 DOI: 10.1007/s00037-020-00197-5

Abstract: We show that strong enough lower bounds on monotone arithmetic circuits or on the nonnegative rank of a matrix imply unconditional lower bounds in arithmetic or Boolean circuit complexity. First, we show that if a polynomial \(f\in \mathbb {R}[x_1,\dots , x_n]\) of degree d has an arithmetic circuit of size s, then \((x_1+\dots +x_n+1)^d+\epsilon f\) has a monotone arithmetic circuit of size \(O(sd^2+n\log n)\), for some \(\epsilon >0\). Second, if \(f:\{0,1\}^n\rightarrow \{0,1\}\) is a Boolean function, we associate with f an explicit exponential-size matrix M(f) such that the Boolean circuit size of f is at least \(\varOmega (\min _{\epsilon >0}(\mathrm{rk}_{+}(M(f)-\epsilon J))- 2n)\), where J is the all-ones matrix and \(\mathrm{rk}_{+}\) denotes the nonnegative rank of a matrix. In fact, the quantity \(\min _{\epsilon >0}(\mathrm{rk}_{+}(M(f)-\epsilon J))\) characterizes how hard it is to distinguish the rejecting and accepting inputs of f by means of a linear program. Finally, we introduce a proof system resembling the monotone calculus of Atserias et al. (J Comput Syst Sci 65:626–638, 2002) and show that similar \(\epsilon\)-sensitive lower bounds on monotone arithmetic circuits imply lower bounds on proof size in this system. PubDate: 2020-07-25 DOI: 10.1007/s00037-020-00196-6

Abstract: The 2-closure \(\overline{G}\) of a permutation group G on \(\Omega\) is defined to be the largest permutation group on \(\Omega\) having the same orbits on \(\Omega \times \Omega\) as G. It is proved that if G is supersolvable, then \(\overline{G}\) can be found in time polynomial in \(|\Omega|\). As a by-product of our technique, it is shown that the composition factors of \(\overline{G}\) are cyclic or alternating. PubDate: 2020-06-24 DOI: 10.1007/s00037-020-00195-7

Abstract: One of the major open problems in complexity theory is proving super-logarithmic lower bounds on the depth of circuits (i.e., \(\textbf{P} \not\subseteq \textbf{NC}^{1}\)). Karchmer, Raz, and Wigderson (Computational Complexity 5(3/4):191–204, 1995) suggested approaching this problem by proving that depth complexity behaves ``as expected'' with respect to the composition of functions f ◊ g. They showed that the validity of this conjecture would imply that \(\textbf{P} \not\subseteq \textbf{NC}^{1}\). As a way to realize this program, Edmonds et al. (Computational Complexity 10(3):210–246, 2001) suggested studying the ``multiplexor relation'' MUX. In this paper, we present two results regarding this relation: (1) the multiplexor relation is ``complete'' for the approach of Karchmer et al. in the following sense: if we could prove (a variant of) their conjecture for the composition f ◊ MUX for every function f, then this would imply \(\textbf{P} \not\subseteq \textbf{NC}^{1}\); (2) a simpler proof of a lower bound for the multiplexor relation due to Edmonds et al. Our proof has the additional benefit of fitting better with the machinery used in previous works on the subject. PubDate: 2020-06-06 DOI: 10.1007/s00037-020-00194-8