Circuit complexity

from Wikipedia
Example Boolean circuit. The ∧ nodes are AND gates, the ∨ nodes are OR gates, and the ¬ nodes are NOT gates.

In theoretical computer science, circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of the Boolean circuits that compute them. A related notion is the circuit complexity of a recursive language that is decided by a uniform family of circuits (see below).

Proving lower bounds on the size of Boolean circuits computing explicit Boolean functions is a popular approach to separating complexity classes. For example, a prominent circuit class P/poly consists of Boolean functions computable by circuits of polynomial size. Proving that NP ⊄ P/poly would separate P and NP (see below).

Complexity classes defined in terms of Boolean circuits include AC0, AC, TC0, NC1, NC, and P/poly.

Size and depth


A Boolean circuit with n input bits is a directed acyclic graph in which every node (usually called a gate in this context) is either an input node of in-degree 0 labelled by one of the n input bits, an AND gate, an OR gate, or a NOT gate. One of these gates is designated as the output gate. Such a circuit naturally computes a function of its n inputs. The size of a circuit is the number of gates it contains and its depth is the maximal length of a path from an input gate to the output gate.
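As an illustration, a circuit can be stored as a list of nodes in topological order, with size counting the gates and depth the longest input-to-output path. This is a minimal sketch with a hypothetical list encoding, not a construction from the article:

```python
# Circuit as a list of nodes: ("INPUT", i), ("NOT", a), ("AND", a, b),
# ("OR", a, b), where a, b index earlier nodes (acyclic by construction).
# The last node is taken to be the designated output gate.

def evaluate(circuit, bits):
    """Evaluate every node in order and return the output gate's value."""
    vals = []
    for node in circuit:
        op = node[0]
        if op == "INPUT":
            vals.append(bits[node[1]])
        elif op == "NOT":
            vals.append(1 - vals[node[1]])
        elif op == "AND":
            vals.append(vals[node[1]] & vals[node[2]])
        else:  # "OR"
            vals.append(vals[node[1]] | vals[node[2]])
    return vals[-1]

def size(circuit):
    """Size = number of gates (input nodes are not counted as gates)."""
    return sum(1 for node in circuit if node[0] != "INPUT")

def depth(circuit):
    """Depth = maximal length of a path from an input node to the output."""
    d = []
    for node in circuit:
        d.append(0 if node[0] == "INPUT" else 1 + max(d[i] for i in node[1:]))
    return d[-1]

# XOR(x0, x1) = (x0 OR x1) AND NOT (x0 AND x1): size 4, depth 3.
xor = [("INPUT", 0), ("INPUT", 1),
       ("OR", 0, 1), ("AND", 0, 1), ("NOT", 3), ("AND", 2, 4)]
```

Evaluating `xor` on all four input pairs reproduces the XOR truth table.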

There are two major notions of circuit complexity.[1] The circuit-size complexity of a Boolean function f is the minimal size of any circuit computing f. The circuit-depth complexity of a Boolean function f is the minimal depth of any circuit computing f.
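For very small inputs the circuit-size complexity can be found exhaustively. The following brute-force sketch (illustrative, not from the article) searches over circuits on two input bits over the basis {AND, OR, NOT}, representing every wire by its 4-entry truth table:

```python
# Breadth-first search over the sets of truth tables computable with a given
# number of gates. Truth tables are 4-tuples of outputs on the inputs
# (0,0), (0,1), (1,0), (1,1).

X0, X1 = (0, 0, 1, 1), (0, 1, 0, 1)  # truth tables of the two input wires

def min_circuit_size(target, max_gates=6):
    """Smallest number of AND/OR/NOT gates computing `target`, or None."""
    level = {frozenset((X0, X1))}  # wire sets reachable with `gates` gates
    for gates in range(max_gates + 1):
        if any(target in wires for wires in level):
            return gates
        nxt = set()
        for wires in level:
            for a in wires:
                nxt.add(wires | {tuple(1 - v for v in a)})                 # NOT
                for b in wires:
                    nxt.add(wires | {tuple(x & y for x, y in zip(a, b))})  # AND
                    nxt.add(wires | {tuple(x | y for x, y in zip(a, b))})  # OR
        level = nxt
    return None

AND, XOR = (0, 0, 0, 1), (0, 1, 1, 0)
```

On this basis the search reports size 1 for AND and size 4 for XOR, e.g. realized as (x0 OR x1) AND NOT (x0 AND x1).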

These notions generalize when one considers the circuit complexity of any language that contains strings with different bit lengths, especially infinite formal languages. Boolean circuits, however, only allow a fixed number of input bits. Thus, no single Boolean circuit is capable of deciding such a language. To account for this possibility, one considers families of circuits (C_1, C_2, …) where each C_n accepts inputs of size n. Each circuit family will naturally generate a language, with the circuit C_n outputting 1 when a length-n string is a member of the language, and 0 otherwise. We say that a family of circuits is size minimal if there is no other family that decides on inputs of any size, n, with a circuit of smaller size than C_n (respectively for depth minimal families). Thus, circuit complexity is meaningful even for non-recursive languages. The notion of a uniform family enables variants of circuit complexity to be related to algorithm-based complexity measures of recursive languages. However, the non-uniform variant is helpful to find lower bounds on how complex any circuit family must be in order to decide given languages.

Hence, the circuit-size complexity of a formal language A is defined as the function t : ℕ → ℕ that relates the bit length of an input, n, to the circuit-size complexity of a minimal circuit C_n that decides whether inputs of that length are in A. The circuit-depth complexity is defined similarly.

Uniformity


Boolean circuits are one of the prime examples of so-called non-uniform models of computation in the sense that inputs of different lengths are processed by different circuits, in contrast with uniform models such as Turing machines where the same computational device is used for all possible input lengths. An individual computational problem is thus associated with a particular family of Boolean circuits C_1, C_2, … where each C_n is the circuit handling inputs of n bits. A uniformity condition is often imposed on these families, requiring the existence of some possibly resource-bounded Turing machine that, on input n, produces a description of the individual circuit C_n. When this Turing machine has a running time polynomial in n, the circuit family is said to be P-uniform. The stricter requirement of DLOGTIME-uniformity is of particular interest in the study of shallow-depth circuit classes such as AC0 or TC0. When no resource bounds are specified, a language is recursive (i.e., decidable by a Turing machine) if and only if the language is decided by a uniform family of Boolean circuits.

Polynomial-time uniform


A family of Boolean circuits {C_n : n ∈ ℕ} is polynomial-time uniform if there exists a deterministic Turing machine M, such that

  • M runs in polynomial time
  • For all n ∈ ℕ, M outputs a description of C_n on input 1^n
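Such a machine M can be sketched as follows (a hypothetical encoding, not from the article): given n, it prints a description of C_n in time polynomial in n. Here C_n decides the language {1^n} (all n input bits equal 1) with a chain of AND gates; any generator of this kind witnesses polynomial-time uniformity:

```python
def describe(n):
    """Gate list of C_n: ('INPUT', i) or ('AND', a, b); last node is output."""
    gates = [("INPUT", i) for i in range(n)]
    prev = 0
    for i in range(1, n):
        gates.append(("AND", prev, i))  # AND the running conjunction with x_i
        prev = len(gates) - 1
    return gates

def run(gates, bits):
    """Evaluate the described circuit on a concrete input."""
    vals = []
    for g in gates:
        vals.append(bits[g[1]] if g[0] == "INPUT" else vals[g[1]] & vals[g[2]])
    return vals[-1]
```

The loop in `describe` makes a constant number of steps per gate, so the description of C_n is produced in time O(n), well within polynomial time.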

Logspace uniform


A family of Boolean circuits {C_n : n ∈ ℕ} is logspace uniform if there exists a deterministic Turing machine M, such that

  • M runs in logarithmic work space (i.e. M is a log-space transducer)
  • For all n ∈ ℕ, M outputs a description of C_n on input 1^n

History


Circuit complexity goes back to Shannon in 1949,[2] who proved that almost all Boolean functions on n variables require circuits of size Θ(2^n/n). Despite this fact, complexity theorists have so far been unable to prove a superlinear lower bound for any explicit function.

Superpolynomial lower bounds have been proved under certain restrictions on the family of circuits used. The first function for which superpolynomial circuit lower bounds were shown was the parity function, which computes the sum of its input bits modulo 2. The fact that parity is not contained in AC0 was first established independently by Ajtai in 1983[3][4] and by Furst, Saxe and Sipser in 1984.[5] Later improvements by Håstad in 1987[6] established that any family of constant-depth circuits computing the parity function requires exponential size. Extending a result of Razborov,[7] Smolensky in 1987[8] proved that this is true even if the circuit is augmented with gates computing the sum of its input bits modulo some odd prime p.

The k-clique problem is to decide whether a given graph on n vertices has a clique of size k. For any particular choice of the constants n and k, the graph can be encoded in binary using n(n-1)/2 bits, which indicate for each possible edge whether it is present. Then the k-clique problem is formalized as a function f_k : {0,1}^{n(n-1)/2} → {0,1} such that f_k outputs 1 if and only if the graph encoded by the string contains a clique of size k. This family of functions is monotone and can be computed by a family of circuits, but it has been shown that it cannot be computed by a polynomial-size family of monotone circuits (that is, circuits with AND and OR gates but without negation). The original result of Razborov in 1985[7] was later improved to an exponential-size lower bound by Alon and Boppana in 1987.[9] In 2008, Rossman[10] showed that constant-depth circuits with AND, OR, and NOT gates require size Ω(n^{k/4}) to solve the k-clique problem even in the average case. Moreover, there is a circuit of size n^{k/4+O(1)} that computes f_k.

In 1999, Raz and McKenzie showed that the monotone NC hierarchy is infinite.[11]

The Integer Division Problem lies in uniform TC0.[12]

Circuit lower bounds


Circuit lower bounds are generally difficult. Known results include

  • Parity is not in nonuniform AC0, proved by Ajtai in 1983[3][4] as well as by Furst, Saxe and Sipser in 1984.[5]
  • Uniform TC0 is strictly contained in PP, proved by Allender.[13]
  • The classes S_2^p, PP[nb 1] and MA/1[14] (MA with one bit of advice) are not in SIZE(n^k) for any constant k.
  • While it is suspected that the nonuniform class ACC0 does not contain the majority function, it was only in 2010 that Williams proved that NEXPTIME ⊄ ACC0.[15]

It is open whether NEXPTIME has nonuniform TC0 circuits.

Proofs of circuit lower bounds are strongly connected to derandomization. A proof that P = BPP would imply that either NEXPTIME ⊄ P/poly or that the permanent cannot be computed by nonuniform arithmetic circuits (polynomials) of polynomial size and polynomial degree.[16]

In 1997, Razborov and Rudich showed that many known circuit lower bounds for explicit Boolean functions imply the existence of so-called natural properties useful against the respective circuit class.[17] On the other hand, natural properties useful against P/poly would break strong pseudorandom generators. This is often interpreted as a "natural proofs" barrier for proving strong circuit lower bounds. In 2016, Carmosino, Impagliazzo, Kabanets and Kolokolova proved that natural properties can also be used to construct efficient learning algorithms.[18]

Complexity classes


Many circuit complexity classes are defined in terms of class hierarchies. For each non-negative integer i, there is a class NCi, consisting of polynomial-size circuits of depth O(log^i n), using bounded fan-in AND, OR, and NOT gates. The union of all of these classes is the class NC. By considering unbounded fan-in gates, the classes ACi and AC (which is equal to NC) can be constructed. Many other circuit complexity classes with the same size and depth restrictions can be constructed by allowing different sets of gates.

Relation to time complexity


If a certain language, A, belongs to the time-complexity class TIME(t(n)) for some function t, then A has circuit complexity O(t(n)^2). If the Turing machine that accepts the language is oblivious (meaning that it reads and writes the same memory cells regardless of input), then A has circuit complexity O(t(n) log t(n)).[19]

Monotone circuits


A monotone Boolean circuit is one that has only AND and OR gates, but no NOT gates. A monotone circuit can only compute a monotone Boolean function, which is a function f where for every x ≤ y, f(x) ≤ f(y), where x ≤ y means that x_i ≤ y_i for all i.
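The definition can be tested directly on small functions. This is a small sketch (not from the article) that verifies f(x) ≤ f(y) for every bitwise-comparable pair x ≤ y:

```python
from itertools import product

def is_monotone(f, n):
    """Check the definition exhaustively over all pairs x <= y in {0,1}^n."""
    points = list(product((0, 1), repeat=n))
    return all(f(x) <= f(y)
               for x in points for y in points
               if all(xi <= yi for xi, yi in zip(x, y)))

def majority(x):   # computable by a monotone circuit (AND/OR gates only)
    return int(sum(x) > len(x) // 2)

def parity(x):     # not monotone: flipping a 0 to 1 can flip the output back
    return sum(x) % 2
```

Majority passes the check, while parity fails it, matching the fact that parity cannot be computed by any monotone circuit.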

from Grokipedia
Circuit complexity is a subfield of computational complexity theory that investigates the minimal resources, such as size and depth, required to compute Boolean functions using Boolean circuits composed of basic logic gates like AND, OR, and NOT.[1] These circuits are modeled as directed acyclic graphs where nodes represent gates with bounded or unbounded fan-in, inputs are binary variables, and the output computes a target function $f: \{0,1\}^n \to \{0,1\}$.[2] The size of a circuit is defined as the total number of gates, reflecting the computational effort, while the depth measures the longest path from inputs to output, indicating parallel computation time.[3]

Originating in the 1940s with Claude Shannon's foundational work on the complexity of switching circuits, the field has evolved to address fundamental questions in computation, including the separation of complexity classes like P and NP.[3] A key distinction is between non-uniform circuit families, which allow a different circuit for each input length $n$ and relate to the class P/poly, and uniform circuits, where the circuit for each $n$ can be generated by a polynomial-time Turing machine, aligning more closely with standard complexity classes like P.[1] Circuit complexity over different complete bases, such as bases including NAND gates, agrees up to constant factors for circuits of bounded fan-in.[1]

Notable results include superpolynomial lower bounds for constant-depth, polynomial-size circuits (the class AC^0): for example, the parity function requires superpolynomial size at constant depth, as proven by Furst, Saxe, Sipser, and Ajtai in the early 1980s.[3] Monotone circuit complexity, restricting circuits to AND and OR gates without negation, has yielded superpolynomial lower bounds for functions like the clique problem, with sizes at least $2^{\Omega(n^{1/3})}$.[2] These advances separate restricted classes like AC^0 from others, such as TC^0 or ACC^0, using methods like gate elimination and approximation by low-degree polynomials.[3]

Despite progress on restricted models, major open problems persist, including whether there exists a superpolynomial lower bound for general (unrestricted) circuits computing NP-complete problems, which would imply P ≠ NP.[1] The best known general lower bound remains linear, around 4.5n for certain functions, highlighting the field's challenges in proving strong separations.[3] Circuit complexity continues to influence areas like proof complexity and derandomization, providing non-uniform analogues to uniform models of computation.[2]

Basic Concepts

Boolean Circuits

In computational complexity theory, a Boolean circuit is formally defined as a directed acyclic graph (DAG) whose vertices are input nodes, gate nodes, or output nodes.[4] Input nodes correspond to the variables of the Boolean function being computed and have no incoming edges, while output nodes have no outgoing edges and represent the result. Gate nodes are labeled with Boolean operations, typically from the basis {∧ (AND), ∨ (OR), ¬ (NOT)}, and compute the corresponding function of the values propagating from their predecessors. Each circuit computes a Boolean function by assigning truth values (0 or 1) to the inputs and evaluating the gates in topological order until reaching the output.[4][5]

The fan-in of a gate refers to the number of incoming edges, determining its arity: NOT gates have fan-in 1, while AND and OR gates conventionally have fan-in 2 in standard models, though extensions allow unbounded fan-in for generality. Fan-out, conversely, is the number of outgoing edges from a gate, which can be unbounded, enabling the reuse of intermediate results across multiple subsequent gates, a key feature distinguishing circuits from tree-like formulas. This structure allows circuits to model efficient parallel computations by sharing subcomputations.[4]

For decision problems over languages, circuit complexity employs families of circuits {C_n}_{n ≥ 1}, where each C_n is a Boolean circuit with exactly n input nodes (corresponding to binary strings of length n) and a single output node that accepts (outputs 1) if and only if the input string belongs to the language. A language L ⊆ {0,1}^* is recognized by {C_n} if for every n and x ∈ {0,1}^n, C_n(x) = 1 precisely when x ∈ L. This framework captures non-uniform computation, where circuit designs can vary arbitrarily with input size.[4]

Simple examples illustrate the model: a NOT circuit consists of a single gate with one input and one output, inverting the input bit; an AND circuit has two input nodes feeding into a single AND gate outputting their conjunction; similarly, an OR circuit outputs the disjunction of two inputs. More complex functions arise via composition; for instance, the parity function (outputting 1 if the number of 1s in the n-bit input is odd) can be computed by a balanced binary tree of n-1 XOR gates, where each XOR is realized as (a ∧ ¬b) ∨ (¬a ∧ b) using AND, OR, and NOT.[4]

A fundamental computational task associated with Boolean circuits is the circuit value problem (CVP), which, given a circuit C, an input assignment to its variables, and a designated gate g, asks for the Boolean value computed at g under that assignment (or equivalently at the output for the standard variant). This problem is solvable in polynomial time by topological evaluation but is P-complete under log-space many-one reductions, making it a canonical hard problem for deterministic polynomial-time computation.[5]
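The polynomial-time evaluation behind CVP can be sketched as follows (a hypothetical dictionary encoding, not from the article; `graphlib` requires Python 3.9+): topologically sort the DAG, then evaluate each gate once its predecessors are known.

```python
from graphlib import TopologicalSorter

def circuit_value(circuit, inputs):
    """circuit: {name: ('INPUT', i) | ('NOT', a) | ('AND', a, b) | ('OR', a, b)}.
    Returns the dict of values computed at every gate."""
    deps = {g: (spec[1:] if spec[0] != "INPUT" else ())
            for g, spec in circuit.items()}
    vals = {}
    for g in TopologicalSorter(deps).static_order():  # predecessors come first
        op, *args = circuit[g]
        if op == "INPUT":
            vals[g] = inputs[args[0]]
        elif op == "NOT":
            vals[g] = 1 - vals[args[0]]
        elif op == "AND":
            vals[g] = vals[args[0]] & vals[args[1]]
        else:  # "OR"
            vals[g] = vals[args[0]] | vals[args[1]]
    return vals

# XOR with gates listed in arbitrary (non-topological) order:
xor_cvp = {"out": ("AND", "or1", "not1"),
           "not1": ("NOT", "and1"),
           "and1": ("AND", "x0", "x1"),
           "or1": ("OR", "x0", "x1"),
           "x0": ("INPUT", 0),
           "x1": ("INPUT", 1)}
```

Each gate is visited once, so the evaluation takes time linear in the circuit size, consistent with CVP being solvable in polynomial time.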

Size and Depth

In Boolean circuit complexity, the size of a circuit $C$, denoted $|C|$ or $s(C)$, is the total number of gates in the circuit.[4] This measure quantifies the overall computational resources required, analogous to the number of operations in a sequential algorithm. The depth of a circuit $C$, denoted $d(C)$, is the length of the longest directed path from an input node to an output node.[4] Depth captures the inherent parallelism of the computation, as it represents the minimum number of sequential steps needed when gates can be evaluated simultaneously in parallel.[6]

For a Boolean function $f: \{0,1\}^n \to \{0,1\}$, the circuit size $s(f)$ is the minimum $|C|$ over all circuits $C$ that compute $f$, and the circuit depth $d(f)$ is defined analogously as the minimum $d(C)$.[6] These extend to circuit families $\{C_n\}$ for functions on $n$ inputs, where the size $s(f(n))$ is the minimum over circuits $C_n$ computing $f$ on $n$ bits, and similarly for depth.[1] A circuit family has size $O(s(n))$ if $|C_n| \leq c \cdot s(n)$ for some constant $c > 0$ and all sufficiently large $n$.[7]

Size and depth exhibit fundamental trade-offs that reflect choices between resource efficiency and computational speed. A circuit with small size but large depth prioritizes minimal gates at the expense of longer sequential execution, as in serial computations where operations are chained linearly.[6] Conversely, balanced size and depth enable greater parallelism, allowing multiple operations to proceed concurrently, though often requiring more gates overall. For instance, the elementary schoolbook method for adding two $n$-bit numbers yields a circuit of size $O(n)$ but depth $O(n)$, simulating a serial carry propagation, while parallel addition techniques achieve depth $O(\log n)$ with size $O(n \log n)$.[6] In general, any circuit satisfies $d(C) \leq |C|$, illustrating how reducing depth can increase size.[7]

These measures define efficient circuits in non-uniform models, where polynomial size, specifically $s(f(n)) = O(n^k)$ for some constant $k$, indicates feasible computation for large inputs, even if depth is superlogarithmic.[4] Depth further refines efficiency by bounding parallel time, with logarithmic depth enabling highly concurrent evaluations suitable for hardware implementations.[6]
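The serial/parallel trade-off can be made concrete with n-bit parity instead of addition (a sketch with a hypothetical list encoding, not from the article): a left-to-right chain and a balanced tree both use n-1 XOR gates, i.e. the same size, but their depths are n-1 versus roughly log2 n.

```python
# Nodes are ('INPUT', i) or ('XOR', a, b) with a, b indices of earlier nodes.

def chain(n):
    """Serial parity: ((x0 ^ x1) ^ x2) ^ ... chained left to right."""
    gates = [("INPUT", i) for i in range(n)]
    prev = 0
    for i in range(1, n):
        gates.append(("XOR", prev, i))
        prev = len(gates) - 1
    return gates

def tree(n):
    """Parallel parity: pair up live wires level by level."""
    gates = [("INPUT", i) for i in range(n)]
    layer = list(range(n))
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            gates.append(("XOR", layer[i], layer[i + 1]))
            nxt.append(len(gates) - 1)
        if len(layer) % 2:
            nxt.append(layer[-1])  # odd wire passes through to the next level
        layer = nxt
    return gates

def size(gates):
    return sum(g[0] != "INPUT" for g in gates)

def depth(gates):
    d = []
    for g in gates:
        d.append(0 if g[0] == "INPUT" else 1 + max(d[j] for j in g[1:]))
    return max(d)
```

For n = 16 both circuits have 15 gates, but the chain has depth 15 while the tree has depth 4.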

Uniformity Conditions

Polynomial-Time Uniformity

Polynomial-time uniformity, denoted $U_P$, imposes a computational constraint on circuit families to ensure their descriptions are constructible in a feasible manner. A family of circuits $\{C_n\}_{n \geq 1}$ is $U_P$-uniform if there exists a deterministic Turing machine $M$ that runs in time polynomial in $n$ and, on input $1^n$ (the unary encoding of $n$), outputs the encoding $E_n$ of the circuit $C_n$. Equivalently, $M$ can output the $i$-th bit of $E_n$ in polynomial time when given $1^n$ and an index $i$ ranging from 1 to the length of $E_n$, which is itself polynomial in $n$ since circuit sizes are assumed polynomial.

This uniformity condition bridges non-uniform circuit models to uniform complexity classes like P by requiring that circuit constructions do not rely on arbitrary or non-constructive advice. In non-uniform settings, circuit families allow arbitrary descriptions for each $n$ without any bound on how those descriptions are produced, potentially incorporating input-size-specific optimizations that cannot be algorithmically generated efficiently. In contrast, $U_P$-uniformity ensures the circuit family is generated by a single polynomial-time algorithm, making it suitable for modeling realistic computational resources where hardware designs must be computable.

One key advantage of $U_P$-uniformity is its equivalence to polynomial-time Turing machines for simulating computations: any language in P can be decided by a $U_P$-uniform family of polynomial-size circuits, and conversely, such circuit families can be simulated by polynomial-time machines by generating and evaluating the circuit on the input. For example, the parity function (computing the XOR of $n$ bits), which requires linear circuit size, admits a $U_P$-uniform family where a polynomial-time Turing machine constructs a balanced XOR tree by recursively defining gate connections based on the input size $n$. This construction highlights how $U_P$-uniformity captures problems solvable in polynomial time without extraneous non-computable elements.[4]

Logspace Uniformity

Logspace uniformity, also known as L-uniformity, imposes a strict condition on circuit families to ensure they can be generated using limited computational resources, specifically logarithmic space. A family of circuits $\{C_n\}_{n \geq 1}$ is logspace-uniform if there exists a deterministic Turing machine that, on input $1^n$ (the unary encoding of $n$) and an index $i$ specifying a gate or connection, outputs the relevant description bit of $C_n$ while using only $O(\log n)$ space.[4] Equivalently, auxiliary functions such as $\mathsf{SIZE}(n)$ (number of gates), $\mathsf{TYPE}(n, i)$ (gate type at position $i$), and $\mathsf{INPUT}(n, i, j)$ (inputs to gate $i$) must all be computable in $O(\log n)$ space.[8] This condition, often denoted $U_L$, contrasts with less restrictive uniformities by bounding not just time but space during circuit construction.[9]

The primary motivation for logspace uniformity arises in modeling highly parallel computations where circuit descriptions must be efficiently producible without excessive resource demands, aligning with the needs of parallel architectures that prioritize low-depth processing.[4] It captures "uniform" families that simulate feasible parallel machines, avoiding the unrealistic assumption of arbitrary circuit designs in non-uniform models.[9] This uniformity is central to defining complexity classes like NC, which consist of problems solvable by logspace-uniform families of polynomial-size, polylogarithmic-depth circuits, thereby linking circuit complexity to parallel time hierarchies.[4]

Representative examples of logspace-uniform circuit families include those for parallel prefix computation, such as the Brent-Kung or Ladner-Fischer networks, which compute prefix sums or products in logarithmic depth and can have their gate descriptions generated via a logspace machine by recursively specifying wire connections based on input size.[8] Similarly, Batcher's odd-even mergesort network provides a logspace-uniform construction for sorting $n$ elements using a fixed, recursive merging pattern that a logspace transducer can output gate by gate.[9]

Logspace uniformity defines a proper subset of P-uniform circuit families, as any logspace-computable description is also polynomial-time computable, but the converse does not hold: there exist P-uniform families whose construction requires more than logarithmic space, such as those encoding solutions to space-bounded problems during generation.[4] This stricter space bound makes logspace uniformity particularly suited for capturing the essence of efficient parallelism, unlike the weaker polynomial-time uniformity, which allows unbounded space in construction.[10]

Complexity Classes

Non-Uniform Classes

Non-uniform complexity classes in circuit complexity dispense with uniformity conditions, allowing circuit families where each circuit $C_n$ for input length $n$ can be arbitrarily designed, as long as the size remains polynomial in $n$. This models computation with "advice" that depends on the input length but not on the specific input, enabling the capture of problems beyond uniform polynomial-time solvable ones.[11]

The central non-uniform class is $\mathbf{P}/\text{poly}$, defined as the set of languages $L \subseteq \{0,1\}^*$ such that there exists a polynomial $p$ where for every $n$, there is a Boolean circuit $C_n$ of size at most $p(n)$ that decides $L$ on all inputs of length $n$. Formally,

$$L \in \mathbf{P}/\text{poly} \iff \exists \text{ polynomial } p \text{ s.t. } \forall n, \exists C_n \text{ with } |C_n| \leq p(n) \text{ deciding } L \cap \{0,1\}^n.$$

This class includes all languages in $\mathbf{P}$, since any polynomial-time Turing machine can be simulated by a polynomial-size circuit family without uniformity, but $\mathbf{P}/\text{poly}$ is strictly larger in general, containing undecidable languages like the unary halting problem by hard-coding solutions for each $n$.[11][4]

A key example of non-uniformity in $\mathbf{P}/\text{poly}$ is the use of hard-coded lookup tables for small $n$, where the circuit $C_n$ embeds a truth table of size $2^n$ for each fixed $n$; for instance, the unary language of halting Turing machines on empty input can be decided by circuits that precompute the answer for each length $n$. Proving that explicit problems lie outside $\mathbf{P}/\text{poly}$ remains open, and such separations would show that significant non-uniform advice is needed for those problems. The class also relates to $\mathbf{P}$ vs. $\mathbf{NP}$: if $\mathbf{NP} \subseteq \mathbf{P}/\text{poly}$, the polynomial hierarchy collapses to its second level $\Sigma_2^p$.[11][12]

Derandomization implications arise because $\mathbf{BPP} \subseteq \mathbf{P}/\text{poly}$, meaning any bounded-error probabilistic polynomial-time language has polynomial-size circuits; thus, derandomizing $\mathbf{BPP}$ to $\mathbf{P}$ would require proving superpolynomial circuit lower bounds for some problem, connecting non-uniform classes to randomized computation barriers.[13]

Other notable non-uniform classes include $\mathbf{AC}^0/\text{poly}$, comprising constant-depth, polynomial-size circuits with unbounded fan-in AND, OR, and NOT gates, which capture highly parallel but shallow computations and are properly contained in $\mathbf{P}/\text{poly}$. Similarly, $\mathbf{MAJ}/\text{poly}$ consists of polynomial-size circuits built from majority gates (outputting 1 if at least half the inputs are 1), along with AND, OR, and NOT, offering a model for threshold-based decisions with non-uniform advice. These classes highlight the spectrum of non-uniform power, from shallow to general polynomial resources.[11][14]
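The hard-coding argument can be sketched directly (an illustrative construction, not from the article): for ANY subset S of {0,1}^n, even a length-n slice of an undecidable language, there is a DNF circuit with one AND term per accepted string, so every slice has a (possibly exponential-size) circuit, and no algorithm needs to generate S.

```python
def hardcoded_dnf(n, accepted):
    """Build a decision procedure purely from the truth table `accepted`:
    an OR over one AND-of-literals term per accepted length-n string."""
    terms = [tuple(s) for s in accepted]
    def circuit(x):
        # each term tests exact equality with one accepted string
        return int(any(all(xi == ti for xi, ti in zip(x, t)) for t in terms))
    return circuit

# the slice need not be computable; here we simply fix an arbitrary set
S = {(0, 1, 1), (1, 0, 0)}
C3 = hardcoded_dnf(3, S)
```

The resulting "circuit" has one term per accepted string, so its size is at most about n * 2^n: polynomial families arise only when the language is structured enough to be compressed below this trivial bound.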

Uniform Classes

Uniform circuit classes impose restrictions on the construction of circuit families to ensure that the circuits can be efficiently described by a Turing machine, typically under logspace uniformity, where a logarithmic-space Turing machine can output the circuit description in polynomial time. The class NC^k, for a fixed integer k ≥ 1, consists of decision problems solvable by logspace-uniform families of Boolean circuits of polynomial size, depth O(log^k n), and fan-in at most 2. The class NC is defined as the union over all k of NC^k, capturing problems that admit efficient parallel algorithms in the sense of polylogarithmic parallel time on a bounded-fan-in model.

Related uniform circuit classes extend NC by incorporating more powerful gates to model different parallel architectures, with the inclusions NC^k ⊆ AC^k ⊆ TC^k. The class AC^k comprises logspace-uniform families of polynomial-size circuits of depth O(log^k n) built from unbounded fan-in AND and OR gates (with NOT gates only at the inputs).[15] Similarly, TC^k allows threshold gates, which output 1 if at least half of the inputs are 1, in addition to AND, OR, and NOT, again with polynomial size and depth O(log^k n) under logspace uniformity.[15] These classes highlight trade-offs in gate power versus depth, with the unions AC and TC coinciding with NC.

Representative examples illustrate the power of these classes. Integer multiplication of two n-bit numbers can be performed by logspace-uniform circuits of polynomial size and depth O(log^2 n), placing it in NC^2. Sorting n integers is achievable using the AKS sorting network, which yields logspace-uniform circuits of polynomial size and depth O(log n), though practical implementations often target higher levels like NC^3 for simplicity in construction.

Formally, for inputs of length n, a language L is in NC^k if there exists a logspace-uniform family {C_n} such that |C_n| is polynomial in n, the depth is O(log^k n), and C_n accepts exactly the strings in L of length n. A key property is that NC ⊆ P, as any polynomial-size uniform circuit family can be simulated by a deterministic Turing machine in polynomial time by evaluating the circuit layer by layer. This inclusion underscores the parallel-sequential trade-off: problems in NC can be solved sequentially in polynomial time but offer the potential for massive parallelism with polylogarithmic depth, enabling efficient computation on models like the PRAM under logspace uniformity.

Circuit Lower Bounds

General Techniques

One of the foundational techniques for establishing lower bounds on Boolean circuit size is the counting argument, which compares the total number of Boolean functions on $n$ inputs, namely $2^{2^n}$, to the number of distinct functions computable by circuits of a given size $s$. This approach shows that most functions require large circuits, as small circuits cannot cover all possibilities. Specifically, Shannon proved that almost every Boolean function requires circuit size $\Omega(2^n / n)$ by bounding the number of circuits of size $s$ above $n$ by at most $((n+3)s^2)^s$, leading to the conclusion that for $s = o(2^n / n)$, the number of such circuits is far smaller than $2^{2^n}$.[16]

Gate elimination is another key method, which simplifies a circuit by identifying and removing gates that compute constant values or depend on a single input after applying partial assignments to the variables. The process iteratively eliminates such gates, reducing the circuit's size while preserving the computation on the remaining free variables; each elimination step typically removes a constant number of gates, yielding a linear lower bound when repeated sufficiently many times. This technique relies on the structure of the circuit and the function's sensitivity to input changes, often combined with restrictions to force many gates into eliminable states. For instance, in circuits over the standard basis of AND, OR, and NOT gates with fan-in 2, gate elimination can establish superlinear size requirements for functions that avoid constant subcomputations under partial fixes.[17][18]

Random restrictions provide a probabilistic tool to prove lower bounds by randomly fixing a fraction of the input variables to 0 or 1, which simplifies the circuit while maintaining the hardness of the restricted function with high probability. Under a random restriction leaving $t$ free variables (where $t \ll n$), many gates become constant or depend on fewer inputs, effectively reducing the circuit's depth or width; this can be iterated to collapse the circuit to a simpler form, such as a small decision tree or constant-depth structure, against which the function's complexity can be argued. The method's power lies in concentration bounds ensuring that the restricted function remains non-trivial, often using union bounds over circuit topologies to show that small circuits fail to compute it.[19][20]

Distinguishing formula size from circuit size is crucial, as formulas impose a tree-like structure without fan-out sharing, making them a stricter model that often yields stronger lower bounds via combinatorial restrictions. Lower bounds for formula size, such as Shannon's $\Omega(2^n / \log n)$, follow similar counting arguments but with tighter estimates on the number of tree-structured computations, bounded by roughly $(4n)^s$ for size $s$. More advanced combinatorial methods, including applications of the Kruskal-Katona theorem from extremal set theory to analyze shadows or unions in the formula's subfunction lattice, provide refined bounds for specific cases by minimizing the "coverage" of small formulas over the Boolean lattice. These techniques highlight that while circuits can reuse subcircuits to achieve smaller sizes, formulas require exponential growth for most functions due to their acyclic, non-sharing nature.[16]

Despite their generality, these techniques have limitations: counting arguments establish exponential lower bounds only for random or most functions, not explicit ones of interest, while gate elimination and random restrictions typically yield only superlinear or mildly superpolynomial bounds for explicit functions, falling short of separating major complexity classes.[17][19]
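The counting argument above is easy to check numerically (a sketch, not from the article): compare the number of n-input Boolean functions, $2^{2^n}$, with the stated upper bound $((n+3)s^2)^s$ on the number of circuits of size $s$; whenever the bound is smaller, some function on n inputs needs more than s gates.

```python
def num_functions(n):
    """All Boolean functions on n inputs."""
    return 2 ** (2 ** n)

def circuit_count_bound(n, s):
    """Shannon-style upper bound on the number of circuits of size s."""
    return ((n + 3) * s * s) ** s

# For n = 10 and s well below 2^n / n, circuits cannot cover all functions:
n = 10
s = 2 ** n // (4 * n)   # s = 25, while 2^n / n is about 102
assert circuit_count_bound(n, s) < num_functions(n)
```

Here the circuit count is roughly $2^{325}$ against $2^{1024}$ functions, so all but a vanishing fraction of 10-bit functions need more than 25 gates.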

Specific Results

One of the landmark results in circuit complexity is the separation of the parity function from the class AC^0. The parity function, which computes whether the number of 1s in an n-bit input is even or odd, cannot be computed by constant-depth, polynomial-size circuits. This was shown using random restrictions by Furst, Saxe, and Sipser in 1984, who established a superpolynomial lower bound on the size of constant-depth circuits computing parity; Ajtai had independently proved the result in 1983.[21] Håstad's switching lemma provides a quantitative tool for analyzing the effect of random restrictions on constant-depth circuits, enabling tighter bounds. The lemma states that a width-w DNF formula, hit by a random restriction that leaves each variable free with probability p, fails to simplify to a decision tree of depth at most t with probability at most O((pw)^t). Applying this iteratively to each layer of a depth-d circuit yields an almost-optimal lower bound of 2^{Ω(n^{1/(d-1)})} on the size of depth-d circuits computing parity.[22] Another key explicit separation involves the k-clique problem, which asks whether a graph on n vertices contains a clique of size k.
Razborov proved in 1985 that monotone circuits computing k-clique require superpolynomial size, using the approximation method, which bounds the power of monotone circuits via combinatorial approximations of monotone functions.[23] For non-monotone constant-depth circuits, Rossman showed in 2008 that k-clique requires AC^0 circuits of size Ω(n^{k/4}) for any constant k.[24] Among conditional non-uniform results, Karp and Lipton showed in 1980, using the self-reducibility of SAT, that if NP ⊆ P/poly then the polynomial hierarchy collapses to its second level; padding arguments transfer such collapses to exponential-time classes, so an inclusion like NEXP ⊆ P/poly would have analogous consequences.[25] For TC^0, which augments AC^0 with majority gates, separations are more modest but include exponential lower bounds for specific functions: Jukna and others have shown that computing the permanent requires TC^0 circuits of size exp(Ω(n^{1/3})), using combinatorial Nullstellensatz techniques to bound the influence of gates.[26] In 2024, new results established maximum circuit lower bounds of at least 2^n / n for the class AMEXP / 2^{n^ε} of exponential-time Arthur-Merlin protocols with sub-exponential advice.[27] Despite progress on these Boolean separations, the algebraic analog, whether VP equals VNP, remains unresolved, with no polynomial-size arithmetic circuits known for the permanent despite Valiant's completeness result placing it in VNP.[28]
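The reason random restrictions work against AC^0 circuits computing parity, but cannot trivialize parity itself, is visible at the function level: fixing some variables of parity always leaves parity (or its complement) over the free variables, while the switching lemma forces small constant-depth circuits to collapse to shallow decision trees. A minimal Python check of this closure property (the `restrict` helper is illustrative, not from any library):

```python
import itertools
import random

def parity(bits):
    return sum(bits) % 2

def restrict(f, n, rho):
    """Restrict f on n variables by rho (a dict var -> 0/1);
    returns the induced function on the free variables, in index order."""
    free = [i for i in range(n) if i not in rho]
    def g(free_bits):
        full = [0] * n
        for i, b in zip(free, free_bits):
            full[i] = b
        for i, b in rho.items():
            full[i] = b
        return f(full)
    return g, len(free)

random.seed(0)
n = 10
# fix each variable with probability 0.7 to a random constant
rho = {i: random.randint(0, 1) for i in range(n) if random.random() < 0.7}
g, t = restrict(parity, n, rho)
c = g([0] * t)  # constant shift contributed by the fixed variables
# restricted parity is again parity (XOR a constant) on the t free variables
assert all(g(list(bs)) == parity(bs) ^ c
           for bs in itertools.product([0, 1], repeat=t))
```

No restriction simplifies parity below a parity on the surviving variables, which is why iterated restrictions can separate it from constant-depth circuits.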

Monotone circuits

Monotone circuits are a restricted model of Boolean circuits that use only AND and OR gates, without negation (NOT gates). They compute monotone Boolean functions, which satisfy the property that if $ x \leq y $ (bitwise), then $ f(x) \leq f(y) $. These circuits are particularly relevant for studying combinatorial problems like the clique function, as many natural problems in graph theory are monotone.[2] The study of monotone circuit complexity aims to determine the minimal size required to compute such functions. Unlike general circuits, where strong lower bounds remain elusive, monotone circuits have yielded significant results. In 1985, Alexander Razborov proved the first superpolynomial lower bound for monotone circuits, showing that the clique function (detecting a $ k $-clique in an $ n $-vertex graph) requires monotone circuits of superpolynomial size; Andreev, and later Alon and Boppana, improved this to exponential lower bounds on the order of $ 2^{\Omega(n^{1/3})} $ for suitable choices of $ k $.[2][3] Other notable lower bounds include $ \Omega(n \log n) $ for binary sorting and merging functions, and a linear bound of approximately 3.5n for the majority function. These results highlight the gap between monotone and general circuit complexity and have implications for separating complexity classes under monotone restrictions. Monotone circuit complexity also connects to proof complexity and combinatorial optimization.[2]
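The defining property $ f(x) \leq f(y) $ for $ x \leq y $ can be checked by brute force on small inputs. A quick sketch contrasting majority (monotone, computable without NOT gates) with parity (not monotone):

```python
from itertools import product

def is_monotone(f, n):
    # check f(x) <= f(y) for every bitwise-comparable pair x <= y in {0,1}^n
    points = list(product([0, 1], repeat=n))
    return all(f(x) <= f(y)
               for x in points for y in points
               if all(a <= b for a, b in zip(x, y)))

majority = lambda x: int(2 * sum(x) > len(x))   # monotone
parity   = lambda x: sum(x) % 2                 # not monotone: flipping 0 -> 1 can drop the value

print(is_monotone(majority, 5), is_monotone(parity, 5))  # True False
```

The brute-force check takes 4^n comparisons, so it only illustrates the definition; the lower-bound arguments above reason about the structure of monotone circuits instead.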

Historical development

Circuit complexity originated in the late 1940s with Claude Shannon's seminal 1949 work on the synthesis of Boolean switching circuits, where he demonstrated, via a non-constructive counting argument, that almost all Boolean functions require circuits of exponential size.[29] This laid the foundation for analyzing computational resources in terms of circuit size and depth. In the 1970s, progress included the Meyer-Stockmeyer theorem, which established super-exponential lower bounds for certain logical decision problems.[1] The 1980s marked a surge of results on restricted circuit classes. In 1983, Miklós Ajtai proved that the parity function cannot be computed by polynomial-size constant-depth circuits (AC^0), a breakthrough obtained independently by Merrick Furst, James Saxe, and Michael Sipser in 1984 using random restrictions.[29] Alexander Razborov, in 1985, showed that the clique function requires superpolynomial-size monotone circuits, providing the first explicit superpolynomial lower bound for a combinatorial problem.[30] Subsequent developments included Johan Håstad's near-optimal 1986 bounds for parity in AC^0 circuits, proved with his switching lemma, and Razborov's 1987 result that the majority function requires exponential size in constant-depth circuits augmented with parity gates.[1] In 1994, Razborov and Steven Rudich introduced the natural proofs barrier, explaining why common techniques for proving lower bounds are unlikely to succeed for general circuits because of their implications for cryptography.[30] These advances, while significant for restricted models, have not yet yielded superpolynomial lower bounds for unrestricted circuits, leaving the major questions open as of 2023.[29]

Recent advances and applications

Constant-depth circuits

In the 2010s, Ryan Williams made significant progress on lower bounds for constant-depth circuits with modular gates, showing that NEXP does not have non-uniform ACC^0 circuits of polynomial size.[31] The result holds for ACC^0 circuits with MOD-m gates for any fixed modulus m, including composite moduli, which earlier algebraic techniques could not handle.[31] The approach circumvents the natural proofs barrier by leveraging faster-than-exhaustive-search algorithms for satisfiability of ACC^0 circuits, demonstrating that improving exhaustive search for satisfiability implies superpolynomial lower bounds.[31] In the 2020s, direct size lower bounds for TC^0 circuits have been established using algebraic techniques; for instance, superlinear size lower bounds have been proven for restricted TC^0 circuits computing explicit hard functions, building on gate elimination methods. Additionally, exponential lower bounds for E^NP against GCC^0 circuits have been shown using lifted switching lemmas that extend classical AC^0 results to more general gate types without parameter loss.[32] No full collapse of AC^0[p] has been shown, but improved barriers have refined the known limitations of algebraic techniques against ACC^0 circuits with composite moduli.[32] Key techniques in these advances include gate elimination, which iteratively removes gates to reduce circuit size while preserving functionality, and variants of the combinatorial Nullstellensatz used to bound the degree of polynomials approximating modular gates.[33] These combinatorial tools help prove that constant-depth circuits cannot approximate certain functions with low-degree polynomials over finite fields.
An important open question is whether TC^0 contains L, the class of languages accepted by logarithmic-space Turing machines; current lower bounds fall well short of separating these classes.[32]
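The gate types that distinguish these classes are simple to state concretely: ACC^0 extends AC^0 with MOD-m gates, and TC^0 extends it with majority (threshold) gates. A minimal sketch of the two gate types (conventions for MOD gates vary between texts; this uses one common choice):

```python
from itertools import product

def mod_gate(bits, m):
    # MOD_m gate: outputs 1 iff the number of 1-inputs is NOT divisible by m
    # (one common convention; some authors use the complementary one)
    return int(sum(bits) % m != 0)

def majority_gate(bits):
    # the threshold gate that TC^0 adds on top of AC^0
    return int(2 * sum(bits) >= len(bits))

# MOD_2 is exactly the parity function, the canonical function hard for AC^0:
assert all(mod_gate(b, 2) == sum(b) % 2 for b in product([0, 1], repeat=4))
# ACC^0 additionally allows composite moduli such as MOD_6; on three inputs,
# every input except all-zeros has a 1-count not divisible by 6:
print([mod_gate(b, 6) for b in product([0, 1], repeat=3)])
```

Williams's NEXP lower bound is notable precisely because it handles circuits mixing AND, OR, NOT, and MOD-m gates for composite m, a regime where polynomial-approximation methods break down.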

Connections to other fields

Circuit complexity has profound implications for learning theory, particularly in establishing limitations on the learnability of Boolean functions. Lower bounds for constant-depth circuits show that functions such as parity cannot even be approximated by small AC^0 circuits, so algorithms restricted to AC^0 hypothesis classes cannot learn such functions efficiently from random examples. This connection is highlighted in work showing that AC^0 lower bounds limit the expressive power of learnable circuit classes, with extensions beyond AC^0 requiring augmented models such as majority gates at the output. Furthermore, techniques from circuit lower bounds, including natural proofs, have been adapted to agnostic learning settings, where tolerant natural proofs yield learning algorithms under approximation assumptions but also reveal barriers to learning more powerful circuit classes in the agnostic regime.[34][35] In cryptography and derandomization, circuit complexity underpins the construction of pseudorandom generators that fool non-uniform classes like P/poly. The Nisan-Wigderson generator builds such PRGs from functions in E that require large circuits, enabling the derandomization of probabilistic algorithms: in particular, if every language in E requires circuits of size 2^{Ω(n)}, then pseudorandom generators fooling polynomial-size circuits exist and P = BPP. Cryptographic one-way functions, the foundation of primitives like encryption and signatures, require average-case hardness and are not known to follow from such worst-case circuit lower bounds alone, although their existence would itself imply complexity separations such as P ≠ NP.[36][37] Circuit complexity also intersects with proof complexity, where the simulation power of proof systems mirrors circuit size.
Frege systems use Boolean formulas as proof lines and correspond in power to formula complexity (NC^1), while extended Frege systems, which may introduce abbreviations for subformulas, correspond to circuit complexity (P/poly). For monotone variants, Razborov-style superpolynomial lower bounds on monotone circuits for the clique function carry over to monotone proof systems, showing that certain unsatisfiable formulas encoding the non-existence of cliques require superpolynomial-size monotone proofs.[38][23] Recent developments in the 2020s have leveraged circuit lower bounds to establish separations in quantum learning: efficient quantum algorithms for learning parities with noise, or for learning unitaries, would imply classical circuit lower bounds for related problems, aiding proofs of quantum-classical separations in learning models such as shadow tomography. In cryptography, fine-grained complexity assumptions, such as the hardness of 3SUM against low-depth (NC) circuits with preprocessing, have enabled constructions of protocols secure against preprocessing attacks, yielding one-way functions and garbled circuits with time-space tradeoffs tied to circuit-depth restrictions.[39][40]

References
