Arithmetic circuit complexity
In computational complexity theory, arithmetic circuits are the standard model for computing polynomials. Informally, an arithmetic circuit takes as inputs either variables or numbers, and is allowed to either add or multiply two expressions it has already computed. Arithmetic circuits provide a formal way to understand the complexity of computing polynomials. The basic type of question in this line of research is "what is the most efficient way to compute a given polynomial f?"
An arithmetic circuit over the field F and the set of variables x1, ..., xn is a directed acyclic graph as follows. Every node in it with indegree zero is called an input gate and is labeled by either a variable xi or a field element in F. Every other gate is labeled by either + or ×; in the first case it is a sum gate and in the second a product gate. An arithmetic formula is a circuit in which every gate has outdegree one (and so the underlying graph is a directed tree).
A circuit has two complexity measures associated with it: size and depth. The size of a circuit is the number of gates in it, and the depth of a circuit is the length of the longest directed path in it. For example, the circuit in the figure has size six and depth two.
An arithmetic circuit computes a polynomial in the following natural way. An input gate computes the polynomial it is labeled by. A sum gate v computes the sum of the polynomials computed by its children (a gate u is a child of v if the directed edge u → v is in the graph). A product gate computes the product of the polynomials computed by its children. Consider the circuit in the figure, for example: the input gates compute (from left to right) x1, x2 and 1, the sum gates compute x1 + x2 and x2 + 1, and the product gate computes (x1 + x2)(x2 + 1).
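The evaluation rule above can be sketched in code. This is a minimal illustration of my own, not a formalism from the article: gates are listed in topological order, the encoding (`'var'`, `'const'`, `'+'`, `'*'`) and all function names are invented here, and polynomials are represented as dicts mapping exponent tuples (monomials) to coefficients, so the circuit computes a formal polynomial rather than a function.

```python
def poly_add(p, q):
    """Sum of two polynomials given as {monomial-exponent-tuple: coefficient}."""
    r = dict(p)
    for mono, c in q.items():
        r[mono] = r.get(mono, 0) + c
        if r[mono] == 0:
            del r[mono]
    return r

def poly_mul(p, q):
    """Product of two polynomials: multiply monomials by adding exponents."""
    r = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            mono = tuple(a + b for a, b in zip(m1, m2))
            r[mono] = r.get(mono, 0) + c1 * c2
    return r

def eval_circuit(gates, nvars):
    """gates[i] is ('var', j), ('const', c), ('+', i1, i2) or ('*', i1, i2),
    where i1, i2 < i (topological order). Returns the formal polynomial
    computed by the last gate."""
    computed = []
    for g in gates:
        if g[0] == 'var':
            computed.append({tuple(1 if k == g[1] else 0
                                   for k in range(nvars)): 1})
        elif g[0] == 'const':
            computed.append({(0,) * nvars: g[1]} if g[1] != 0 else {})
        elif g[0] == '+':
            computed.append(poly_add(computed[g[1]], computed[g[2]]))
        else:  # '*'
            computed.append(poly_mul(computed[g[1]], computed[g[2]]))
    return computed[-1]

# A size-six, depth-two circuit: inputs x1, x2, 1; sum gates x1 + x2 and
# x2 + 1; a product gate multiplying the two sums.
circuit = [
    ('var', 0),    # gate 0: x1
    ('var', 1),    # gate 1: x2
    ('const', 1),  # gate 2: 1
    ('+', 0, 1),   # gate 3: x1 + x2
    ('+', 1, 2),   # gate 4: x2 + 1
    ('*', 3, 4),   # gate 5: (x1 + x2)(x2 + 1) = x1*x2 + x1 + x2^2 + x2
]
assert eval_circuit(circuit, 2) == {(1, 1): 1, (1, 0): 1, (0, 2): 1, (0, 1): 1}
```

The topological-order list mirrors the DAG structure: each gate may only refer to gates already computed, which is exactly the acyclicity condition in the definition.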
Given a polynomial f, we may ask ourselves what is the best way to compute it; for example, what is the smallest size of a circuit computing f? The answer to this question consists of two parts. The first part is finding some circuit that computes f; this part is usually called upper bounding the complexity of f. The second part is showing that no other circuit can do better; this part is called lower bounding the complexity of f. Although these two tasks are strongly related, proving lower bounds is usually harder, since in order to prove a lower bound one needs to argue about all circuits at the same time.
Note that we are interested in the formal computation of polynomials, rather than the functions that the polynomials define. For example, consider the polynomial x^2 + x; over the field of two elements, this polynomial represents the zero function, but it is not the zero polynomial. This is one of the differences between the study of arithmetic circuits and the study of Boolean circuits. In Boolean complexity, one is mostly interested in computing a function, rather than some representation of it (in our case, a representation by a polynomial). This is one of the reasons that make Boolean complexity harder than arithmetic complexity. The study of arithmetic circuits may also be considered as one of the intermediate steps towards the study of the Boolean case, which we hardly understand.
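The polynomial-versus-function distinction can be checked directly. This small illustration is mine, not the article's: it evaluates x^2 + x at both elements of the two-element field and separately inspects its coefficient vector.

```python
def f(x, p=2):
    """Evaluate x^2 + x modulo p (here p = 2, the two-element field)."""
    return (x * x + x) % p

# As a function over the two-element field, x^2 + x vanishes everywhere:
assert all(f(x) == 0 for x in (0, 1))

# As a formal polynomial, it is nonzero: its coefficients (for x^0, x^1, x^2)
# are 0, 1, 1, and the coefficients of x and x^2 are nonzero mod 2.
coeffs = [0, 1, 1]
assert any(c % 2 != 0 for c in coeffs)
```

Both assertions hold at once, which is precisely the point: "zero as a function" and "zero as a polynomial" come apart over finite fields.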
As part of the study of the complexity of computing polynomials, some clever circuits (alternatively, algorithms) were found. A well-known example is Strassen's algorithm for matrix product. The straightforward way of computing the product of two n × n matrices requires a circuit of size on the order of n^3. Strassen showed that we can, in fact, multiply two matrices using a circuit of size roughly n^2.81. Strassen's basic idea is a clever way of multiplying 2 × 2 matrices. This idea is the starting point of the best theoretical way of multiplying two matrices, which takes time roughly n^2.37.
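A compact sketch of Strassen's recursion may make the idea concrete. This version is my own simplification, assuming square matrices whose dimension is a power of two (general implementations pad or handle odd sizes): at each level the 2 × 2 block trick uses seven recursive products instead of eight, which is what brings the size down from n^3 to roughly n^(log2 7) ≈ n^2.81.

```python
def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Multiply square matrices of power-of-two dimension via Strassen's trick."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split each matrix into four h x h blocks.
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # Strassen's seven products (instead of the eight naive block products):
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Reassemble the four blocks of the product.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return ([C11[i] + C12[i] for i in range(h)] +
            [C21[i] + C22[i] for i in range(h)])

assert strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```

Note that the recursion only ever adds, subtracts, and multiplies matrix entries, so it translates directly into an arithmetic circuit of the stated size.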
Another interesting story lies behind the computation of the determinant of an n × n matrix. The naive way of computing the determinant requires circuits of size roughly n!. Nevertheless, we know that there are circuits of size polynomial in n for computing the determinant. These circuits, however, have depth that is linear in n. Berkowitz came up with an improvement: a circuit of size polynomial in n, but of depth O(log^2(n)).
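To see where the n! figure comes from, here is a sketch (mine, not Berkowitz's polynomial-size construction) of the naive cofactor expansion: it recurses on n minors of size n − 1, so the number of multiplications satisfies T(n) = n · T(n − 1), i.e. roughly n! in total.

```python
def det(M):
    """Determinant by cofactor expansion along the first row.
    Uses roughly n! multiplications; shown only to illustrate the naive bound."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, with alternating sign.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

assert det([[1, 2], [3, 4]]) == -2
assert det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]) == 24
```

Like Strassen's recursion, this expansion uses only additions and multiplications of entries, so it is an arithmetic circuit; the point of the polynomial-size constructions mentioned above is to avoid its factorial blow-up.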