Constraint logic programming
Constraint logic programming is a form of constraint programming, in which logic programming is extended to include concepts from constraint satisfaction. A constraint logic program is a logic program that contains constraints in the body of clauses. An example of a clause including a constraint is A(X,Y) :- X+Y>0, B(X), C(Y). In this clause, X+Y>0 is a constraint; A(X,Y), B(X), and C(Y) are literals as in regular logic programming. This clause states one condition under which the statement A(X,Y) holds: X+Y is greater than zero and both B(X) and C(Y) are true.
As in regular logic programming, programs are queried about the provability of a goal, which may itself contain constraints in addition to literals. A proof for a goal is composed of clauses whose bodies are satisfiable constraints and literals that can in turn be proved using other clauses. Execution is performed by an interpreter, which starts from the goal and recursively scans the clauses trying to prove it. Constraints encountered during this scan are placed in a set called the constraint store. If this set is found to be unsatisfiable, the interpreter backtracks, trying to use other clauses for proving the goal. In practice, satisfiability of the constraint store may be checked using an incomplete algorithm, which does not always detect inconsistency.
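For readers who want to run the opening example, here is a minimal sketch in SWI-Prolog with library(clpfd) (an assumption; any CLP(FD) system with similar syntax would do). Predicate names are lowercased as Prolog requires, and the definitions of b/1 and c/1 are invented purely for illustration:
% Hedged sketch: the clause A(X,Y) :- X+Y>0, B(X), C(Y) in executable form.
:- use_module(library(clpfd)).
b(1). b(2).          % invented facts standing in for B(X)
c(0). c(3).          % invented facts standing in for C(Y)
a(X, Y) :- X + Y #> 0, b(X), c(Y).
% ?- a(X, Y).  first answer: X = 1, Y = 0  (1+0 > 0 holds)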
Overview
Formally, constraint logic programs are like regular logic programs, but the body of clauses can contain constraints, in addition to the regular logic programming literals. As an example, X>0 is a constraint, and is included in the last clause of the following constraint logic program.
B(X,1):- X<0.
B(X,Y):- X=1, Y>0.
A(X,Y):- X>0, B(X,Y).
As in regular logic programming, evaluating a goal such as A(X,1) requires evaluating the body of the last clause with Y=1, which in turn requires proving the goal B(X,1). Contrary to regular logic programming, this also requires a constraint to be satisfied: X>0, the constraint in the body of the last clause. (In regular logic programming, X>0 cannot be proved unless X is bound to a fully ground term, and execution of the program fails if that is not the case.)
Whether a constraint is satisfied cannot always be determined when the constraint is encountered. In this case, for example, the value of X is not determined when the last clause is evaluated. As a result, the constraint X>0 is neither satisfied nor violated at this point. Rather than proceeding in the evaluation of B(X,1) and then checking whether the resulting value of X is positive afterwards, the interpreter stores the constraint X>0 and then proceeds in the evaluation of B(X,1); this way, the interpreter can detect violation of the constraint X>0 during the evaluation of B(X,1), and backtrack immediately if this is the case, rather than waiting for the evaluation of B(X,1) to conclude.
In general, the evaluation of a constraint logic program proceeds as does a regular logic program. However, constraints encountered during evaluation are placed in a set called a constraint store. As an example, the evaluation of the goal A(X,1) proceeds by evaluating the body of the last clause with Y=1; this evaluation adds X>0 to the constraint store and requires the goal B(X,1) to be proven. While trying to prove this goal, the first clause of B is applied, but its evaluation adds X<0 to the constraint store. This addition makes the constraint store unsatisfiable. The interpreter then backtracks, removing the last addition from the constraint store. The evaluation of the second clause adds X=1 and Y>0 to the constraint store. Since the constraint store is satisfiable and no other literal is left to prove, the interpreter stops with the solution X=1, Y=1.
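The program above can be run almost verbatim in a CLP(FD) system; a sketch assuming SWI-Prolog's library(clpfd), with predicate names lowercased for Prolog:
:- use_module(library(clpfd)).
b(X, 1) :- X #< 0.               % first clause of B
b(X, Y) :- X #= 1, Y #> 0.       % second clause of B
a(X, Y) :- X #> 0, b(X, Y).      % last clause of the program
% ?- a(X, 1).  the solver rejects the first B clause (X #> 0 together with
% X #< 0 empties X's domain), backtracks, and answers X = 1.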
Semantics
The semantics of constraint logic programs can be defined in terms of a virtual interpreter that maintains a pair during execution. The first element of this pair is called current goal; the second element is called constraint store. The current goal contains the literals the interpreter is trying to prove and may also contain some constraints it is trying to satisfy; the constraint store contains all constraints the interpreter has assumed satisfiable so far.
Initially, the current goal is the goal to be proved and the constraint store is empty. The interpreter proceeds by removing the first element from the current goal and analyzing it. The details of this analysis are explained below, but in the end this analysis may produce a successful termination or a failure. It may involve recursive calls and the addition of new literals to the current goal and of new constraints to the constraint store. The interpreter backtracks if a failure is generated. A successful termination is generated when the current goal is empty and the constraint store is satisfiable.
The analysis of a literal removed from the goal proceeds as follows. After this element has been removed from the front of the goal, it is checked whether it is a constraint or a literal. If it is a constraint, it is added to the constraint store. If it is a literal, a clause whose head has the same predicate as the literal is chosen; the clause is rewritten by replacing its variables with new variables (variables not occurring in the goal), producing what is called a fresh variant of the clause. The body of the fresh variant is then placed at the front of the goal, and the equality of each argument of the literal with the corresponding argument of the fresh variant's head is placed at the front of the goal as well.
Some checks are done during these operations. In particular, the constraint store is checked for consistency every time a new constraint is added to it. In principle, whenever the constraint store is unsatisfiable the algorithm could backtrack. However, checking unsatisfiability at each step would be inefficient. For this reason, an incomplete satisfiability checker may be used instead. In practice, satisfiability is checked using methods that simplify the constraint store, that is, rewrite it into an equivalent but simpler-to-solve form. These methods can sometimes but not always prove unsatisfiability of an unsatisfiable constraint store.
The interpreter has proved the goal when the current goal is empty and the constraint store has not been detected unsatisfiable. The result of execution is the current set of (simplified) constraints. This set may include constraints such as X=2 that force variables to a specific value, but may also include constraints like X>-2 that only bound variables without giving them a specific value.
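The interpreter just described can be sketched as a Prolog meta-interpreter. In the sketch below (an illustration, not a real system), constraints are wrapped as c/1 terms, posting to SWI-Prolog's clpfd solver stands in for the satisfiability check of the store, user_clause/2 is a hypothetical predicate holding the program, and Prolog's head unification plays the role of the explicit argument equalities:
:- use_module(library(clpfd)).
% run(Goal, StoreIn, StoreOut): an empty goal means success; the store is the answer.
run([], Store, Store).
run([c(C)|Goals], Store0, Store) :-   % constraint: post it (the solver checks
    call(C),                          % satisfiability) and record it in the store
    run(Goals, [C|Store0], Store).
run([Literal|Goals], Store0, Store) :-
    user_clause(Literal, Body),       % fresh clause variant; head unification
    append(Body, Goals, Goals1),      % equates the arguments as in the semantics
    run(Goals1, Store0, Store).
% The program from the Overview, in this (hypothetical) encoding:
user_clause(b(X, 1), [c(X #< 0)]).
user_clause(b(X, Y), [c(X #= 1), c(Y #> 0)]).
user_clause(a(X, Y), [c(X #> 0), b(X, Y)]).
% ?- run([a(X, 1)], [], S).  answers X = 1; backtracking out of a clause
% automatically retracts the constraints it posted, mirroring the removal
% of the last addition from the store.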
Formally, the semantics of constraint logic programming is defined in terms of derivations. A transition is a pair of pairs goal/store, noted ⟨G,S⟩ → ⟨G',S'⟩. Such a pair states the possibility of going from state ⟨G,S⟩ to state ⟨G',S'⟩. Such a transition is possible in three cases:
- an element of G is a constraint C, and we have G' = G \ {C} and S' = S ∪ {C}; in other words, a constraint can be moved from the goal to the constraint store
- an element of G is a literal L(t1,...,tn); there exists a clause that, rewritten using new variables, is L(t1',...,tn') :- B; the set G' is G with L(t1,...,tn) replaced by t1=t1', ..., tn=tn', B; and S' = S; in other words, a literal can be replaced by the body of a fresh variant of a clause having the same predicate in the head, adding the body of the fresh variant and the above equalities of terms to the goal
- S and S' are equivalent according to the specific constraint semantics
A sequence of transitions is a derivation. A goal G can be proved if there exists a derivation from ⟨G,∅⟩ to ⟨∅,S⟩ for some satisfiable constraint store S. This semantics formalizes the possible evolutions of an interpreter that arbitrarily chooses the literal of the goal to process and the clause to replace literals. In other words, a goal is proved under this semantics if there exists a sequence of choices of literals and clauses, among the possibly many ones, that leads to an empty goal and a satisfiable store.
Actual interpreters process the goal elements in a LIFO order: elements are added in the front and processed from the front. They also choose the clause of the second rule according to the order in which they are written, and rewrite the constraint store when it is modified.
The third possible kind of transition is a replacement of the constraint store with an equivalent one. This replacement is limited to those done by specific methods, such as constraint propagation. The semantics of constraint logic programming is parametric not only to the kind of constraints used but also to the method for rewriting the constraint store. The specific methods used in practice replace the constraint store with one that is simpler to solve. If the constraint store is unsatisfiable, this simplification may detect this unsatisfiability sometimes, but not always.
The result of evaluating a goal against a constraint logic program is defined if the goal is proved. In this case, there exists a derivation from the initial pair to a pair where the goal is empty. The constraint store of this second pair is considered the result of the evaluation. This is because the constraint store contains all constraints assumed satisfiable to prove the goal. In other words, the goal is proved for all variable evaluations that satisfy these constraints.
The pairwise equality of the arguments of two literals is often compactly denoted by L(t1,...,tn)=L(t1',...,tn'): this is a shorthand for the constraints t1=t1', ..., tn=tn'. A common variant of the semantics for constraint logic programming adds these equalities directly to the constraint store rather than to the goal.
Terms and conditions
Different definitions of terms are used, generating different kinds of constraint logic programming: over trees, reals, or finite domains. A kind of constraint that is always present is the equality of terms. Such constraints are necessary because the interpreter adds t1=t2 to the goal whenever a literal P(...t1...) is replaced with the body of a clause fresh variant whose head is P(...t2...).
Tree terms
Constraint logic programming with tree terms emulates regular logic programming by storing substitutions as constraints in the constraint store. Terms are variables, constants, and function symbols applied to other terms. The only constraints considered are equalities and disequalities between terms. Equality is particularly important, as constraints like t1=t2 are often generated by the interpreter. Equality constraints on terms can be simplified, that is solved, via unification:
A constraint t1=t2 can be simplified if both terms are function symbols applied to other terms. If the two function symbols are the same and the number of subterms is also the same, this constraint can be replaced with the pairwise equality of the subterms. If the terms are composed of different function symbols, or of the same functor applied to different numbers of terms, the constraint is unsatisfiable.
If one of the two terms is a variable, the only allowed value the variable can take is the other term. As a result, the other term can replace the variable in the current goal and constraint store, thus practically removing the variable from consideration. In the particular case of equality of a variable with itself, the constraint can be removed as always satisfied.
In this form of constraint satisfaction, variable values are terms.
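The two simplification rules just described can be sketched directly in Prolog; simplify_eq/2 and pairup/3 are invented names, and the symmetric and variable-variable cases (as well as the occurs check) are omitted for brevity:
% Decomposition rule: same function symbol and arity, equate subterms pairwise;
% failure (different symbols or arities) corresponds to unsatisfiability.
simplify_eq(T1 = T2, Pairs) :-
    compound(T1), compound(T2),
    T1 =.. [F|Args1], T2 =.. [F|Args2],
    length(Args1, N), length(Args2, N),
    pairup(Args1, Args2, Pairs).
% Variable rule: bind the variable to the other term (no occurs check here).
simplify_eq(V = T, []) :- var(V), V = T.
pairup([], [], []).
pairup([A|As], [B|Bs], [A = B|Ps]) :- pairup(As, Bs, Ps).
% ?- simplify_eq(f(X, a) = f(b, Y), Ps).  gives Ps = [X = b, a = Y].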
Reals
Constraint logic programming with real numbers uses real expressions as terms. When no function symbols are used, terms are expressions over reals, possibly including variables. In this case, each variable can only take a real number as a value.
To be precise, terms are expressions over variables and real constants. Equality between terms is a kind of constraint that is always present, as the interpreter generates equality of terms during execution. As an example, if the first literal of the current goal is A(X+1) and the interpreter has chosen a clause that is A(Y-1):-Y=1 after rewriting its variables, the constraints added to the current goal are X+1=Y-1 and Y=1. The rules of simplification used for function symbols are obviously not used: X+1=Y-1 is not unsatisfiable just because the first expression is built using + and the second using -.
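A sketch of how such arithmetic equalities behave in practice, assuming SWI-Prolog's library(clpq), whose brace notation posts constraints over the rationals:
% ?- use_module(library(clpq)).
% ?- { X + 1 = Y - 1, Y = 1 }.
% X = -1, Y = 1.    % the solver treats X+1=Y-1 algebraically, not structurally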
Reals and function symbols can be combined, leading to terms that are expressions over reals and function symbols applied to other terms. Formally, variables and real constants are expressions, as is any arithmetic operator applied to other expressions. Variables, constants (zero-arity function symbols), and expressions are terms, as is any function symbol applied to terms. In other words, terms are built over expressions, while expressions are built over numbers and variables. In this case, variables range over real numbers and terms. In other words, one variable can take a real number as a value, while another takes a term.
Equality of two terms can be simplified using the rules for tree terms if none of the two terms is a real expression. For example, if the two terms have the same function symbol and number of subterms, their equality constraint can be replaced with the equality of subterms.
Finite domains
The third class of constraints used in constraint logic programming is that of finite domains. Values of variables are in this case taken from a finite domain, often that of integer numbers. For each variable, a different domain can be specified: X::[1..5] for example means that the value of X is between 1 and 5. The domain of a variable can also be given by enumerating all values a variable can take; therefore, the above domain declaration can be also written X::[1,2,3,4,5]. This second way of specifying a domain allows for domains that are not composed of integers, such as X::[george,mary,john]. If the domain of a variable is not specified, it is assumed to be the set of integers representable in the language. A group of variables can be given the same domain using a declaration like [X,Y,Z]::[1..5].
The domain of a variable may be reduced during execution. Indeed, as the interpreter adds constraints to the constraint store, it performs constraint propagation to enforce a form of local consistency, and these operations may reduce the domain of variables. If the domain of a variable becomes empty, the constraint store is inconsistent, and the algorithm backtracks. If the domain of a variable becomes a singleton, the variable can be assigned the unique value in its domain. The forms of consistency typically enforced are arc consistency, hyper-arc consistency, and bound consistency. The current domain of a variable can be inspected using specific literals; for example, dom(X,D) finds out the current domain D of a variable X.
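In SWI-Prolog's library(clpfd) (used here as an illustrative assumption; its domain syntax differs slightly from the X::[1..5] notation above), fd_dom/2 is that library's analogue of the dom(X,D) literal:
% ?- use_module(library(clpfd)).
% ?- X in 1..5, X #> 2, fd_dom(X, D).
% D = 3..5.    % propagation has already reduced the domain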
As for domains of reals, functors can be used with domains of integers. In this case, a term can be an expression over integers, a constant, or the application of a functor over other terms. A variable can take an arbitrary term as a value, if its domain has not been specified to be a set of integers or constants.
The constraint store
The constraint store contains the constraints that are currently assumed satisfiable. It can be regarded as the counterpart, in constraint logic programming, of the current substitution in regular logic programming. When only tree terms are allowed, the constraint store contains constraints of the form t1=t2; these constraints are simplified by unification, resulting in constraints of the form variable=term; such constraints are equivalent to a substitution.
However, the constraint store may also contain constraints of the form t1!=t2, if disequality (!=) between terms is allowed. When constraints over reals or finite domains are allowed, the constraint store may also contain domain-specific constraints like X+2=Y/2, etc.
The constraint store extends the concept of current substitution in two ways. First, it contains not only the constraints derived from equating a literal with the head of a fresh variant of a clause, but also the constraints of the body of clauses. Second, it contains not only constraints of the form variable=value but also constraints on the considered constraint language. While the result of a successful evaluation of a regular logic program is the final substitution, the result for a constraint logic program is the final constraint store, which may contain constraints of the form variable=value but also arbitrary constraints.
Domain-specific constraints may come into the constraint store both from the body of clauses and from equating a literal with a clause head: for example, if the interpreter rewrites the literal A(X+2) with a clause whose fresh variant head is A(Y/2), the constraint X+2=Y/2 is added to the constraint store. If a variable appears in a real or finite domain expression, it can only take a value in the reals or in the finite domain. Such a variable cannot take a term made of a functor applied to other terms as a value. The constraint store is unsatisfiable if a variable is bound to take both a value of the specific domain and a functor applied to terms.
After a constraint is added to the constraint store, some operations are performed on it. Which operations are performed depends on the domain and the constraints considered: for example, unification for finite tree equalities, variable elimination for polynomial equations over reals, and constraint propagation to enforce a form of local consistency for finite domains. These operations are aimed at making the constraint store simpler to check for satisfiability and to solve.
As a result of these operations, the addition of new constraints may change the old ones. It is essential that the interpreter is able to undo these changes when it backtracks. The simplest method is for the interpreter to save the complete state of the store every time it makes a choice (that is, every time it chooses a clause to rewrite a goal). More efficient methods for returning the constraint store to a previous state exist. In particular, one may save only the changes made to the constraint store between two choice points, including the changes made to old constraints. This can be done by simply saving the old value of the constraints that have been modified; this method is called trailing. A more advanced method is to save the changes that have been made to the modified constraints. For example, a linear constraint is changed by modifying its coefficients: saving the difference between the old and new coefficients allows reverting the change. This second method is called semantic backtracking, because the semantics of the change is saved rather than the old version of the constraints only.
Labeling
The labeling literals are used on variables over finite domains to check satisfiability or partial satisfiability of the constraint store and to find a satisfying assignment. A labeling literal is of the form labeling([variables]), where the argument is a list of variables over finite domains. Whenever the interpreter evaluates such a literal, it performs a search over the domains of the variables of the list to find an assignment that satisfies all relevant constraints. Typically, this is done by a form of backtracking: variables are evaluated in order, trying all possible values for each of them, and backtracking when inconsistency is detected.
The first use of the labeling literal is to actually check satisfiability or partial satisfiability of the constraint store. When the interpreter adds a constraint to the constraint store, it only enforces a form of local consistency on it. This operation may not detect inconsistency even if the constraint store is unsatisfiable. A labeling literal over a set of variables enforces a satisfiability check of the constraints over these variables. As a result, labeling all variables mentioned in the constraint store checks the satisfiability of the whole store.
The second use of the labeling literal is to actually determine an evaluation of the variables that satisfies the constraint store. Without the labeling literal, variables are assigned values only when the constraint store contains a constraint of the form X=value or when local consistency reduces the domain of a variable to a single value. A labeling literal over some variables forces these variables to be evaluated. In other words, after the labeling literal has been considered, all of the listed variables are assigned a value.
Typically, constraint logic programs are written in such a way that labeling literals are evaluated only after as many constraints as possible have been accumulated in the constraint store. This is because labeling literals enforce search, and search is more efficient if there are more constraints to be satisfied. A constraint satisfaction problem is typically solved by a constraint logic program having the following structure:
solve(X):-constraints(X), labeling(X).
constraints(X):- (all constraints of the CSP)
When the interpreter evaluates the goal solve(args), it places the body of a fresh variant of the first clause in the current goal. Since the first goal is constraints(X'), the second clause is evaluated, and this operation moves all constraints into the current goal and eventually into the constraint store. The literal labeling(X') is then evaluated, forcing a search for a solution of the constraint store. Since the constraint store contains exactly the constraints of the original constraint satisfaction problem, this operation searches for a solution of the original problem.
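A concrete, runnable instance of this pattern, sketched with SWI-Prolog's library(clpfd); the tiny CSP (distinct digits X and Y with X+Y=10 and X<Y) is invented for illustration:
:- use_module(library(clpfd)).
solve(Vars) :- constraints(Vars), label(Vars).   % label/1 is that library's labeling
constraints([X, Y]) :-
    [X, Y] ins 1..9,        % domain declarations
    X + Y #= 10,            % all constraints of the CSP
    X #< Y.
% ?- solve([X, Y]).  answers X = 1, Y = 9; then X = 2, Y = 8; and so on.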
Program reformulations
A given constraint logic program may be reformulated to improve its efficiency. A first rule is that labeling literals should be placed only after as many constraints as possible on the labeled variables have been accumulated in the constraint store. While in theory A(X):-labeling(X),X>0 is equivalent to A(X):-X>0,labeling(X), in the first formulation the search performed when the interpreter encounters the labeling literal is on a constraint store that does not contain the constraint X>0. As a result, it may generate solutions, such as X=-1, that are later found not to satisfy this constraint. In the second formulation, on the other hand, the search is performed only when the constraint is already in the constraint store. As a result, the search only returns solutions that are consistent with it, taking advantage of the fact that additional constraints reduce the search space.
A second reformulation that can increase efficiency is to place constraints before literals in the body of clauses. Again, A(X):-B(X),X>0 and A(X):-X>0,B(X) are in principle equivalent. However, the first may require more computation. For example, if the constraint store contains the constraint X<-2, the interpreter recursively evaluates B(X) in the first case; if it succeeds, it then finds out that the constraint store is inconsistent when adding X>0. In the second case, when evaluating that clause, the interpreter first adds X>0 to the constraint store and then possibly evaluates B(X). Since the constraint store after the addition of X>0 turns out to be inconsistent, the recursive evaluation of B(X) is not performed at all.
A third reformulation that can increase efficiency is the addition of redundant constraints. If the programmer knows (by whatever means) that the solution of a problem satisfies a specific constraint, they can include that constraint to cause inconsistency of the constraint store as soon as possible. For example, if it is known beforehand that the evaluation of B(X) will result in a positive value for X, the programmer may add X>0 before any occurrence of B(X). As an example, A(X,Y):-B(X),C(X) will fail on the goal A(-2,Z), but this is only found out during the evaluation of the subgoal B(X). On the other hand, if the above clause is replaced by A(X,Y):-X>0,B(X),C(X), the interpreter backtracks as soon as the constraint X>0 is added to the constraint store, which happens before the evaluation of B(X) even starts.
Constraint handling rules
Constraint handling rules were initially defined as a stand-alone formalism for specifying constraint solvers, and were later embedded in logic programming. There are two kinds of constraint handling rules. The rules of the first kind specify that, under a given condition, a set of constraints is equivalent to another one. The rules of the second kind specify that, under a given condition, a set of constraints implies another one. In a constraint logic programming language supporting constraint handling rules, a programmer can use these rules to specify possible rewritings of the constraint store and possible additions of constraints to it. The following are example rules:
A(X) <=> B(X) | C(X)
A(X) ==> B(X) | C(X)
The first rule states that, if B(X) is entailed by the store, the constraint A(X) can be rewritten as C(X). As an example, N*X>0 can be rewritten as X>0 if the store implies that N>0. The symbol <=> resembles equivalence in logic, and indicates that the first constraint is equivalent to the latter. In practice, this means that the first constraint can be replaced with the latter.
The second rule instead specifies that the latter constraint is a consequence of the first, if the constraint in the middle is entailed by the constraint store. As a result, if A(X) is in the constraint store and B(X) is entailed by the constraint store, then C(X) can be added to the store. Differently from the case of equivalence, this is an addition and not a replacement: the new constraint is added but the old one remains.
Equivalence allows for simplifying the constraint store by replacing some constraints with simpler ones; in particular, if the third constraint in an equivalence rule is true, and the second constraint is entailed, the first constraint is removed from the constraint store. Inference allows for the addition of new constraints, which may lead to proving inconsistency of the constraint store, and may generally reduce the amount of search needed to establish its satisfiability.
Logic programming clauses in conjunction with constraint handling rules can be used to specify a method for establishing the satisfiability of the constraint store. Different clauses are used to implement the different choices of the method; the constraint handling rules are used for rewriting the constraint store during execution. As an example, one can implement backtracking with unit propagation this way. Let holds(L) represent a propositional clause, in which the literals in the list L are in the same order as they are evaluated. The algorithm can be implemented using clauses for the choice of assigning a literal to true or false, and constraint handling rules to specify propagation. These rules specify that holds([l|L]) can be removed if l=true follows from the store, and it can be rewritten as holds(L) if l=false follows from the store. Similarly, holds([l]) can be replaced by l=true. In this example, the choice of value for a variable is implemented using clauses of logic programming; however, it can be encoded in constraint handling rules using an extension called disjunctive constraint handling rules or CHR∨.
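As a concrete illustration, the following is the classic partial-order solver commonly used to introduce CHR, here in the syntax of SWI-Prolog's library(chr) (an assumption; other CHR hosts use the same rule syntax):
:- use_module(library(chr)).
:- chr_constraint leq/2.
reflexivity  @ leq(X, X) <=> true.
antisymmetry @ leq(X, Y), leq(Y, X) <=> X = Y.
idempotence  @ leq(X, Y) \ leq(X, Y) <=> true.
transitivity @ leq(X, Y), leq(Y, Z) ==> leq(X, Z).
% ?- leq(A, B), leq(B, C), leq(C, A).  transitivity adds leq(A,C); antisymmetry
% then collapses the cycle, equating A, B and C and leaving an empty store.
The <=> rules simplify the store by replacing constraints, while the ==> rule only adds the propagated constraint without removing its premises, exactly the distinction between the two kinds of rules drawn above.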
Bottom-up evaluation
The standard strategy of evaluation of logic programs is top-down and depth-first: from the goal, a number of clauses are identified as being possibly able to prove the goal, and recursion over the literals of their bodies is performed. An alternative strategy is to start from the facts and use clauses to derive new facts; this strategy is called bottom-up. It is considered better than the top-down one when the aim is that of producing all consequences of a given program, rather than proving a single goal. In particular, finding all consequences of a program in the standard top-down and depth-first manner may not terminate while the bottom-up evaluation strategy terminates.
The bottom-up evaluation strategy maintains the set of facts proved so far during evaluation. This set is initially empty. At each step, new facts are derived by applying a program clause to the existing facts and are added to the set. For example, the bottom-up evaluation of the following program requires two steps:
A(q).
B(X):-A(X).
The set of consequences is initially empty. At the first step, A(q) is the only clause whose body can be proved (because it is empty), and A(q) is therefore added to the current set of consequences. At the second step, since A(q) is proved, the second clause can be used and B(q) is added to the consequences. Since no other consequence can be proved from {A(q),B(q)}, execution terminates.
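A minimal sketch of this fixpoint loop in Prolog; the rule/2 encoding of the program and all predicate names are invented for illustration, and only ground programs are handled:
rule(a(q), []).                  % A(q).
rule(b(X), [a(X)]).              % B(X) :- A(X).
step(Known, New) :-              % facts derivable in one step but not yet known
    findall(H, ( rule(H, Body),
                 all_known(Body, Known),
                 \+ memberchk(H, Known) ), New0),
    sort(New0, New).
all_known([], _).
all_known([G|Gs], Known) :- memberchk(G, Known), all_known(Gs, Known).
bottom_up(Known, Known) :- step(Known, []), !.    % fixpoint: nothing new derivable
bottom_up(Known0, Known) :-
    step(Known0, New),
    append(Known0, New, Known1),
    bottom_up(Known1, Known).
% ?- bottom_up([], Cs).  gives Cs = [a(q), b(q)] in two steps, as described.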
The advantage of the bottom-up evaluation over the top-down one is that cycles of derivations do not produce an infinite loop. This is because adding a consequence to the current set of consequences that already contains it has no effect. As an example, adding a third clause to the above program generates a cycle of derivations in the top-down evaluation:
A(q).
B(X):-A(X).
A(X):-B(X).
For example, while evaluating all answers to the goal A(X), the top-down strategy would produce the following derivations:
A(q)
A(q):-B(q), B(q):-A(q), A(q)
A(q):-B(q), B(q):-A(q), A(q):-B(q), B(q):-A(q), A(q)
In other words, the only consequence A(q) is produced first, but then the algorithm cycles over derivations that do not produce any other answer. More generally, the top-down evaluation strategy may cycle over possible derivations even when other, terminating ones exist.
The bottom-up strategy does not have the same drawback, as consequences that have already been derived have no effect. On the above program, the bottom-up strategy starts by adding A(q) to the set of consequences; in the second step, B(X):-A(X) is used to derive B(q); in the third step, the only facts that can be derived from the current consequences are A(q) and B(q), which are however already in the set of consequences. As a result, the algorithm stops.
In the above example, the only used facts were ground literals. In general, every clause that only contains constraints in the body is considered a fact. For example, a clause A(X):-X>0,X<10 is considered a fact as well. For this extended definition of facts, some facts may be equivalent while not syntactically equal. For example, A(q) is equivalent to A(X):-X=q and both are equivalent to A(X):-X=Y, Y=q. To solve this problem, facts are translated into a normal form in which the head contains a tuple of all-different variables; two facts are then equivalent if their bodies are equivalent on the variables of the head, that is, their sets of solutions are the same when restricted to these variables.
As described, the bottom-up approach has the advantage of not considering consequences that have already been derived. However, it may still derive consequences that are entailed by those already derived without being equal to any of them. As an example, the bottom-up evaluation of the following program is infinite:
A(0).
A(X):-X>0.
A(X):-X=Y+1, A(Y).
The bottom-up evaluation algorithm first derives that A(X) is true for X=0 and X>0. In the second step, the first fact with the third clause allows for the derivation of A(1). In the third step, A(2) is derived, etc. However, these facts are already entailed by the fact that A(X) is true for any nonnegative X. This drawback can be overcome by checking facts that are about to be added to the current set of consequences for entailment: if the new consequence is already entailed by the set, it is not added. Since facts are stored as clauses, possibly with "local variables", entailment is checked over the variables of their heads.
Concurrent constraint logic programming
The concurrent versions of constraint logic programming are aimed at programming concurrent processes rather than at solving constraint satisfaction problems. Goals in concurrent constraint logic programming are evaluated concurrently; a concurrent process is therefore programmed as the evaluation of a goal by the interpreter.
Syntactically, concurrent constraint logic programs are similar to non-concurrent programs, the only exception being that clauses include guards, which are constraints that may block the applicability of the clause under some conditions. Semantically, concurrent constraint logic programming differs from its non-concurrent versions because a goal evaluation is intended to realize a concurrent process rather than to find a solution to a problem. Most notably, this difference affects how the interpreter behaves when more than one clause is applicable: non-concurrent constraint logic programming recursively tries all clauses; concurrent constraint logic programming chooses only one. This is the most evident effect of an intended directionality of the interpreter, which never revises a choice it has previously made. Other effects of this are the semantic possibility of having a goal that cannot be proved while the whole evaluation does not fail, and a particular way of equating a goal and a clause head.
Applications
Constraint logic programming has been applied to a number of fields, such as automated scheduling,[1] type inference,[2] civil engineering, mechanical engineering, digital circuit verification, air traffic control, finance, and others.
History
Constraint logic programming was introduced by Jaffar and Lassez in 1987.[3] They generalized the observation that the term equations and disequations of Prolog II were a specific form of constraints, extending this idea to arbitrary constraint languages. The first implementations of this concept were Prolog III, CLP(R), and CHIP.
References
- Dechter, Rina (2003). Constraint Processing. Morgan Kaufmann. ISBN 1-55860-890-7.
- Apt, Krzysztof (2003). Principles of Constraint Programming. Cambridge University Press. ISBN 0-521-82583-0.
- Marriott, Kim; Stuckey, Peter J. (1998). Programming with Constraints: An Introduction. MIT Press. ISBN 0-262-13341-5.
- Frühwirth, Thom; Abdennadher, Slim (2003). Essentials of Constraint Programming. Springer. ISBN 3-540-67623-6.
- Jaffar, Joxan; Maher, Michael J. (1994). "Constraint logic programming: a survey". Journal of Logic Programming. 19/20: 503–581. doi:10.1016/0743-1066(94)90033-7.
- Colmerauer, Alain (August 1987). "Opening the Prolog III Universe". Byte.
- ^ Abdennadher, Slim; Schlenker, Hans (1999). "Nurse scheduling using constraint logic programming". AAAI/IAAI.
- ^ Michaylov, Spiro; Pfenning, Frank (1993). "Higher-Order Logic Programming as Constraint Logic Programming". PPCP. Vol. 93.
- ^ Jaffar, Joxan; Lassez, Jean-Louis (1987). "Constraint logic programming". Proceedings of the 14th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. ACM.
Constraint logic programming
Introduction
Definition and Overview
Constraint logic programming (CLP) is a declarative programming paradigm that integrates the logical inference mechanisms of logic programming with constraint-solving techniques over specified computational domains, where traditional unification is replaced by the imposition and propagation of constraints during execution.[6] This approach allows programmers to specify relations and restrictions on variables in a high-level, mathematical manner, enabling the system to automatically narrow down possible solutions through constraint propagation and search.[4] The core benefits of CLP include its ability to provide declarative specifications of complex problems, facilitating automatic constraint propagation that prunes inconsistent solutions early, and supporting efficient solving of NP-hard combinatorial problems such as scheduling and resource allocation.[7] By leveraging domain-specific solvers, CLP achieves higher expressiveness and often superior performance compared to pure logic programming for tasks involving numerical or relational constraints, while maintaining the intuitive rule-based structure of logic programs.[4] A basic example of a CLP clause illustrates this: send_package(X, Y) :- X #>= 10, Y #= X + 5, where #>= and #= denote constraints over finite domains, ensuring X is at least 10 and Y is exactly 5 more than X without immediate binding.[8] This clause can be queried to find valid bindings, such as X=10 implying Y=15, demonstrating how constraints defer decisions until necessary.
CLP extends Prolog by incorporating dedicated constraint solvers that operate over domains like integers or reals, allowing constraints to accumulate in a "store" and propagate incrementally rather than relying solely on term unification for variable binding.[8] This integration preserves Prolog's syntax and backtracking search while enhancing its capability to handle quantitative reasoning declaratively.
Relation to Other Paradigms
Constraint logic programming (CLP) extends pure logic programming paradigms, such as Prolog, by generalizing the unification mechanism to encompass broader constraint satisfaction over structured domains. In Prolog, unification enforces equality constraints solely over the algebra of terms, restricting computations to symbolic matching and logical inference.[4] CLP, in contrast, incorporates constraints from domains like finite integers or real numbers, allowing for numerical solving and relational propagation that surpass Prolog's capabilities, as exemplified by global satisfiability checks that prevent infinite loops in arithmetic queries.[6] This generalization maintains the declarative style of logic programming while enhancing expressiveness for problems involving quantitative relations.[4]

CLP distinguishes itself from traditional constraint programming (CP) by integrating constraint solving within a full logic programming framework that includes Horn clauses, backtracking, and non-deterministic choice. Pure CP emphasizes constraint propagation and local search to reduce variable domains, often without rule-based inference or control flow.[4] In CLP, constraints are dynamically generated and tested during execution, embedding CP techniques like propagation into a logic-based search strategy that supports complex rule compositions.[6] This merger enables CLP to handle both declarative specifications and efficient solving, unlike CP's narrower focus on static constraint networks.[4]

Relative to answer set programming (ASP), CLP prioritizes incremental constraint propagation and labeling over finite domains, facilitating step-by-step refinement of solutions in search trees. ASP, grounded in stable model semantics, employs a generate-and-test approach via DPLL-based solvers to compute multiple equilibrium models, excelling in non-monotonic reasoning but lacking CLP's built-in support for arithmetic constraints without extensions.[9] CLP's domains, such as finite sets, enable direct handling of combinatorial structures through propagation, differing from ASP's reliance on rule encodings for similar tasks.[9]

In comparison to satisfiability modulo theories (SMT), CLP supports flexible, incremental solving across heterogeneous domains like integers and reals, integrated with logic rules, whereas SMT focuses on decidable satisfiability checks within specific theories using combined propositional and theory solvers. CLP's non-deterministic backtracking allows exploration of solution spaces beyond SMT's emphasis on complete, theory-tailored verification, though both paradigms leverage constraint-like formulas for optimization. CLP thus bridges the declarative power of logic programming with the optimization strengths of constraint solving, providing a unified approach for tackling combinatorial problems in scheduling and configuration that demand both relational reasoning and numerical efficiency.[6]
Syntax and Semantics
Syntax of CLP Programs
Constraint logic programming (CLP) extends the syntax of traditional logic programming languages, such as Prolog, by incorporating constraints directly into program clauses. A CLP program consists of a set of definite clauses of the form H :- B1, ..., Bn, where H is an atomic formula (the head) over Herbrand terms, and B1, ..., Bn is a conjunction of goals that may include both ordinary atomic formulas and constraints. This structure allows constraints to express relations over variables in a declarative manner, generalizing unification to domain-specific solving.[4]

In the body of a clause, goals can be atomic predicates p(t1, ..., tn), where p is a user-defined predicate and t1, ..., tn are terms built from variables, constants, and function symbols in the Herbrand universe. Constraints appear as special literals c(t1, ..., tn), where c is a predicate from the underlying constraint domain and the ti are terms whose variables are constrained. Existential quantification can be explicit, allowing variables to be locally scoped within constraints or atoms. Facts take the form H :- c, where c is a constraint and the body contains no atoms.[4]

Constraint literals are atomic constraints over variables, typically primitive relations from the constraint domain, such as equalities, inequalities, or domain restrictions. For example, in a linear arithmetic domain, a constraint might be X + Y ≤ 3, while more complex constraints can combine primitives via conjunctions. Variables are denoted by identifiers such as X and Y, and terms follow standard Herbrand syntax, but constraints extend this with domain semantics, avoiding the need for explicit unification in many cases.[4]

Different CLP systems adopt specific notations for constraints, particularly in implementations like CLP(FD) for finite domains. In SICStus Prolog and similar systems, arithmetic constraints use infix operators such as #= for equality (e.g., X #= Y + 1), #< for strict inequality (e.g., X #< Y), and domain specifications like X in 1..10 to restrict X to integers from 1 to 10. Nonlinear constraints are also supported, such as X * Y #= Z for product equality over finite domains. These notations integrate seamlessly into clause bodies, as in the example clause sumto(N, S) :- N #>= 1, N #=< S, N1 #= N - 1, S1 #= S - N, sumto(N1, S1), which defines a summation relation with finite domain constraints. In contrast, generic CLP syntax, as in the original scheme, uses plain predicate notation for inequalities, without domain-specific operators.[10][11][4]
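A hedged, runnable completion of the sumto/2 clause quoted above, adding the base case it needs and assuming SWI-Prolog's library(clpfd):
:- use_module(library(clpfd)).
sumto(0, 0).                         % base case (assumed here for illustration)
sumto(N, S) :-
    N #>= 1, N #=< S,
    N1 #= N - 1, S1 #= S - N,
    sumto(N1, S1).
% ?- sumto(3, S).  yields S = 6, i.e. 1+2+3.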
Operational Semantics
Constraint logic programming (CLP) adapts the standard SLD-resolution mechanism from logic programming to incorporate constraint solving, where resolution steps involve not only unification but also constraint manipulation and entailment checks. In this framework, a computation state is typically represented as a tuple (A, C, S), where A is a multiset of atoms and constraints representing the current goals, C denotes the active (solved) constraints, and S represents the passive (simplified but unsolved) constraints accumulated so far. During resolution, an atom in A is selected and rewritten using a program clause, potentially introducing new constraints that are added to C or S; entailment checks ensure that added constraints are consistent with the existing store by verifying whether the new constraints are implied by the current ones. This adaptation allows CLP to handle constraints over various domains while preserving the goal-directed, depth-first search nature of SLD-derivations.[1][4] The execution of a CLP program proceeds in distinct stages: constraint posting, simplification, and inconsistency detection. Computation begins with an initial state (G, ∅, ∅), where G is the goal multiset and empty constraint stores; as resolution unfolds, constraints generated from clause heads or bodies are posted to the store via addition transitions, which append them to C or S. Simplification then occurs through inference transitions that apply domain-specific operations to narrow the constraints, such as reducing redundant expressions or projecting variables, while consistency checks via selection transitions test for entailment and detect failure if a constraint leads to inconsistency in the domain (e.g., empty solution set). A derivation succeeds if it reaches a state (∅, C, S) with no remaining goals, fails upon inconsistency, or flounders if non-ground goals cannot proceed without further instantiation. These stages enable incremental solving, where the constraint store evolves monotonically.[1][4] The operational semantics of constraint solving in CLP is characterized by a fixpoint semantics, where the constraint store is iteratively narrowed until it reaches a fixpoint or an inconsistency is detected. This process relies on a one-step consequence function T_P for the program logic and a simplification operator S_P for the constraints, with the least fixpoint of S_P applied to the initial empty store providing the semantic foundation for computed answers. Iterations continue by applying domain solvers to refine S until no further simplification is possible, ensuring that the final store represents the most precise approximation of solutions consistent with the program and goal. This fixpoint approach guarantees termination in finite domains or under bounded propagation, aligning operational behavior with declarative intent.[1][4] Handling of non-ground goals in CLP involves suspending constraint evaluation until variables are sufficiently instantiated, preventing premature failure or inefficiency. When a constraint involves unbound variables, it is added to the passive store S and suspended; resolution proceeds on other goals, and the constraint is reactivated (or "awakened") only when the store entails a guard condition, such as grounding of relevant variables via subsequent inferences. 
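The suspension-and-wake-up behaviour described above is close to Prolog coroutining; a tiny sketch using SWI-Prolog's freeze/2, standing in loosely for a guard-triggered reactivation mechanism:
% ?- freeze(X, X > 0), X = 3.
% X = 3.    % the test X > 0 was delayed until X became ground, then run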
This suspension mechanism, often implemented through coroutining primitives, allows derivations to continue despite partial information, with the final answer for free variables in the goal being the conjunction of active and passive constraints projected onto those variables. Such deferred evaluation is crucial for scalability in domains like finite sets or rationals, where full grounding may be computationally expensive.[1][12]
Declarative Semantics
Constraint logic programming (CLP) provides a declarative semantics grounded in model theory, where programs are viewed as logical theories over constraint structures. A CLP program P over a constraint system C, denoted CLP(C), consists of definite clauses possibly containing constraints from the language of C. The declarative meaning of P is captured by its theory, the set of universal closures of the clauses in P. Interpretations are defined over the Herbrand universe of the program, with the constraint predicates interpreted according to the structure C, which provides a satisfaction relation for constraints. A C-model of P is a structure in which all formulas of the theory are true, and solutions to goals correspond to valuations that satisfy the accumulated constraints in such models.[4]

The success set semantics formalizes the computed solutions of a CLP program independently of execution order. For a program P and goal G, the success set is the set of all substitutions θ such that G succeeds with some answer constraint c that is satisfiable in C and is satisfied by θ. This set characterizes the logical solutions as bindings that entail a satisfiable final constraint store, ensuring completeness with respect to the models of the program. Equivalently, it coincides with the set of substitutions whose range is contained in the least C-model of P.[4]

CLP programs are declaratively equivalent to constraint satisfaction problems (CSPs) augmented with logical structure. Specifically, a CLP(C) program defines a CSP over the Herbrand universe, where variables range over that universe, domains are implicitly infinite, and constraints are drawn from C or generated by the clauses. Satisfaction in this CSP is determined by the structure C, reducing goal solving to finding assignments that satisfy the resulting constraint network while respecting the logical implications of the program. This equivalence highlights CLP's role in generalizing traditional CSPs to include relational constraints via logic programming rules.[4]

Observational equivalence between CLP programs is defined in terms of matching success sets. Two programs P1 and P2 are observationally equivalent if their success sets coincide for every goal G under the same interpretation. This notion ensures that equivalent programs yield the same set of logical solutions, independent of syntactic differences or implementation details, facilitating program transformation and optimization while preserving declarative meaning.[4]
Constraint Domains
Finite Domain Constraints
Finite domain constraints in constraint logic programming (CLP) operate over variables ranging over finite sets of discrete values, typically integers, with a domain D(X) recorded for each variable X; failure occurs if any domain becomes empty.[13] These constraints encompass basic arithmetic relations such as equality (X #= Y), strict inequality (X #< Y), and domain membership (X in 1..n), alongside global constraints that capture complex combinatorial structures.[13] Notable global constraints include alldifferent, which requires all variables in a list to take distinct values, enabling efficient propagation for permutation problems, and cumulative, which limits the total resource consumption of overlapping tasks at any time point in scheduling applications.[14][15]

Solving finite domain constraints emphasizes constraint propagation to prune impossible values from domains, with arc consistency serving as a foundational technique.[16] Arc consistency ensures that, for every binary constraint between variables X and Y and for each value in D(X), there exists at least one compatible value in D(Y).[17] The AC-3 algorithm enforces this by maintaining a queue of constraint arcs and iteratively revising domains: for an arc (X, Y), it removes any value of D(X) lacking support in D(Y), then requeues incoming arcs to the revised variable to propagate changes.[17] Stronger consistencies, such as path consistency (checking triples of variables) or bounds consistency (focusing on domain endpoints), offer greater pruning for specific constraint types but increase computational overhead.[16]

In CLP(FD) systems, propagation centers on domain reduction via specialized propagators: monotonic functions that refine domains upon constraint posting or variable updates.[16] These propagators trigger on domain events (e.g., bound changes or value fixes), computing the fixpoint of successive reductions to eliminate infeasible values while preserving solutions.[16] For instance, consider variables X and Y with initial domains {1,2,3} and the constraint X #< Y. Propagation removes 3 from D(X) (no value of Y exceeds it) and 1 from D(Y) (no value of X is below it), yielding D(X) = {1,2} and D(Y) = {2,3}, which contain exactly the pairs satisfying the relation.[13] Global constraints like alldifferent and cumulative leverage dedicated propagators for efficient filtering beyond pairwise checks.[16] The alldifferent propagator, for example, achieves generalized arc consistency by computing a maximum matching between variables and values and pruning unsupported values, in time polynomial in the number of variables and the maximum domain size.[14] Similarly, cumulative propagators adjust task start times and durations to respect resource limits, using time-line or edge-finding techniques to detect overloads early.[15] This domain-centric approach distinguishes CLP(FD) propagation, focusing on iterative refinement to narrow search spaces for combinatorial problems.[16]
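The X #< Y propagation example above can be checked directly in SWI-Prolog's library(clpfd) (an assumption; fd_dom/2 reports a variable's current domain):
% ?- use_module(library(clpfd)).
% ?- [X, Y] ins 1..3, X #< Y, fd_dom(X, DX), fd_dom(Y, DY).
% DX = 1..2, DY = 2..3.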
Linear Constraints over Reals
Linear constraints over the reals in constraint logic programming (CLP) involve linear equalities and inequalities of the form a1·x1 + ... + an·xn ⋈ b, where the ai and b are constants, the xi are variables ranging over the reals, and ⋈ is one of =, ≤, or <.[18] These constraints are solved incrementally within the CLP framework, typically over rational numbers for exactness or floating-point representations for efficiency, allowing the constraint store to maintain a set of linear relations that are solved on-the-fly during program execution. The CLP(R) language exemplifies this domain, where constraints are posted and solved without immediate variable instantiation, enabling declarative modeling of continuous optimization problems.[18]

Solving these constraints relies on integrating linear programming techniques, particularly a dynamic variant of the revised simplex method adapted for incremental constraint addition and removal in CLP systems. In CLP(R), the solver maintains a canonical representation of the constraints as a set of equalities in solved form, using the simplex algorithm to detect satisfiability, project bounds on variables, and detect implicit equalities or redundancies.[18] This integration handles both feasibility checking for sets of inequalities and optimization objectives, such as minimizing a linear function subject to the constraints, by pivoting through the feasible region defined by the polyhedron. For instance, nonlinear constraints are delayed until they linearize through variable bindings, ensuring the solver focuses on linear arithmetic.[18]

A representative example is minimizing the objective X + Y subject to the constraints X ≥ 1, Y ≥ 1, and X + Y ≤ 4. In a CLP(R) program, posting these constraints and invoking the minimizer yields the solution X = 1, Y = 1, with objective value 2, as the simplex solver identifies the optimal vertex of the feasible region. Implementations face challenges with floating-point precision, where accumulated rounding errors can lead to incorrect satisfiability decisions or bound projections, necessitating techniques like interval arithmetic or exact rational arithmetic to mitigate inaccuracies.[19] Additionally, while satisfiability of linear constraints over the reals is decidable via methods like Fourier-Motzkin elimination or simplex-based procedures, extending to nonlinear constraints renders the problem undecidable in general, limiting CLP systems to linear cases for termination guarantees.[20]
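The minimization example above can be reproduced, for instance, with SWI-Prolog's library(clpq), whose minimize/1 optimizes a linear expression over the current store (the library choice and its brace syntax are assumptions; the constraints are the invented ones from the example):
% ?- use_module(library(clpq)).
% ?- { X >= 1, Y >= 1, X + Y =< 4 }, minimize(X + Y).
% X = 1, Y = 1.    % objective value 2, the optimal vertex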
Other Specialized Domains
In constraint logic programming, tree domains enable constraints on structured data represented as trees, where variables denote subtrees and equations enforce structural relationships. Tree terms are constructed using functor symbols and constants, allowing constraints such as equality or disequality to specify differences in term structure.[21] Solving these involves unification, which substitutes variables to make terms identical while handling occurs-checks to avoid cycles, as in the failure of an equation of the form X = f(X).[21] For more expressive structures, feature tree constraints extend this by incorporating attribute-value pairs with associativity and commutativity (AC) theories, enabling efficient entailment checking and narrowing-based solving for complex symbolic reasoning. An example is a constraint of the form X = f(Y, Z), which ensures X has functor f with subterms Y and Z, propagating structural information without full instantiation.[21] These domains are particularly useful for natural language processing and knowledge representation, where hierarchical data requires partial matching.[21]

Boolean domains in CLP treat variables as taking values in {0,1}, supporting SAT-like constraints such as conjunction, disjunction, and implication through dedicated solvers like CLP(B). These systems use propagation rules to achieve local consistency, decomposing high-level constraints into low-level primitives and reducing the search space efficiently.[22] For instance, a conjunction constraint propagates bounds via rules that fix each operand as soon as the others are known, integrated into the WAM with minimal instructions for speedups of up to 10 times over earlier boolean solvers.[23] CLP(B) implementations often outperform specialized SAT tools on combinatorial problems by leveraging logic programming's declarative style, though they emphasize algebraic propagation over full clause learning.[22]

Interval domains address nonlinear constraints over reals by representing variable domains as closed intervals and using interval arithmetic for propagation, avoiding floating-point errors through enclosure guarantees. In systems like CLP(Intervals), nonlinear equations are solved by narrowing intervals via box-consistency, which prunes domains more aggressively than arc-consistency while remaining computationally feasible.[24] Branch-and-bound search then bisects intervals, selecting variables by heuristics like greatest reduced range to find all solutions or optimize objectives.[24] For example, in Newton, a nonlinear solver, polynomial constraints over intervals are refined iteratively with interval Newton methods, outperforming pure continuation techniques in optimization tasks.[25] These approaches excel in applications like computational geometry, ensuring certified enclosures for roots of polynomials.[25]
Execution Model
The Constraint Store
In constraint logic programming (CLP), the constraint store is defined as a conjunction of constraints over a set of variables, which represents the partial information accumulated about potential solutions during program execution.[4] This store serves as the central data structure that maintains the current state of constraints, ensuring that all posted constraints are consistent and imply the feasible region for variable assignments.[6] As computation proceeds, the store evolves by incorporating new constraints while preserving logical entailment to the previous state.[26]

The constraint store is represented as a logical formula in the theory of the underlying constraint domain, such as a conjunction of atomic constraints or inequalities. For instance, in the domain of real numbers, it might take the form of a set of linear inequalities such as X ≥ 0 ∧ X + Y ≤ 4, which defines a polyhedron in the variable space.[4] This representation allows the store to be manipulated using domain-specific solvers, but it remains abstract enough to apply across different domains like finite sets or rationals.[6]

Key operations on the constraint store include entailment, simplification, and projection. Entailment checks whether the current store S logically implies a new constraint c, denoted S ⊨ c, which determines whether c adds no new information and can be safely ignored or used in conditional execution.[4] Simplification updates the store upon adding c by computing a reduced form S' that is logically equivalent to S ∧ c but more concise, often through variable elimination or redundancy removal.[26] Projection involves existential quantification to eliminate auxiliary variables, yielding a constraint on the solution variables only, which produces the final answer constraints upon successful termination.[4]

A simple example illustrates the store's evolution: the initial store is empty (true), representing no constraints. Posting a constraint such as X > 2 updates the store to X > 2. Further posting Y = X + 1 yields X > 2 ∧ Y = X + 1, where the store now conjunctively encodes both relations without immediate simplification unless domain propagation is applied.[6] This process ensures that the store monotonically accumulates information toward solution discovery.[26]
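Entailment is directly observable in some systems; a small sketch assuming SWI-Prolog's library(clpq) and its entailed/1 check:
% ?- use_module(library(clpq)).
% ?- { X >= 2 }, entailed(X > 1).   % succeeds: the store implies X > 1
% ?- { X >= 2 }, entailed(X > 3).   % fails: not implied by the store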
Constraint Propagation
Constraint propagation is a core inference mechanism in constraint logic programming (CLP) that reduces the domains of variables in the constraint store by deriving and applying new constraints implied by the existing ones, thereby pruning inconsistent values and detecting infeasibilities early.[4] This process transforms passive constraints into active ones, tightening the overall search space without instantiating variables.[4] For instance, given constraints X < Y and Y <= 5, propagation infers X < 5, eliminating values from X's domain that violate this bound.[27]
Key techniques for constraint propagation include forward checking, arc consistency, and domain-specific propagators. Forward checking removes values from the domains of unassigned variables that are inconsistent with the current partial assignment, ensuring arc consistency on constraints between assigned and future variables; it operates in O(ed) time per call, where e is the number of constraints and d the domain size.[27] Arc consistency enforces that every value in a variable's domain has at least one supporting value in the domains of all variables connected by binary constraints, using algorithms like AC-3 that revise arcs via a queue until no further changes occur, with worst-case complexity O(ed³).[27] Domain-specific propagators tailor propagation to particular constraints; for example, the all-different propagator ensures a set of variables takes pairwise distinct values by modeling the problem as a bipartite matching and pruning unsupported values, achieving generalized arc consistency in O(m√n) time for n variables, where m is the number of variable-value pairs, using algorithms based on maximum matching.[28]
To manage efficiency, propagators are scheduled using wake-up mechanisms that trigger re-execution only when relevant events occur, such as domain reductions or bound changes on involved variables.[4] These mechanisms employ event queues to process updates like value removals or instantiations, avoiding redundant computations by propagating changes incrementally across the constraint network.[27]
Propagation terminates when the constraint store reaches a fixpoint, where no further domain reductions are possible, guaranteed by the monotonic decrease of finite domains or the absence of changes in the propagation queue.[27] In CLP systems with finite domains, this ensures halting, as each propagation step strictly reduces the possible solution space until stability or inconsistency is detected.[4]
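A minimal illustration with SWI-Prolog's library(clpfd): the bounds of X and Y are pruned by propagation alone, before any search takes place (answer shown as reported by the toplevel):
:- use_module(library(clpfd)).

% Propagation prunes domains without any labeling:
% ?- X in 1..10, Y in 1..10, X #< Y, Y #=< 5.
% X in 1..4,
% Y in 2..5.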
Labeling and Search Strategies
In constraint logic programming (CLP), labeling refers to the process of assigning concrete values to unbound variables from their remaining domains after constraint propagation has reduced the possible choices, thereby instantiating the solution or detecting inconsistency.[29] This phase is essential for generating explicit solutions, as propagation alone may leave variables with multiple feasible values without fully resolving the problem.[30] Typically implemented via predicates like indomain in systems such as CHIP or SICStus Prolog, labeling proceeds by selecting a value, adding the corresponding equality constraint to the store, and propagating its effects; if propagation succeeds, it continues recursively, but failure triggers backtracking.[29] For instance, given a variable X with domain {1, 2, 3} after propagation, labeling might first try X = 1, propagate, and if that leads to inconsistency, backtrack to try X = 2.[30]
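In SWI-Prolog's library(clpfd), this enumeration is provided by label/1 and labeling/2; a minimal sketch showing backtracking through the values that survive propagation:
:- use_module(library(clpfd)).

% Labeling enumerates only values that survive propagation:
% ?- X in 1..3, X #\= 2, label([X]).
% X = 1 ;
% X = 3.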
Search in CLP employs a depth-first traversal of the search tree, where choice points arise from non-deterministic labeling decisions, and chronological backtracking restores the constraint store to a previous state upon failure.[4] The search tree branches at each labeling step based on the variable's domain size, with leaves representing complete assignments or failures; this backtracking mechanism, inherited from logic programming, ensures systematic exploration but can be inefficient for large domains without optimization.[29] To mitigate exponential growth, advanced techniques such as backjumping (returning directly to the ancestor choice point responsible for a failure) and backmarking (caching earlier consistency checks to avoid repeating them) are integrated, reducing thrashing in constraint-heavy problems.[30]
Heuristics guide efficient search by prioritizing variable and value selections that minimize the explored tree size. Variable ordering often follows the "fail-first" principle, selecting the most constrained variable (e.g., smallest domain or most constraints) to fail early and prune invalid paths, as in N-Queens puzzles where row variables are ordered dynamically.[30][29] Value ordering uses "succeed-first" strategies, preferring values that maximize supports or minimize domain reductions in affected variables, such as trying central values in SEND+MORE=MONEY cryptarithms to reduce conflicts.[30] Additional heuristics include "most constraining" (prioritizing variables impacting others most) and "maximum regret" (choosing variables with the largest difference between best and worst outcomes), with restarts periodically resampling to escape local optima in optimization tasks.[29] These, often implemented via options in labeling predicates, can dramatically improve performance, as seen in reducing search attempts from 19 to 2 in simple consistency maintenance examples.[30]
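The fail-first principle mentioned above is available in SWI-Prolog's labeling/2 as the ff option; a minimal N-Queens sketch (the standard clpfd formulation, shown here for illustration rather than taken from the cited sources):
:- use_module(library(clpfd)).

% Place N queens so that no two share a row, column, or diagonal;
% labeling([ff], Qs) labels the variable with the smallest domain first.
queens(N, Qs) :-
    length(Qs, N),
    Qs ins 1..N,
    safe(Qs),
    labeling([ff], Qs).

safe([]).
safe([Q|Qs]) :-
    no_attack(Q, Qs, 1),
    safe(Qs).

no_attack(_, [], _).
no_attack(Q, [Q1|Qs], D) :-
    Q #\= Q1,
    abs(Q - Q1) #\= D,
    D1 is D + 1,
    no_attack(Q, Qs, D1).

% ?- queens(8, Qs).
% Qs = [1, 5, 8, 6, 3, 7, 2, 4] ; ...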
Programming Techniques
Program Reformulations
Program reformulations in constraint logic programming (CLP) involve static transformations of programs that enhance efficiency by optimizing constraint propagation and search order, while preserving the program's observational semantics, meaning the set of solutions and their constraints remains unchanged. These techniques adapt methods from traditional logic programming, such as unfolding and magic sets, to the constraint-based execution model of CLP, enabling earlier detection of inconsistencies and reduced search spaces. By rewriting rules or constraints, reformulations ensure that propagation occurs in a more goal-directed manner, which is particularly beneficial for large-scale problems in domains like scheduling and optimization.[31]
Unfolding and magic sets represent key adaptations of bottom-up optimizations to CLP. Unfolding replaces a procedure call in a rule body with the definition of that procedure, potentially simplifying the program and allowing constraints to propagate earlier during execution; for instance, in a recursive definition of task precedences, unfolding can inline resource constraints to prune infeasible partial assignments sooner. Magic sets, extended via ground magic-sets transformations, incorporate constraint conditions (e.g., inequalities such as X < Y) into the rewriting process using adornments such as "bcf" (bound-condition-free) to guide sideways information passing, thereby restricting the computation to relevant constraint tuples and improving propagation order in bottom-up evaluation of CLP programs. These transformations maintain range-restrictedness and equivalence to the original program, as proven through fixpoint semantics.[31][32]
Constraint reformulation often rewrites complex global constraints into combinations of primitive ones to leverage efficient built-in solvers. For example, the global alldifferent constraint, which requires all variables in a set to take distinct values, can be decomposed into binary disequality constraints Xi ≠ Xj for all i < j, achieving arc consistency through forward checking or matching algorithms, though this may weaken global propagation compared to dedicated implementations; a sketch of this decomposition follows below. In a scheduling context, consider a job-shop problem with tasks competing for machines, where precedences are defined recursively (for instance, start(T2) ≥ start(T1) + duration(T1)) and resource constraints ensure non-overlap on each machine (start(A) + duration(A) ≤ start(B) or start(B) + duration(B) ≤ start(A)); reformulating by unfolding the precedence rules to post the resource disjunctions earlier propagates machine availability limits during initial labeling, reducing the branching factor and enabling earlier failure detection in branch-and-bound search. Such reformulations preserve observational equivalence, as the transformed program's computed answers match those of the original under the same query.[28][4][31]
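A minimal sketch of the alldifferent decomposition in SWI-Prolog's library(clpfd); pairwise_distinct/1 and all_neq/2 are hypothetical helper names, and the built-in all_distinct/1 with its matching-based propagator is the stronger alternative:
:- use_module(library(clpfd)).

% Decompose alldifferent into binary disequalities Xi #\= Xj.
% Propagation is weaker than the global all_distinct/1.
pairwise_distinct([]).
pairwise_distinct([X|Xs]) :-
    all_neq(X, Xs),
    pairwise_distinct(Xs).

all_neq(_, []).
all_neq(X, [Y|Ys]) :-
    X #\= Y,
    all_neq(X, Ys).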
Constraint handling rules provide a complementary approach for reformulating constraints dynamically during execution.[33]
Constraint Handling Rules
Constraint Handling Rules (CHR) is a declarative, committed-choice, rule-based programming language designed for specifying custom constraint propagators in constraint logic programming (CLP) systems. Introduced by Thom Frühwirth in 1991, CHR allows programmers to define transformation rules that operate on multisets of constraints, enabling the implementation of domain-independent propagation logic that complements the built-in solvers of CLP languages like Prolog-based systems.[33] By focusing on rewriting and adding constraints, CHR facilitates the creation of efficient, modular propagators that enhance constraint solving without altering the underlying CLP engine.
The syntax of CHR rules centers on a head, an optional guard, and a body, where the head consists of one or more user-defined constraints, the guard is a conjunction of built-in tests, and the body includes new constraints and simplifications.[33] CHR distinguishes three primary types based on their operational effect: simplification, propagation, and simpagation. Simplification rules, denoted Head <=> Guard | Body, replace matching constraints in the head with those in the body while preserving logical equivalence, effectively reducing the constraint store.[33] Propagation rules, written Head ==> Guard | Body, add the body constraints as logical redundancies without removing the head, allowing derived information to trigger further inferences. Simpagation rules combine elements of both, using KeptHead \ RemovedHead <=> Guard | Body to retain the kept head constraints, remove the others, and introduce the body, which can lead to more concise and efficient rule sets compared to separate simplification and propagation.[33]
In CLP systems, CHR integrates as a host language extension, where CHR constraints coexist with the CLP's native built-in constraints, enabling bidirectional interaction: CHR rules can invoke built-in operations (e.g., arithmetic tests in the guard), and CLP queries can activate CHR propagators. This setup allows CHR to serve as a meta-language for defining propagators that are independent of specific constraint domains, such as finite domains or linear arithmetic, by transforming user-defined constraints into calls to the underlying solver.[33] Operationally, CHR employs a top-down, committed-choice execution model: when multiple rules match a constraint, the first applicable one (in textual order) fires without backtracking; under confluence conditions, the final constraint store does not depend on the order of rule application. Declaratively, CHR rules correspond to logical implications, providing a clean semantics for constraint transformation.[33]
A representative example is a propagation rule implementing transitivity for strict inequalities in a numerical domain:
transitivity @ X #< Y, Y #< Z ==> X #< Z.
This rule adds X #< Z whenever X #< Y and Y #< Z are present, enabling chain inferences without removing the original constraints, which is particularly useful in optimization problems. For simplification, consider a simpagation rule that replaces a redundant arithmetic equality:
eq_simp @ X = Y \ X #= 1 <=> true | X = 1.
When both X = Y and X #= 1 hold, the rule keeps X = Y but replaces X #= 1 with the binding X = 1, tightening the store.[33] Such rules demonstrate CHR's role in building dynamic, custom propagation logic atop CLP frameworks.
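Rules such as these become runnable once the constraints are declared. The classic less-than-or-equal program from the CHR literature, loadable in SWI-Prolog's library(chr), illustrates all three rule types with a user-defined leq/2 constraint in place of the built-in #<:
:- use_module(library(chr)).

:- chr_constraint leq/2.

reflexivity  @ leq(X, X) <=> true.
antisymmetry @ leq(X, Y), leq(Y, X) <=> X = Y.
idempotence  @ leq(X, Y) \ leq(X, Y) <=> true.
transitivity @ leq(X, Y), leq(Y, Z) ==> leq(X, Z).

% Transitivity derives leq(A, C); antisymmetry then collapses the cycle:
% ?- leq(A, B), leq(B, C), leq(C, A).
% A = B, B = C.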
Extensions and Variants
Concurrent CLP
Concurrent constraint logic programming (CCLP) extends the CLP paradigm to enable concurrent execution of processes that interact through a shared constraint store, facilitating distributed solving of constraint problems. In this setting, the standard CLP constraint store is adapted for concurrent access, where multiple processes can simultaneously add (tell) and query (ask) constraints without explicit synchronization primitives. This model supports reactive behaviors, as processes synchronize and communicate implicitly via the evolution of the shared store, making it suitable for modeling distributed systems.[34]
A key realization of CCLP is cc(FD), a declarative, nondeterministic language for constraints over finite domains that employs ask/tell operations on a shared store to manage concurrency and nondeterminism. Developed as an extension of the concurrent constraint (cc) framework, cc(FD) integrates domain-specific propagation techniques, such as indexical constraints, to efficiently narrow variable domains during concurrent execution. Programs in cc(FD) are written in a Prolog-like syntax, where constraints are posted to the store, and execution proceeds through parallel exploration of alternatives, enabling high-level descriptions of concurrent constraint solving. Experimental evaluations demonstrate that cc(FD) achieves competitive performance on combinatorial problems like graph coloring and scheduling, outperforming sequential CLP systems in parallel environments.[35][36]
In CCLP models, such as the cc framework, multiple agents operate concurrently by communicating exclusively through constraints added to a global store, allowing for coordinated decision-making without direct messaging. Agents post local constraints representing their knowledge or goals, and the system propagates these to detect inconsistencies or entailments across the distributed environment, supporting applications in reactive and multi-agent coordination. This communication model ensures monotonic growth of information in the store, promoting determinism in observable behaviors despite underlying nondeterminism.[37]
Parallel search in CCLP focuses on distributing the labeling phase, where variables are assigned values, across multiple processors to accelerate exploration of the search space, incorporating load balancing to mitigate imbalances in computational effort. Techniques involve work-stealing or dynamic partitioning of search subtrees, where processors request additional work from idle peers based on estimated costs, achieving significant speedups on large-scale constraint satisfaction problems like vehicle routing. Load balancing is critical, as uneven distribution can lead to idle processors; heuristics estimate subtree sizes using domain metrics to allocate tasks proportionally.[38][39]
A representative application is multi-agent scheduling, as implemented in concurrent constraint languages like Oz, where each agent manages a subset of tasks or resources and posts local constraints (e.g., time windows or resource availability) to a shared global store. Concurrent propagation across agents resolves conflicts and synchronizes schedules, enabling efficient solutions to distributed problems such as job-shop scheduling with inter-agent dependencies. This approach has been shown to handle dynamic rescheduling effectively, adapting to changes in the environment through incremental constraint addition.[40]
Bottom-Up Evaluation
Bottom-up evaluation in constraint logic programming (CLP) offers a declarative strategy for computing solutions by iteratively generating facts and their associated constraints from program rules, contrasting with goal-directed search by ensuring completeness through fixpoint semantics. This method is especially suited to deductive database applications and large-scale query processing, where it avoids redundant derivations by building solutions from the ground up.[41]
Tabling extends bottom-up evaluation in CLP by memoizing partial solutions, including their constraint stores, to prevent recomputation of subgoals. In tabled CLP (TCLP) systems, such as those built on the XSB tabled logic programming engine augmented with attributed variables for constraints, tables capture bindings and constraints from completed subcomputations, enabling reuse across multiple query branches. This technique proves effective for handling recursion and non-determinism, as seen in implementations that integrate constraint solvers incrementally with the tabling mechanism.[42]
Fixpoint computation underlies bottom-up evaluation, applying the immediate consequence operator repeatedly to an initial set of facts until the least fixpoint, a complete set of derivable facts with constraints, is obtained. In CLP variants like constraint Datalog, this iteration generates atomic formulas over domains such as integers, where each new fact carries propagated arithmetic constraints from rule applications. The process terminates for programs with stratified negation or linear constraints, yielding solutions representable as linear arithmetic formulas.[43]
Constraint handling in bottom-up evaluation involves propagating stores during fixpoint iterations, often optimized via abstract interpretation to replace explicit constraint solving with simplified operations like set unions and conjunctions. For Datalog-like CLP programs, this enables efficient query optimization by transforming constraints into tractable forms, such as geometric representations for arithmetic domains, avoiding explosion in the constraint space.[44][41]
A representative example is computing the transitive closure of a graph with path length constraints. Consider a program with rules like path(X, Y, L) :- edge(X, Y), L = 1. and path(X, Y, L) :- path(X, Z, M), edge(Z, Y), L = M + 1, L <= K., where K bounds path lengths. Bottom-up evaluation starts with base facts from edge and iteratively applies the recursive rule, accumulating and propagating length constraints to derive all reachable pairs (X, Y) satisfying the bound at the fixpoint, useful for optimized reachability queries in constraint databases.[43]
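Under a top-down engine, tabling achieves the same bounded closure. A minimal sketch in SWI-Prolog, with the bound K fixed at 3 and the arithmetic done eagerly rather than through a constraint solver (the edge/2 facts are illustrative):
:- table path/3.

edge(a, b).
edge(b, c).
edge(c, d).

% Bounded transitive closure: derive paths of length at most 3.
% Tabling memoizes path/3 answers, so the left recursion terminates.
path(X, Y, 1) :-
    edge(X, Y).
path(X, Y, L) :-
    path(X, Z, M),
    edge(Z, Y),
    L is M + 1,
    L =< 3.

% ?- path(a, Y, L).
% Y = b, L = 1 ;  Y = c, L = 2 ;  Y = d, L = 3.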
Integration with Other Systems
Constraint logic programming (CLP) often integrates with mixed integer programming (MIP) solvers to leverage their optimization capabilities for problems involving linear constraints and integer variables. In systems like ECLiPSe, the Eplex library provides a declarative interface for modeling MIP problems using CLP syntax, such as linear equalities and integrality constraints, while interfacing procedurally with external MIP solvers like CPLEX through primitives for setup and solving.[45] This integration enables event-driven propagation, where changes in CLP constraints automatically trigger updates in the MIP solver, improving efficiency for hybrid search strategies like branch-and-cut.[45] Similarly, the SCIP framework implements constraint integer programming (CIP), combining CLP's general constraint handling with MIP's branch-and-cut techniques in a unified search tree, supporting domain propagation and cutting planes for nonlinear extensions.[46] Another approach transforms CLP programs by eliminating disjunctions with auxiliary variables and maps them to MIP formulations solvable by CPLEX, enhancing bound tightening through finite domain propagation and dual simplex methods.[47]
CLP also hybridizes with satisfiability modulo theories (SMT) solvers to handle complex logical and arithmetic constraints. The Picat language, which extends CLP principles with rule-based programming, supports direct integration with SMT solvers like Z3 alongside MIP and SAT modules, allowing seamless switching between solvers for constraint solving over finite domains and theories.[48] This enables encoding CLP problems into SMT-LIB format, facilitating verification tasks by translating Prolog-like CLP subsets into SMT inputs for solvers like Z3 or CVC4.[49] Such hybrids benefit from SMT's strength in handling quantifiers and theories while retaining CLP's search and propagation mechanisms.[49]
Embeddings of CLP into other paradigms expand its applicability. In functional languages, Curry integrates CLP features like equational constraints and finite domain solving with lazy evaluation and narrowing, using operations such as =:= for constraint posting and external solver interfaces via primitives like tell and ask.[50] This allows logical variables in functional expressions, supporting higher-order functions and non-determinism for demand-driven computation.[50] For object-oriented systems, integrations like the Kaleidoscope language embed constraints into OO features, using graph rewriting on constraint networks to propagate changes across objects, combining inheritance and encapsulation with CLP's declarative solving.[51] More recent efforts, such as constraint-logic OO languages, introduce novel concepts like constraint methods and reactive objects to unify OO encapsulation with CLP propagation, enabling dynamic constraint resolution in class hierarchies.[52]
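The Eplex modeling style can be sketched as follows. This is an illustrative reconstruction based on the ECLiPSe documentation conventions ($::, $>=, $=< for linear constraints handed to the external solver), not code from the cited papers, and predicate details may vary across ECLiPSe versions:
% ECLiPSe with the Eplex library (not standard Prolog):
:- lib(eplex).

% small_mip/3 is a hypothetical example predicate.
small_mip(Cost, X, Y) :-
    [X, Y] $:: 0.0..10.0,             % continuous variables with bounds
    integers([X]),                    % declare X integral, making this a MIP
    X + Y $>= 3,                      % linear constraints passed to the
    2*X - Y $=< 5,                    % external solver (e.g. CPLEX)
    eplex_solver_setup(min(X + 2*Y)),
    eplex_solve(Cost).                % Cost is bound to the optimum found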
Lazy CLP variants incorporate demand-driven evaluation through lazy narrowing, optimizing computation by evaluating only required parts of expressions. In constraint functional logic programming over finite domains (CFLP(FD)), lazy narrowing uses definitional trees to guide search, integrating FD constraints with functional features like currying and higher-order patterns, as implemented in the TOY(FD) system.[53] This demand-driven strategy reduces overhead by suspending evaluation on uninstantiated variables, improving efficiency over strict CLP schemes for problems with partial information.[53]
A practical example of external solver integration occurs in CLP(R) for handling nonlinear real constraints, where internal solvers focus on linear cases and defer nonlinear ones. Cooperative architectures use protocols to interface CLP(R) with external tools like Maple for symbolic solving or Gröbner bases via ECLiPSe, coordinating a constraint manager with linear solvers to exchange data and resolve non-linearities incrementally.[54] This hybrid approach maintains CLP(R)'s propagation while outsourcing complex nonlinear solving, ensuring consistency in domains like real arithmetic.[54]
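SWI-Prolog's library(clpr) exhibits the deferral behavior described here on a small scale: a nonlinear constraint is suspended until enough variables are bound to make it linear. A minimal sketch, with the answer shown schematically:
:- use_module(library(clpr)).

% X*Y = 6 is nonlinear, so the solver delays it; once X = 3 makes
% it linear, the delayed constraint wakes up and determines Y.
% ?- {X * Y = 6}, {X = 3}.
% X = 3.0,
% Y = 2.0.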
