Conflict-driven clause learning
In computer science, conflict-driven clause learning (CDCL) is an algorithm for solving the Boolean satisfiability problem (SAT). Given a Boolean formula, the SAT problem asks for an assignment of variables so that the entire formula evaluates to true. The internal workings of CDCL SAT solvers were inspired by DPLL solvers. The main difference between CDCL and DPLL is that CDCL's backjumping is non-chronological.
Conflict-driven clause learning was proposed by Marques-Silva and Karem A. Sakallah (1996, 1999)[1][2] and Bayardo and Schrag (1997).[3]
Background
Boolean satisfiability problem
The satisfiability problem consists in finding a satisfying assignment for a given formula in conjunctive normal form (CNF).
An example of such a formula is:

((not A) or (not C)) and (B or C)

or, using a common notation:[4]

(¬A ∨ ¬C) ∧ (B ∨ C)

where A, B, C are Boolean variables, ¬A, ¬C, B, and C are literals, and ¬A ∨ ¬C and B ∨ C are clauses.
A satisfying assignment for this formula is e.g.:

A = False, B = False, C = True

since it makes the first clause true (since ¬A is true) as well as the second one (since C is true).
This example uses three variables (A, B, C), and there are two possible assignments (True and False) for each of them. So one has 2³ = 8 possibilities. In this small example, one can use brute-force search to try all possible assignments and check whether they satisfy the formula. But in realistic applications with millions of variables and clauses, brute-force search is impractical. The responsibility of a SAT solver is to find a satisfying assignment efficiently for complex CNF formulas by applying different heuristics.
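For such a small instance, the brute-force approach can be sketched directly in Python. The formula encoded here is the article's two-clause example (¬A ∨ ¬C) ∧ (B ∨ C); the helper names are illustrative:

```python
from itertools import product

# Brute-force satisfiability check for the two-clause example
# (not A or not C) and (B or C); helper names are illustrative.
def satisfies(assignment):
    a, b, c = assignment["A"], assignment["B"], assignment["C"]
    return ((not a) or (not c)) and (b or c)

def brute_force():
    # Enumerate all 2**3 = 8 assignments of (A, B, C).
    for values in product([False, True], repeat=3):
        assignment = dict(zip("ABC", values))
        if satisfies(assignment):
            return assignment
    return None  # no satisfying assignment: formula is unsatisfiable
```

Here the search inspects at most 8 candidates; with n variables the same loop would need up to 2^n iterations, which is why solvers rely on propagation and learning instead.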
Unit clause rule (unit propagation)
If a clause has all but one of its literals evaluated at False, then the remaining free literal must be True in order for the clause to be True. For example, if the unsatisfied clause A ∨ B ∨ C is evaluated with A = False and B = False, we must have C = True in order for the clause to be true.
The iterated application of the unit clause rule is referred to as unit propagation or Boolean constraint propagation (BCP).
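A minimal sketch of iterated unit propagation, with clauses represented as lists of signed integers (DIMACS-style: a positive number means the variable is true, a negative one its negation); the function name and representation are assumptions for illustration:

```python
# Iterated unit clause rule (Boolean constraint propagation, BCP).
# Clauses are lists of signed ints; assignment maps variable -> bool.
def unit_propagate(clauses, assignment):
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                value = assignment.get(abs(lit))
                if value is None:
                    unassigned.append(lit)
                elif (lit > 0) == value:
                    satisfied = True  # some literal is already true
                    break
            if satisfied:
                continue
            if not unassigned:
                return None  # every literal false: conflict
            if len(unassigned) == 1:
                forced = unassigned[0]
                assignment[abs(forced)] = forced > 0  # unit clause rule
                changed = True
    return assignment
```

For example, from the clauses x1, (¬x1 ∨ x2), and (¬x2 ∨ x3), the empty assignment propagates to x1 = x2 = x3 = true.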
Resolution
Consider two clauses A ∨ B and ¬B ∨ C. The clause A ∨ C, obtained by merging the two clauses and removing both ¬B and B, is called the resolvent of the two clauses.
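With clauses as sets of signed integers, the resolution step is a one-liner; this representation is an assumption for illustration:

```python
# Resolvent of two clauses on a pivot variable: c1 contains the pivot
# literal, c2 its negation; both are removed and the rest merged.
def resolve(c1, c2, pivot):
    assert pivot in c1 and -pivot in c2
    return frozenset((c1 - {pivot}) | (c2 - {-pivot}))
```

Resolving A ∨ B with ¬B ∨ C on B (encoded as variables 1, 2, 3) yields A ∨ C.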
Algorithm
Formalization
A notation similar to the sequent calculus can be used to formalize many rewriting algorithms, including CDCL. The state of the solver consists of a partial assignment M, the formula F, and possibly a conflict clause C. The following are the rules a CDCL solver can apply in order to either show that there is no satisfying assignment or to find one.[5]
Propagate: If a clause C ∨ l in the formula F has exactly one unassigned literal l in M, with all other literals in the clause assigned false in M, extend M with l. This rule represents the idea that a currently false clause with only one unset variable left forces that variable to be set in such a way as to make the entire clause true; otherwise the formula will not be satisfied.
Decide: If a literal l occurs in F and neither l nor ¬l is in M, then decide on the truth value of l and extend M with the decision literal l. This rule represents the idea that if you are not forced to make an assignment, you must choose a variable to assign and note that the assignment was a choice, so you can go back if the choice does not lead to a satisfying assignment.
Conflict: If there is a conflicting clause l₁ ∨ … ∨ lₙ in F such that the negations ¬l₁, …, ¬lₙ are all in M, set the conflict clause C to l₁ ∨ … ∨ lₙ. This rule represents detecting a conflict when all literals in a clause are assigned to false under the current assignment.
Explain: If the conflict clause C is of the form C′ ∨ ¬l, there is an antecedent clause D ∨ l in F, and the literals of D are assigned before ¬l in M, then explain the conflict by resolving C with the antecedent clause, yielding the new conflict clause C′ ∨ D. This rule explains the conflict by deriving a new conflict clause that is implied by the current conflict clause and a clause that caused the assignment of a literal in the conflict clause.
Backjump: If the conflict clause is of the form C′ ∨ ¬l, where every literal in C′ was assigned at a decision level below that of ¬l, then backjump to the decision level max{level(k) : k ∈ C′}, assign ¬l there, and clear the conflict clause. This rule performs non-chronological backtracking by jumping back to a decision level implied by the conflict clause and asserting the negation of the literal that caused the conflict at a lower decision level.
Learn: Learned (conflict) clauses can be added to the formula F. This rule represents the clause learning mechanism of CDCL solvers, where conflict clauses are added back to the clause database to prevent the solver from making the same mistake again in other branches of the search tree.
These six rules are sufficient for basic CDCL, but modern SAT solver implementations usually add further heuristic-controlled rules in order to traverse the search space more efficiently and solve SAT problems faster.
Forget: Learned clauses C with F \ {C} ⊨ C can be removed from the formula F to save memory. This rule represents the clause forgetting mechanism, where less useful learned clauses are removed to control the size of the clause database; F \ {C} ⊨ C denotes that the formula without the clause C still implies C, meaning C is redundant.
Restart: The solver can be restarted by resetting the assignment M to the empty assignment and clearing the conflict clause. This rule represents the restart mechanism, which allows the solver to jump out of a potentially unproductive part of the search space and start over, often guided by the learned clauses. Note that learned clauses are retained across restarts, ensuring termination of the algorithm.
Visualization
Conflict-driven clause learning works as follows.
- Select a variable and assign True or False. This is called the decision state. Remember the assignment.
- Apply Boolean constraint propagation (unit propagation).
- Build the implication graph.
- If there is any conflict
- Find the cut in the implication graph that led to the conflict
- Derive a new clause which is the negation of the assignments that led to the conflict
- Non-chronologically backtrack ("back jump") to the appropriate decision level, where the first-assigned variable involved in the conflict was assigned
- Otherwise continue from step 1 until all variable values are assigned.
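The loop above can be sketched as a small runnable solver. This sketch simplifies real CDCL in two deliberate ways: it learns the negation of all current decisions rather than a clause derived from an implication-graph cut, and it restarts from scratch instead of backjumping non-chronologically; all names are illustrative.

```python
# Simplified CDCL-style loop: decide, propagate, learn from conflicts.
# Clauses are lists of signed ints (positive = variable is true).
def cdcl(clauses):
    variables = {abs(lit) for clause in clauses for lit in clause}
    learned = []  # learned clauses persist across restarts

    def value(lit, model):
        v = model.get(abs(lit))
        return None if v is None else (v if lit > 0 else not v)

    def propagate(model, trail):
        changed = True
        while changed:
            changed = False
            for clause in clauses + learned:
                vals = [value(lit, model) for lit in clause]
                if True in vals:
                    continue  # clause already satisfied
                free = [l for l, v in zip(clause, vals) if v is None]
                if not free:
                    return False  # all literals false: conflict
                if len(free) == 1:
                    model[abs(free[0])] = free[0] > 0
                    trail.append((free[0], False))  # forced, not a decision
                    changed = True
        return True

    model, trail = {}, []
    while True:
        if not propagate(model, trail):
            decisions = [lit for lit, is_decision in trail if is_decision]
            if not decisions:
                return None  # conflict without decisions: UNSAT
            # Learn the negation of the decisions that led here, then
            # restart (real CDCL backjumps and learns a 1-UIP clause).
            learned.append([-lit for lit in decisions])
            model, trail = {}, []
        elif len(model) == len(variables):
            return model  # every variable assigned: SAT
        else:
            var = min(v for v in variables if v not in model)
            model[var] = True  # decision: try True first
            trail.append((var, True))
```

On the unsatisfiable pair of clauses x1 and ¬x1 the sketch reports UNSAT from propagation alone; learned clauses accumulate only when decisions are involved in a conflict.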
Example
A visual example of the CDCL algorithm:[4]
- At first pick a branching variable, namely x1. A yellow circle means an arbitrary decision.
- Now apply unit propagation, which yields that x4 must be 1 (i.e. True). A gray circle means a forced variable assignment during unit propagation. The resulting graph is called an implication graph.
- Arbitrarily pick another branching variable, x3.
- Apply unit propagation and find the new implication graph.
- Here the variables x8 and x12 are forced to be 0 and 1, respectively.
- Pick another branching variable, x2.
- Find the implication graph.
- Pick another branching variable, x7.
- Find the implication graph.
- Found a conflict!
- Find the cut that led to this conflict. From the cut, find a conflicting condition.
- Take the negation of this condition and make it a clause.
- Add the conflict clause to the problem.
- Non-chronologically back jump to the appropriate decision level, which in this case is the second highest decision level of the literals in the learned clause.
- Back jump and set variable values accordingly.
Completeness
DPLL is a sound and complete algorithm for SAT; that is, a formula ϕ is satisfiable if, and only if, DPLL can find a satisfying assignment for ϕ. CDCL SAT solvers implement DPLL, but can learn new clauses and backtrack non-chronologically. Clause learning with conflict analysis affects neither soundness nor completeness. Conflict analysis identifies new clauses using the resolution operation. Therefore, each learnt clause can be inferred from the original clauses and other learnt clauses by a sequence of resolution steps. If cN is the new learnt clause, then ϕ is satisfiable if and only if ϕ ∪ {cN} is also satisfiable. Moreover, the modified backtracking step also does not affect soundness or completeness, since backtracking information is obtained from each new learnt clause.[6]
Applications
The main application of the CDCL algorithm is in SAT solvers, including:
- MiniSAT
- Zchaff SAT
- Z3
- Glucose[7]
- ManySAT
The CDCL algorithm has made SAT solvers so powerful that they are being used effectively in several[citation needed] real-world application areas, such as AI planning, bioinformatics, software test pattern generation, software package dependency resolution, hardware and software model checking, and cryptography.
Related algorithms
Related algorithms to CDCL are the Davis–Putnam (DP) algorithm and the DPLL algorithm. The DP algorithm uses resolution refutation and has a potential memory explosion problem.[citation needed] While the DPLL algorithm performs adequately on randomly generated instances, it performs poorly on instances generated in practical applications. CDCL is a more powerful approach for such problems, in that applying CDCL searches less of the state space than DPLL.
- DPLL: no learning and chronological backtracking.
- CDCL: conflict-driven clause learning and non-chronological backtracking.
Works cited
- ^ J.P. Marques-Silva; Karem A. Sakallah (November 1996). "GRASP-A New Search Algorithm for Satisfiability". Digest of IEEE International Conference on Computer-Aided Design (ICCAD). pp. 220–227. CiteSeerX 10.1.1.49.2075. doi:10.1109/ICCAD.1996.569607. ISBN 978-0-8186-7597-3.
- ^ J.P. Marques-Silva; Karem A. Sakallah (May 1999). "GRASP: A Search Algorithm for Propositional Satisfiability" (PDF). IEEE Transactions on Computers. 48 (5): 506–521. doi:10.1109/12.769433. Archived from the original (PDF) on 2016-03-04. Retrieved 2014-11-29.
- ^ Roberto J. Bayardo Jr.; Robert C. Schrag (1997). "Using CSP look-back techniques to solve real world SAT instances" (PDF). Proc. 14th Nat. Conf. on Artificial Intelligence (AAAI). pp. 203–208.
- ^ a b In the pictures below, "+" is used to denote "or", multiplication to denote "and", and a postfix "'" to denote "not".
- ^ "Wayback Machine" (PDF). mathweb.ucsd.edu. Archived from the original (PDF) on 2024-05-19. Retrieved 2025-10-02.
- ^ Marques-Silva, Joao; Lynce, Ines; Malik, Sharad (February 2009). Handbook of Satisfiability (PDF). IOS Press. p. 138. ISBN 978-1-60750-376-7.
- ^ "Glucose's home page".
References
- Martin Davis; Hilary Putnam (1960). "A Computing Procedure for Quantification Theory". J. ACM. 7 (3): 201–215. doi:10.1145/321033.321034. S2CID 31888376.
- Martin Davis; George Logemann; Donald Loveland (Jul 1962). "A machine program for theorem-proving". Communications of the ACM. 5 (7): 394–397. doi:10.1145/368273.368557. hdl:2027/mdp.39015095248095. S2CID 15866917.
- Matthew W. Moskewicz; Conor F. Madigan; Ying Zhao; Lintao Zhang; Sharad Malik (2001). "Chaff: engineering an efficient SAT solver" (PDF). Proc. 38th Ann. Design Automation Conference (DAC). pp. 530–535.
- Lintao Zhang; Conor F. Madigan; Matthew H. Moskewicz; Sharad Malik (2001). "Efficient conflict driven learning in a boolean satisfiability solver" (PDF). Proc. IEEE/ACM Int. Conf. on Computer-aided design (ICCAD). pp. 279–285.
- Presentation – "SAT-Solving: From Davis-Putnam to Zchaff and Beyond" by Lintao Zhang. (Several pictures are taken from his presentation)
Conflict-driven clause learning
Fundamentals of Boolean Satisfiability
Propositional Logic and CNF Formulas
Propositional logic, also known as Boolean logic, forms the foundational framework for the Boolean satisfiability (SAT) problem addressed by conflict-driven clause learning (CDCL) solvers. It consists of propositional variables, or atoms, typically denoted x1, x2, …, xn, which can take truth values true (1) or false (0). A literal is either a variable (positive literal) or its negation (negative literal). A clause is a disjunction of one or more literals, such as x1 ∨ ¬x2 ∨ x3, which is satisfied if at least one literal is true. A propositional formula is then a conjunction of clauses, representing the overall logical expression.[1] Conjunctive normal form (CNF) is a standardized representation where a formula is expressed as a conjunction of distinct clauses, i.e., φ = C1 ∧ C2 ∧ … ∧ Cm. SAT problems are typically solved in CNF because this form enables efficient propagation and conflict detection mechanisms in solvers, such as unit propagation, which simplifies clauses by assigning values that force literals to true or false. For instance, the formula (x1 ∨ ¬x2) ∧ (x2 ∨ x3) is already in CNF and is satisfiable under the assignment x1 = 1, x2 = 1, x3 = 1. General propositional formulas, which may include negations, conjunctions, or disjunctions in arbitrary nesting, can be converted to an equisatisfiable CNF using transformations like the Tseitin encoding, which introduces auxiliary variables to preserve satisfiability while linearizing the structure into clauses.[1] CDCL operates exclusively on CNF-SAT instances, where the objective is to determine whether there exists a truth assignment σ to the variables such that every clause is satisfied, or to prove unsatisfiability if no such assignment exists. This restriction to CNF allows CDCL to leverage clause-based learning and backtracking directly on the disjunctive structure.[1][4]
The SAT Problem
The Boolean satisfiability problem (SAT) is the canonical decision problem in computational complexity theory, central to the study of propositional logic satisfiability. It involves determining whether a given Boolean formula, typically encoded in conjunctive normal form (CNF), can be satisfied by some assignment of truth values to its variables. A CNF formula consists of a conjunction of clauses, where each clause is a disjunction of literals (variables or their negations). The problem thus asks: for a CNF formula φ over a set of Boolean variables, does there exist a truth assignment σ such that φ(σ) = true?[5] SAT holds historical significance as the first problem demonstrated to be NP-complete. In 1971, Stephen Cook proved that SAT is NP-complete via his theorem, establishing that any problem in NP can be reduced to SAT in polynomial time, and SAT itself is verifiable in polynomial time by checking a proposed assignment.[5] This result laid the foundation for the theory of NP-completeness, highlighting SAT's role in understanding the boundaries of efficient computation.[5] The NP-completeness of SAT implies that no polynomial-time algorithm is known for solving it in the worst case, leading to exponential time complexity for general instances in the absence of a breakthrough resolving the P vs. NP question.[5] Despite this theoretical hardness, SAT exhibits practical solvability for many real-world instances, where modern solvers routinely handle formulas with millions of variables and clauses efficiently due to problem structure and algorithmic advances.[6] While this entry focuses on the decision version of SAT, it is distinct from the counting variant #SAT, which seeks the total number of satisfying assignments for a CNF formula and is known to be #P-complete, a harder class capturing enumeration challenges.[7]
Unit Propagation and Implication Graphs
Unit propagation serves as the core inference mechanism in conflict-driven clause learning (CDCL) solvers, enabling efficient deduction of variable assignments from partial interpretations of conjunctive normal form (CNF) formulas. The unit clause rule states that if a clause reduces to a single unassigned literal l under a partial assignment, meaning all other literals in the clause are falsified, then l must be assigned true to satisfy the clause and preserve the satisfiability of the formula.[8] Formally, for a clause l1 ∨ … ∨ lk, if the partial assignment falsifies all literals except li, then it implies li. This rule is applied recursively: newly implied literals are propagated through the formula, potentially creating additional unit clauses, until a fixed point is reached where no further unit clauses exist.[8] The propagation process is typically implemented using a queue-based approach, such as a first-in-first-out (FIFO) queue or watched-literal scheme, to track and process unit clauses efficiently. Starting with an initial partial assignment, unit literals are enqueued and propagated by scanning relevant clauses; each propagation updates the assignment and checks for new unit clauses, continuing until the queue empties or a conflict arises. This recursive application ensures all logical consequences of the current assignment are deduced without exhaustive enumeration, and the procedure is sound, meaning it only infers assignments that are necessary for any satisfying assignment of the original formula.[4] Moreover, unit propagation runs in polynomial time, linear in the total size of the formula under standard implementations, making it a computationally efficient building block for avoiding full search trees in SAT solving.[1] In CDCL, unit propagation is visualized and analyzed through the implication graph, a directed graph that captures the deductive chains produced by the process.
The nodes of the graph represent literals (variables and their negations), and a directed edge between literals arises from a unit clause implication: for a clause ¬x ∨ y, assigning x forces y, creating the edge x → y; similarly, the contrapositive adds the edge ¬y → ¬x. Paths in this graph trace sequences of implications from decisions or prior assignments, revealing how conflicts propagate and enabling clause learning by identifying cuts that explain unsatisfiability. This graphical representation not only formalizes the propagation's transitive effects but also underpins the solver's ability to prune redundant search paths effectively.[4]
Key Mechanisms in CDCL
Decision Heuristics and Branching
In conflict-driven clause learning (CDCL) solvers, the search process builds a backtracking tree by extending partial assignments of the Boolean formula, where each node represents a partial assignment and edges correspond to decisions on unset variables.[9] The solver starts with an empty assignment and iteratively selects an unassigned variable to branch on, creating two child nodes: one assigning the variable to true and the other to false, followed by unit propagation to infer implied literals in each branch.[9] Branching involves selecting a free variable x and exploring both polarities (x = true and x = false), with the choice of x guided by heuristics to prioritize promising paths.[9] Static heuristics base variable selection solely on the initial formula structure, such as choosing variables with the most literal occurrences (e.g., MOM's heuristic) to enable early constraint propagation.[10] In contrast, dynamic heuristics adapt to the search state and conflicts encountered, with the variable state independent decaying sum (VSIDS) being a seminal example that assigns activity scores to variables based on their appearance in recent original and learned clauses.[9] VSIDS initializes scores by literal occurrence frequencies and increments the scores of literals in clauses involved in conflicts, followed by periodic decay to emphasize recent activity; the solver then branches on the highest-score unassigned variable.[9] This preference for high-activity variables accelerates conflict detection by focusing the search on formula regions likely to reveal inconsistencies quickly, as empirically demonstrated on industrial benchmarks where VSIDS-enabled solvers like Chaff solved instances orders of magnitude faster than prior approaches.[9] Variants like literal block distance (LBD) refine VSIDS by incorporating the number of distinct decision levels in learned clauses, bumping variables that contribute to low-LBD (high-quality) clauses to further guide toward effective learning.[11]
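A minimal sketch of the VSIDS bookkeeping described above: bump the variables in a conflict clause, decay all activities, and branch on the most active unassigned variable. The additive bump and multiplicative decay constants are illustrative, not the values of any particular solver:

```python
# VSIDS-style activity: bump variables in a conflict clause, decay all,
# then branch on the most active unassigned variable.
BUMP, DECAY = 1.0, 0.95

def bump_and_decay(activity, conflict_clause):
    for lit in conflict_clause:
        activity[abs(lit)] = activity.get(abs(lit), 0.0) + BUMP
    for var in activity:
        activity[var] *= DECAY  # older conflicts matter less over time

def pick_branching_variable(activity, assigned):
    unassigned = [v for v in activity if v not in assigned]
    return max(unassigned, key=lambda v: activity[v]) if unassigned else None
```

Production solvers implement the decay as a periodic rescaling rather than a full sweep, but the effect on the ordering is the same.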
Heuristics such as VSIDS are empirical yet pivotal for CDCL performance, with scores updated during clause learning to integrate conflict-driven insights into future branching.[9] Additionally, polarity selection determines the order of exploring true/false branches for the chosen variable, often using a preferred-polarity heuristic that favors the last assigned value (phase saving) to reduce unnecessary flips and reuse successful partial assignments.[12]
Conflict Detection and Analysis
In conflict-driven clause learning (CDCL) solvers, a conflict arises when unit propagation, applied to the current partial assignment of variables, results in the falsification of a clause, meaning all literals in that clause evaluate to false under the assignment.[1] This detection occurs during Boolean constraint propagation (BCP), where unit propagation iteratively enforces implied assignments from unit clauses until no further implications exist or a contradiction is reached, signaling an unsatisfied clause.[9] The implication graph, a directed graph representing these propagation steps with vertices for literals and edges labeled by antecedent clauses, captures the dependencies leading to the conflict, marked by a special conflict vertex.[4] Conflict analysis begins with a backward traversal of the implication graph starting from the conflict vertex to trace the reasons for the conflicting assignments, partitioning implications by decision levels to identify the causes rooted in prior decisions.[1] This process recursively resolves antecedents, selecting literals assigned at lower decision levels, until a cut in the graph is found that explains the conflict through a set of implying clauses.[4] The analysis enables non-chronological backtracking by revealing how decisions propagate to contradictions, allowing the solver to jump back to an earlier level rather than the immediate parent.[9] A key concept in this analysis is the first unique implication point (FUIP), the current-level literal closest to the conflict through which every implication path from the current decision to the conflict passes.[1] The traversal identifies the FUIP by continuing resolution until the derived clause contains exactly one literal assigned at the current decision level, ensuring the explanation is as assertive as possible.[9] For conflicting literals x and ¬x, if there are implication paths from a literal u to both x and ¬x, a clause containing ¬u is obtained by successive resolution along these paths, cutting the graph at the first UIP to yield a learned clause that blocks the conflicting subtree.[4] This FUIP-based cutting prunes the search space effectively, as demonstrated in early implementations where it reduced the size of learned explanations and improved backjump levels compared to full resolution approaches.[9] The resulting clause from the analysis directly implies the negation of the conflicting assignment upon re-propagation, facilitating deeper insights into the formula's structure.[1]
Clause Learning and Forgetting
In conflict-driven clause learning (CDCL), clause learning derives new clauses from conflicts identified during unit propagation, adding them to the clause database to prune the search space and avoid redundant computations.[13] This process builds on conflict analysis by traversing the implication graph backward from the conflict to identify reasons for assignments, ultimately producing a learned clause that explains the conflict.[14] The core of clause learning involves resolving clauses along a cut in the implication graph defined by a unique implication point (UIP), which is a literal at the current decision level through which all paths from the decision literal to the conflict pass.[14] Typically, the first UIP (1-UIP) closest to the conflict is selected, ensuring the resulting clause is asserting, meaning it contains exactly one literal from the current decision level and forces the negation of that literal upon backtracking.[14] To derive this clause, binary resolution is applied iteratively between the conflicting clauses and antecedent clauses (reasons for implications), eliminating pivot literals until the UIP cut is reached; this yields a clause that subsumes the reasons leading to the conflict.[13] For example, resolving a unit clause implying a pivot with another clause sharing the negated pivot reduces the clause size, strengthening the learned clause.[14] Learned clauses are sound because they are logically implied by the original formula, as the resolution steps preserve satisfiability; they shorten resolution proofs by capturing multi-step implications in a single clause, thereby guiding future unit propagation to detect conflicts earlier.[13] This mechanism was first introduced in the GRASP solver, where conflict-induced clauses enable non-chronological backtracking.[13] Modern solvers like MiniSat extensively employ 1-UIP learning with additional minimization techniques, such as removing redundant literals via subsumption checks against antecedent clauses, to further enhance efficiency.[15] To manage the growing clause database and prevent performance degradation from excessive clauses, CDCL solvers implement forgetting strategies.[16] One approach is restarting, which resets the search state and often involves partial removal of learned clauses to mitigate long-tailed runtime distributions caused by accumulated clauses leading solvers astray.[16] Alternatively, selective clause deletion targets less useful clauses based on activity heuristics, such as tracking how recently a clause was involved in conflicts (with decay over time) or metrics like literal block distance, ensuring the database remains compact while retaining high-impact clauses.[15] In MiniSat, for instance, learned clauses are aggressively deleted when their activity falls below thresholds, dynamically adjusting limits after restarts to balance exploration and exploitation.[15]
The CDCL Algorithm
High-Level Procedure
Conflict-driven clause learning (CDCL) extends the Davis–Putnam–Logemann–Loveland (DPLL) algorithm by incorporating clause learning from conflicts to guide the search more efficiently.[1] The process begins with an empty partial assignment to the variables in the conjunctive normal form (CNF) formula. Unit propagation is then applied to deduce implied literal assignments until no further units exist or a conflict arises. If variables remain unassigned without conflict, a decision heuristic selects an undecided variable for branching, assigning it a value and increasing the decision level.[1] The core loop of CDCL operates as follows: while the assignment is partial and no conflict is detected, perform unit propagation followed by a decision if necessary; upon detecting a conflict during propagation or evaluation, conduct conflict analysis to derive a learned clause, add it to the formula, and backtrack.[1] This backtracking is non-chronological, jumping to the highest decision level where the learned clause becomes asserting (i.e., unit under the current partial assignment), often the first unique implication point (UIP), which prunes the search space more effectively than DPLL's chronological backtracking.[1] Historically, CDCL evolved from earlier solvers like RELSAT (Bayardo and Schrag, 1997), which introduced look-back techniques for learning and non-chronological backtracking in SAT solving,[17] and GRASP, which integrated clause learning with implication graphs to exploit propagation structures.[13] The approach was standardized and optimized in the Chaff solver, which popularized key engineering choices like watched literals and the VSIDS heuristic.[9] A high-level pseudocode outline of CDCL is:

function CDCL(φ):
    σ ← empty assignment
    dl ← 0
    while true:
        propagate_units(φ, σ)
        if conflict_detected(σ):
            (learned_clause, backjump_level) ← analyze_conflict(σ)
            if backjump_level < 0:
                return UNSAT
            add learned_clause to φ
            backtrack_to(σ, backjump_level)
        elif all_variables_assigned(σ):
            return SAT
        else:
            var ← select_branching_variable(σ)
            dl ← dl + 1
            assign(σ, var, true) // or false for the other branch
Detailed Steps and Pseudocode
The conflict-driven clause learning (CDCL) algorithm formalizes the search process for solving Boolean satisfiability problems in conjunctive normal form (CNF) through iterative unit propagation, decision making, conflict analysis, clause learning, and backjumping. The procedure maintains a partial assignment σ to variables, a current decision level dl, and an implication graph G derived from propagation, where edges represent implications from clauses. Unit propagation is performed using the implication graph to enforce implied literals until a fixpoint is reached or a conflict arises, indicating an empty clause.[4][18] If a conflict occurs during propagation, the algorithm invokes conflict analysis on the implication graph G to derive a learned clause C that explains the conflict, typically via resolution steps until reaching the first unique implication point (UIP). The learned clause C is then added to the formula φ, and the algorithm backjumps to the asserting level β, which is the maximum decision level among the literals in C excluding the asserting literal k (the literal in C that becomes unit after backjumping and forces a unique assignment). This backjump undoes assignments beyond level β, potentially asserting k at that level to avoid the conflict.[4][18] If no conflict arises and the formula is not fully assigned, the algorithm selects an unassigned variable for branching, increments the decision level dl, and assigns a value (typically true first, per dynamic heuristics), then resumes propagation. The process repeats until either all variables are assigned (SAT, verified by satisfying all clauses) or a conflict at decision level 0 is detected (UNSAT). 
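The asserting level described above (the maximum decision level among the learned clause's literals, excluding the asserting literal k) can be computed directly; the dictionary-based representation here is an assumption for illustration:

```python
# Backjump level: highest decision level in the learned clause after
# excluding the asserting literal; 0 if the learned clause is unit.
def backjump_level(learned_clause, level, asserting_literal):
    rest = [lit for lit in learned_clause if lit != asserting_literal]
    return max((level[abs(lit)] for lit in rest), default=0)
```

A unit learned clause thus sends the solver back to decision level 0, where its single literal is asserted unconditionally.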
Implication graphs are used in propagation to track reasons for assignments efficiently.[4][18] The backjump level β is computed as β = max{level(l) : l ∈ C \ {k}}, where level(l) denotes the decision level of literal l, and k is the asserting literal in the learned clause C.[4] The full CDCL algorithm can be expressed in procedural pseudocode as follows, incorporating key subroutines for propagation, conflict analysis, learning, and backtracking:

function CDCL(φ, σ): // φ: CNF formula, σ: partial assignment
    dl ← 0
    while true:
        res ← propagate(φ, σ) // Unit propagation until fixpoint or conflict
        if res = CONFLICT:
            C, β ← analyze_conflict(G) // Analyze implication graph G for clause C and level β
            if β < 0: // Conflict at root level
                return UNSAT
            learn_clause(φ, C) // Add learned clause C to φ
            backtrack(σ, β) // Backjump to level β, assert k if unit
        elif all variables assigned in σ:
            return SAT
        else:
            x ← select_variable(σ) // Branching variable
            dl ← dl + 1
            σ ← σ ∪ {(x, true)} // Tentative assignment at new level
            // Propagate will handle implications in next iteration

function propagate(φ, σ):
    while there exists a unit clause in φ under σ:
        assign the implied literal to σ, update G with implications
        if there exists an empty clause:
            return CONFLICT
    return NO_CONFLICT

function analyze_conflict(G):
    // Traverse G backward from conflict literals, resolve until first UIP
    C ← derive_learned_clause_via_resolution(G)
    k ← find_asserting_literal(C, σ) // Literal that will be unit after backjump
    β ← max{level(l) for l in C \ {k}}
    return C, β

function learn_clause(φ, C):
    φ ← φ ∪ {C} // Add to clause database (with possible forgetting later)

function backtrack(σ, β):
    undo assignments in σ with level > β
    dl ← β
    if exists asserting literal k at level β:
        assign k to σ
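The analyze_conflict subroutine can be made concrete with a 1-UIP resolution loop. This sketch assumes the propagation engine supplies the trail (assigned literals in order), a variable-to-level map, and an antecedent map from each forced literal to the clause that implied it; all of these names are illustrative:

```python
# First-UIP conflict analysis: starting from a falsified clause, resolve
# against the antecedent of the most recently assigned current-level
# literal until exactly one current-level literal remains.
def analyze_1uip(conflict_clause, trail, level, antecedent, current_level):
    clause = set(conflict_clause)
    while sum(1 for lit in clause if level[abs(lit)] == current_level) > 1:
        for lit in reversed(trail):  # most recent assignments first
            if (-lit in clause and level[abs(lit)] == current_level
                    and lit in antecedent):
                clause.discard(-lit)  # remove the pivot pair and merge
                clause |= {l for l in antecedent[lit] if l != lit}
                break
        else:
            break  # defensive stop on malformed input
    return clause
```

For instance, with decision x1 at level 1 forcing x2 and x3 via the clauses ¬x1 ∨ x2 and ¬x1 ∨ x3, the falsified clause ¬x2 ∨ ¬x3 resolves down to the asserting clause ¬x1.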