Next: Length-bounded Learning Up: Boolean Satisfiability Previous: Unit Propagation: The Inner

Learning and Relevance

Let us now turn to the final column of our table, considering the progress that has been made in avoiding rework in Boolean satisfiability engines. The basic idea is to avoid situations where a conventional implementation will ``thrash'', solving the same subproblem many times in different contexts.

To understand the source of the difficulty, consider an example involving a SAT problem $C$ with variables $v_{1},v_{2},\ldots,v_{100}$, and suppose also that the subset of $C$ involving only the variables $v_{50},\ldots,v_{100}$ in fact implies that $v_{50}$ must be true.

Now imagine that we have constructed a partial solution that values the variables $v_{1},\ldots ,v_{49}$, and that we initially set $v_{50}$ to false. After some amount of backtracking, we realize that $v_{50}$ must be true. Unfortunately, if we subsequently change the value of one of the $v_i$'s for $i<50$, we will ``forget'' that $v_{50}$ needs to be true and are in danger of setting it to false once again, followed by a repetition of the search that showed $v_{50}$ to be a consequence of $C$. Indeed, we are in danger of solving this subproblem not just twice, but once for each search node that we examine in the space of $v_i$'s for $i<50$.

As we have indicated, the solution to this problem is to cache the fact that $v_{50}$ needs to be true; this information is generally saved in the form of a new clause called a nogood. In this case, we record the unary clause $v_{50}$. Modifying our original problem $C$ in this way will allow us to immediately prune any subproblem for which we have set $v_{50}$ to false. This technique was introduced by Stallman and Sussman (1977) in dependency-directed backtracking.

Learning new constraints in this way can prevent thrashing. When a contradiction is encountered, the set of assignments that caused the contradiction is identified; we will call this set the conflict set. A new constraint can then be constructed that excludes the assignments in the conflict set. Adding this constraint to the problem will ensure that the faulty set of assignments is avoided in the future.
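The construction just described can be sketched in a few lines. In this illustration (an assumption for concreteness, not part of the original text), literals are encoded as signed integers in the DIMACS style, so $+v$ means variable $v$ is true and $-v$ means it is false:

```python
def nogood_from_conflict(conflict_set):
    """Build a clause excluding a faulty set of assignments.

    Literals are signed integers: +v means variable v is true, -v false.
    The learned clause is the negation of the conjunction of the
    conflicting assignments: in any future solution, at least one of
    them must be flipped.
    """
    return {-lit for lit in conflict_set}

# If the contradiction arose under a=True, b=False, d=True
# (encoded as a=1, b=2, d=4), the resulting nogood forbids that
# combination of assignments from ever being tried again:
learned = nogood_from_conflict({1, -2, 4})
print(sorted(learned))  # [-4, -1, 2]
```

A unary nogood such as the $v_{50}$ clause above is just the special case of a one-element conflict set.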

This description of learning is fairly syntactic; we can also give a more semantic description. Suppose that our partial assignment contains $\{a,\neg b,d,\neg e\}$ and that our problem contains the clauses

\begin{displaymath}
\neg a \vee b \vee c \vee \neg d \vee e
\end{displaymath} (1)

\begin{displaymath}
\neg c \vee \neg d.
\end{displaymath} (2)

The first clause unit propagates to allow us to conclude $c$; the second allows us to conclude $\neg c$. This contradiction causes us to backtrack, learning the nogood

\begin{displaymath}
\neg a \vee b \vee \neg d \vee e.
\end{displaymath} (3)

From a semantic point of view, the derived nogood (3) is simply the result of resolving the reason (1) for $\neg c$ with the reason (2) for $c$.
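The resolution step just described is easy to state concretely. The following sketch reproduces the derivation of nogood (3); the variable numbering $a=1,\ldots,e=5$ is an assumption made purely for illustration:

```python
def resolve(clause1, clause2, var):
    """Resolve two clauses (sets of signed-integer literals) on var,
    which must occur positively in clause1 and negatively in clause2."""
    assert var in clause1 and -var in clause2
    return (clause1 - {var}) | (clause2 - {-var})

# Clause (1), the reason for c:    neg a v b v c v neg d v e
# Clause (2), the reason for neg c: neg c v neg d
reason_for_c = {-1, 2, 3, -4, 5}      # a=1, b=2, c=3, d=4, e=5
reason_for_not_c = {-3, -4}

nogood = resolve(reason_for_c, reason_for_not_c, 3)
print(sorted(nogood))  # [-4, -1, 2, 5], i.e. clause (3)
```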

This is a general phenomenon. At any point, suppose that $v$ is a variable that has been set in the partial assignment $P$. If $v$'s value is the result of a branch choice, there is no associated reason. If $v$'s current value is the result of a unit propagation, however, we associate to $v$ as a reason the clause that produced the propagation. If $v$'s value is the result of a backtrack, that value must be the result of a contradiction identified for some subsequent variable $v'$ and we set the reason for $v$ to be the result of resolving the reasons for $v'$ and $\neg v'$. At any point, any variable whose value has been forced will have an associated reason, and these accumulated reasons will avoid the need to reexamine any particular portion of the search space.
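The bookkeeping described above can be sketched as follows. This is a minimal illustration under the same signed-integer encoding as before, not the author's implementation; branch variables simply never acquire an entry in the `reason` map:

```python
def unit_propagate(clauses, assignment, reason):
    """Propagate until fixpoint, recording each forced literal's reason.

    `assignment` is a set of signed-integer literals already set;
    `reason` maps a variable to the clause that forced its value.
    Returns a falsified clause if a contradiction is found, else None.
    """
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue  # clause already satisfied
            open_lits = [l for l in clause if -l not in assignment]
            if not open_lits:
                return clause           # every literal falsified: conflict
            if len(open_lits) == 1:
                forced = open_lits[0]
                assignment.add(forced)  # unit propagation
                reason[abs(forced)] = clause
                changed = True
    return None

# With a, not-b, d, not-e set (a=1, ..., e=5), clause (1) forces c and
# is recorded as its reason, after which clause (2) is falsified:
clause1, clause2 = {-1, 2, 3, -4, 5}, {-3, -4}
assignment, reason = {1, -2, 4, -5}, {}
conflict = unit_propagate([clause1, clause2], assignment, reason)
print(conflict == clause2, reason[3] == clause1)  # True True
```

Resolving the returned conflict clause against the recorded reason for $c$ then yields exactly the nogood (3) of the running example.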

Modifying DPLL to exploit the derived information requires that we include the derived clauses in the overall problem $C$, thus enabling new unit propagations and restricting subsequent search. But there is an implicit change as well.

In our earlier example, suppose that we have set the variables $v_1,\ldots,v_{49}$ in that order, and that we have learned the nogood

\begin{displaymath}
v_7 \vee \neg v_9
\end{displaymath} (4)

(presumably in a situation where $v_7$ is false and $v_9$ is true). Now as long as $v_7$ remains false and $v_9$ remains true, unit propagation will fail immediately because (4) is unsatisfied. This will allow us to backtrack directly to $v_9$ in this example. This is the semantic justification for a technique known as backjumping (Gaschnig, 1979), because the search ``jumps back'' to a variable that is relevant to the problem at hand.5
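The jump target can be read directly off the failed nogood. In this sketch, the `level_of` map recording the decision level at which each variable was set is a hypothetical piece of solver state introduced only for illustration:

```python
def backjump_target(failed_nogood, level_of):
    """Return the deepest decision level among the variables of an
    unsatisfied nogood: variables set after that level played no part
    in the failure, so the search can skip over them in one jump."""
    return max(level_of[abs(lit)] for lit in failed_nogood)

# Nogood (4) is v7 v not-v9.  With v1..v49 set in order (variable i at
# level i), a failure of (4) sends the search straight back to v9,
# skipping v10 through v49:
levels = {i: i for i in range(1, 50)}
print(backjump_target({7, -9}, levels))  # 9
```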

While attractive in theory, this technique is difficult to apply in practice. The reason is that a new nogood is learned with every backtrack; since the number of backtracks is proportional to the running time of the program, an exponential number of nogoods can be learned. This can be expected both to overtax the memory available on the system where the algorithm is running and to increase the time required for each node expansion. As the number of clauses containing any particular variable $v$ grows without bound, the unit propagation procedure 2.3 will grind to a halt.

Addressing this problem was the primary focus of work on systematic SAT solvers during the 1990s. Since it was impractical to retain all of the nogoods that had been learned, some method needed to be found that would allow a polynomial number of nogoods to be retained while others were discarded. The hope was that a relatively small number of nogoods could still be used to prune the search space effectively.6

Matt Ginsberg 2004-02-19