Next: Inference and Learning Up: Unit Propagation Previous: Subsearch

Subsearch and quantification

As we discussed in Section 2.1, efficient implementations of SAT solvers go to great lengths to minimize the amount of time spent solving subsearch problems. While the watched literal idea is the most efficient mechanism known here, we will discuss the problem in terms of a simpler scheme that maintains and updates poss and curr counts for each clause. As discussed earlier, this scheme is about half as fast as the watched literal approach, and the general arguments that we will make can be expected to lead to more than constant-factor improvements.

For notational convenience in what follows, suppose that $C$ is a quantified theory and that $l$ is a ground literal. By $C_l$ we will mean that subset of the clauses in $C$ that include terms of which $l$ is an instance. If $C$ contains quantified clauses, then $C_l$ will as well; the clauses in $C_l$ can be found by matching the literal $l$ against the clauses in $C$.

As discussed in Section 2.1, it is possible to compute $S_u^s(C,P)$ once during an initialization phase, and then update it incrementally. In terms of Definition 5.1, the update rule might be one such as

\begin{displaymath}S_0^0(C,P') = S_0^0(C,P) \cup S_1^0(C_{\neg l},P)\end{displaymath}

if the literal $l$ is changed from unvalued to true. $P'$ here is the partial assignment after the update; $P$ is the assignment before. To compute the number of fully assigned but unsatisfied clauses after the update, we start with the number before, and add newly unsatisfied clauses (unsatisfied clauses previously containing the single unvalued literal $\neg l$).
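The update rule above can be sketched in code. The following is a minimal illustration, not the paper's implementation; the data-structure names (`poss`, `curr`, `clauses_containing`) follow the count-based scheme described earlier, and literals are represented as signed integers.

```python
def set_literal_true(l, clauses_containing, poss, curr, unsat):
    """Update per-clause counts after literal l is assigned true.

    clauses_containing[x] -- ids of clauses in which literal x appears (C_x)
    poss[c] -- number of literals in clause c not yet valued false
    curr[c] -- number of literals in clause c currently valued true
    unsat   -- set of fully assigned, unsatisfied clauses, i.e. S_0^0(C, P)

    Only the clauses in C_l and C_{neg l} need to be touched: a clause can
    join S_0^0 only if it contains neg l, was unsatisfied, and had neg l as
    its single unvalued literal -- exactly the set S_1^0(C_{neg l}, P).
    """
    for c in clauses_containing[l]:       # l now satisfies these clauses
        curr[c] += 1
    for c in clauses_containing[-l]:      # neg l becomes false in these
        poss[c] -= 1
        if poss[c] == 0 and curr[c] == 0:  # clause was in S_1^0(C_{neg l}, P)
            unsat.add(c)                   # ... so it joins S_0^0(C, P')
    return unsat
```

For instance, with the single clause $\neg x_1 \vee x_2$ (clause id 0), valuing $x_2$ false and then $x_1$ true drives the clause into the unsatisfied set.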

As we argued previously, reorganizing the computation in this way leads to substantial speedups because the subsearch problem being solved is no longer NP-complete in the size of $C$, but only in the size of $C_l$ or $C_{\neg l}$. These incremental techniques are essential to the performance of modern search implementations because the runtime of these implementations is dominated by the time spent in propagation (i.e., subsearch).

Given that the subsearch computation time is potentially exponential in the size of the subtheory $C_l$ when the literal $l$ is valued or unvalued, let us now consider the questions of how much of a concern this is in practice, and of what (if anything) can be done about it. After all, one of the primary lessons of recent satisfiability research is that problems that are NP-hard in theory tend strongly not to be exponentially difficult in practice.

Let us begin by noting that subsearch is not likely to be much of an issue for the randomly generated satisfiability problems that were the focus of research in the 1990s and drove the development of algorithms such as WSAT. The reason for this is that if $n$ is the number of clauses in a theory $C$ and $v$ is the number of variables in $C$, then random problems are difficult only for fairly narrow ranges of values of the ratio $n/v$ [Coarfa, Demopoulos, San Miguel Aguirre, Subramanian, & Vardi 2000]. For 3-SAT (where every clause in $C$ contains exactly three literals), difficult random problems appear at $n/v \approx 4.2$ [Kirkpatrick & Selman 1994]. For such a problem, the number of clauses in which a particular literal $l$ appears will be small (on average $3 \times 4.2 / 2 = 6.3$ for random 3-SAT). Thus the size of the relevant subtheory $C_l$ or $C_{\neg l}$ will also be small, and while subsearch cost still tends to dominate the running time of the algorithms in question, there is little to be gained by applying sophisticated techniques to reduce the time needed to examine a relative handful of clauses.

For realistic problems, the situation is dramatically different. Here is an axiom from a logistics domain encoded in SATPLAN style [Kautz & Selman 1992]:

\begin{displaymath}\mbox{\tt at}(o,l,t) \wedge \mbox{\tt duration}(l,l',dt) \wedge \mbox{\tt between}(t,t',t + dt) \rightarrow \neg\mbox{\tt at}(o,l',t') \qquad (44)\end{displaymath}

This axiom says that if an object $o$ is at location $l$ at time $t$ and it takes time $dt$ to fly from $l$ to $l'$, and $t'$ is between $t$ and $t+dt$, then $o$ cannot be at $l'$ at $t'$.

A given ground atom of the form $\mbox{\tt at}(o,l,t)$ will appear in $\vert t\vert^2\vert l\vert$ clauses of the above form, where $\vert t\vert$ is the number of time points or increments and $\vert l\vert$ is the number of locations. Even if there are only 100 of each, the $10^6$ axioms created seem likely to make computing $S_u^s(C_l,P)$ impractical.

Let us examine this computation in a bit more detail. Suppose that we do indeed have a variable $a = \mbox{\tt at}(O,L,T)$ for fixed $O$, $L$ and $T$, and that we are interested in counting the number of unit propagations that will be possible if we set $a$ to true. In other words, we want to know how many instances of (44) will be unsatisfied and have a single unvalued literal after we do so.

Existing implementations, faced with this problem (or an analogous one if WSAT or another approach is used), will now consider axioms of the form (44) for $o$, $l$ and $t$ bound and as $l'$, $t'$ and $dt$ are allowed to vary. They examine every axiom of this form and simply count the number of possible unit propagations.
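The generate-and-test computation just described can be made concrete. The sketch below is an illustration under stated assumptions, not any existing solver's code: `holds(pred, args)` is a hypothetical oracle returning `True`, `False`, or `None` (unvalued) for a ground atom, and the domains are passed in explicitly. With $\mbox{\tt at}(O,L,T)$ true, an instance of (44) permits a unit propagation exactly when two of its remaining atoms are true (so their negative literals are false) and one is unvalued.

```python
def count_unit_props_ground(O, L, T, locations, times, durations, holds):
    """Generate-and-test: enumerate every ground instance of axiom (44)
    with o, l, t bound to O, L, T, and count those permitting a unit
    propagation.  Cost is |locations| * |times| * |durations| tests.

    holds(pred, args) -> True, False, or None (unvalued atom).
    """
    count = 0
    for lp in locations:          # bindings for l'
        for tp in times:          # bindings for t'
            for dt in durations:  # bindings for dt
                atoms = [('duration', (L, lp, dt)),
                         ('between', (T, tp, T + dt)),
                         ('at', (O, lp, tp))]
                vals = [holds(p, a) for p, a in atoms]
                # The clause literals are the negations of these atoms, so
                # a unit propagation needs two atoms true (literals false)
                # and exactly one atom unvalued.
                if vals.count(None) == 1 and vals.count(True) == 2:
                    count += 1
    return count
```

Note that every one of the $\vert l'\vert \cdot \vert t'\vert \cdot \vert dt\vert$ combinations is examined, whether or not it could possibly contribute.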

The watched literal idea in isolation cannot help with this problem. If, for example, we watch only the duration and between predicates in (44), we reduce by half the probability that we need to solve a subsearch problem when a particular variable is valued, but in those cases where the problem is encountered, it is as fierce as ever.

The existing approach to solving subsearch problems is taken because existing systems use not quantified clauses such as (44), but the set of ground instances of those clauses. Computing $S_u^s(C,P)$ for ground $C$ involves simply checking each axiom individually; indeed, once the axiom has been replaced by its set of ground instances, no other approach seems possible.

Set against the context of a quantified axiom, however, this seems inappropriate. Computing $S_u^s(C,P)$ for a quantified $C$ by reducing $C$ to a set of ground clauses and then examining each is equivalent to solving the original NP-complete problem by generate and test, and if there is one thing that we can state with confidence about NP-complete problems, it is that generate and test is not in general an effective way to solve them.

Returning to our example with $\mbox{\tt at}(O,L,T)$ true, we are looking for variable bindings for $l'$, $dt$ and $t'$ such that, amongst $\neg\mbox{\tt duration}(L,l',dt)$, $\neg\mbox{\tt between}(T,t',T + dt)$ and $\neg\mbox{\tt at}(O,l',t')$, precisely two of these literals are false and the third is unvalued. Proposition 5.2 suggests that subsearch will be exponentially hard (with respect to the number of quantifiers) in the worst case, but what is it likely to be like in practice?

In practice, things are going to be much better. Suppose that for some possible destination $l'$, we know that $\mbox{\tt duration}(L,l',dt)$ is false for all $dt$ except some specific value $D$. We can immediately ignore all bindings for $dt$ except $dt=D$, reducing the size of the subsearch space by a factor of $\vert t\vert$. Note that if $D$ depends on previous choices made in the search (aircraft loads, etc.), it is impossible to perform this analysis in advance and thereby remove the unnecessary bindings from the ground theory; the savings can only be realized at subsearch time.

Pushing this example somewhat further, suppose that $D$ is so small that $T+D$ is the time point immediately after $T$. In other words, $\mbox{\tt between}(T,t',T + D)$ will always be false, so that $\neg\mbox{\tt between}(T,t',T + D)$ will always be true and no unit propagation will be possible for any value of $t'$ at all. We can ``backtrack'' away from the unfortunate choice of destination $l'$ in our (sub)search for variable bindings for which unit propagation is possible. Such backtracking is not supported by the generate-and-test subsearch philosophy used by existing implementations.
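The pruning and backtracking just described might be sketched as follows. This is an illustrative assumption-laden sketch, not the paper's implementation: it handles the common case in which `duration` and `between` are fully valued background predicates, so that the unvalued literal of a unit-propagating instance must be the `at` atom. The hypothetical index `true_durations` maps each $(L, l')$ pair to the $dt$ values for which $\mbox{\tt duration}(L,l',dt)$ is true.

```python
def count_unit_props_pruned(O, L, T, true_durations, times, holds):
    """Count unit propagations for axiom (44) with at(O,L,T) true, pruning
    the subsearch space instead of enumerating it.

    true_durations[(src, lp)] -- list of dt with duration(src, lp, dt) true
    holds(pred, args) -> True, False, or None (unvalued atom); 'duration'
    and 'between' are assumed fully valued background facts.
    """
    count = 0
    for (src, lp), dts in true_durations.items():
        if src != L:
            continue              # wrong origin: prune this destination
        for dt in dts:            # only the surviving dt bindings
            # If between(T, t', T+dt) is false for every t', no instance
            # with this (lp, dt) can unit propagate: backtrack away from
            # this binding without enumerating the at atoms at all.
            good_ts = [tp for tp in times
                       if holds('between', (T, tp, T + dt))]
            for tp in good_ts:
                if holds('at', (O, lp, tp)) is None:
                    count += 1    # two literals false, one unvalued
    return count
```

On the same data, this returns the same count as the generate-and-test version while testing only the bindings that survive the `duration` and `between` filters.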

This sort of computational savings is likely to be possible in general. For naturally occurring theories, most of the variables involved are likely to be either unvalued (because we have not yet managed to determine their truth values) or false (by virtue of the closed-world assumption [Reiter 1978], if nothing else). Domain constraints will typically be of the form $a_1 \wedge \cdots \wedge a_k \rightarrow l$, where the premises $a_i$ are variables and the conclusion $l$ is a literal of unknown sign. Unit propagation (or other likely instances of the subsearch problem) will thus involve finding a situation where at most one of the $a_i$ is unvalued, and the rest are true. If we use efficient data structures to identify those instances of relational expressions that are true, it is not unreasonable to expect that most instances of the subsearch problem will be solvable in time polynomial in the length of the clauses involved, as opposed to exponential in that length.
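One such data structure is a per-predicate index of the ground atoms currently valued true, against which the premises $a_i$ can be matched by successive indexed lookups (in effect, a relational join) rather than by enumerating the full cross-product of bindings. The sketch below is one plausible realization under that assumption; the pattern convention (variables are strings beginning with `?`) is illustrative, not taken from the paper.

```python
def bindings_with_true_premises(true_atoms, premises):
    """Find all variable bindings under which every premise is true.

    true_atoms -- dict mapping predicate name -> set of argument tuples
                  currently valued true (the per-predicate index)
    premises   -- list of (pred, pattern); each pattern entry is either a
                  constant or a variable name starting with '?'

    Each premise only extends bindings that already match the true
    instances of the earlier premises, so cost is polynomial in the sizes
    of the true-instance tables rather than in the full binding space.
    """
    bindings = [{}]
    for pred, pattern in premises:
        extended = []
        for b in bindings:
            for args in true_atoms.get(pred, ()):
                nb = dict(b)
                for p, a in zip(pattern, args):
                    if isinstance(p, str) and p.startswith('?'):
                        if nb.setdefault(p, a) != a:
                            break            # conflicts with earlier binding
                    elif p != a:
                        break                # constant mismatch
                else:
                    extended.append(nb)      # premise matched; keep binding
        bindings = extended
    return bindings
```

For a constraint $p(x) \wedge q(x,y) \rightarrow l$, only the $(x,y)$ pairs present in the true-instance tables are ever considered.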

Matt Ginsberg 2004-02-19