
Introduction

This is the first of a planned series of three papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high-performance solvers such as ZCHAFF [Moskewicz et al. 2001]. Many Boolean satisfiability problems incorporate a rich structure that reflects properties of the domain from which the problems themselves arise, and ZAP includes a representation language that allows this structure to be described and a proof engine that exploits the structure to improve performance. This first paper describes the work on which ZAP itself is based, and is intended to be a survey of existing results that attempt to use problem structure to improve the performance of satisfiability engines. The results we discuss include generalizations from Boolean satisfiability to include cardinality constraints, pseudo-Boolean representations, symmetry, and a limited form of quantification. The second paper in this series [Dixon et al. 2003b] describes the theoretical generalization that subsumes and extends these ideas, and the third paper [Dixon et al. 2003a] describes the ZAP system itself.

Our intention is to review work on satisfiability from the introduction of the Davis-Putnam-Logemann-Loveland algorithm [Davis et al. 1962] to the present day. We will not review all of this work (thankfully), but only that portion that can be thought of as an attempt to improve the performance of systematic methods by exploiting the general structure of the problem in question.

We therefore include a description of recent work extending the language of Boolean satisfiability to include a restricted form of quantification [Ginsberg & Parkes 2000] or pseudo-Boolean constraints [Barth 1995, Barth 1996, Chandru & Hooker 1999, Dixon & Ginsberg 2000, Hooker 1988, Nemhauser & Wolsey 1988]; in each case, the representational extension corresponds to the existence of structure that is hidden by the Boolean axiomatization. We discuss at length the interplay between the desire to speed search by exploiting structure and the danger of slowing search by using unwieldy representations. Somewhat surprisingly, we will see that the most effective representational extensions appear to incur little or no overhead as a result of implementation concerns. It was this observation - that better representations can have better implementations - that led us to search for the general sort of structure that ZAP exploits.

We will also discuss the attempts that have been made to exploit the symmetrical structure of some satisfiability problems [Brown et al. 1988, Crawford 1992, Crawford et al. 1996, Joslin & Roy 1997, Krishnamurthy 1985, Puget 1993]. This work appears to have had only a modest impact on the development of satisfiability engines generally, and we explain why: Most authors [Crawford 1992, Crawford et al. 1996, Joslin & Roy 1997, Krishnamurthy 1985, Puget 1993] exploit only global symmetries, and such symmetries are vanishingly rare in naturally occurring problems. The methods that have been described for exploiting local or emergent symmetries [Brown et al. 1988, Szeider 2003] incur unacceptable computational overhead at each node of the search. Our general arguments regarding the interplay between representation and search suggest that one should identify local symmetries when a problem is formulated, and then exploit those symmetries throughout the search.

We will not discuss heuristic search or any of the substantial literature relating to it. To our collective eye, just as a Boolean axiomatization can obscure the natural structure in a search problem, so can heuristics. We have argued elsewhere [Ginsberg & Geddis 1991] that domain-dependent search control rules can never be more than a poor man's stand-in for general principles based on problem structure. Our selection of survey material reflects this bias.

We have remarked that this entry in the ZAP series is a survey paper; to the extent that there is a research contribution, it is in the overall (and, we believe, novel) focus we are taking. Our basic view is that the target of new work in this area should not be a specific representational extension such as a pseudo-Boolean or first-order encoding, but a direct exploitation of the underlying problem structure. This paper is original in describing existing work in this way, for the first time viewing the first-order and pseudo-Boolean extensions purely as structure-exploitation techniques. First-order and pseudo-Boolean representations are effective not because of their history of usefulness, but because - and to our mind only because - they allow the identification and capture of two particular types of structure inherent in many classes of problems. It is our hope that the reader views the material we are presenting here (and in the companion papers as well) in this light: Recent progress in Boolean satisfiability is best thought of in terms of structure exploitation. That is the perspective with which we approach this paper, and we hope that you, the reader, will come to share it.


But let us return to the Davis-Putnam-Logemann-Loveland procedure itself [Davis et al. 1962], which appears to have begun the body of work on the development of solvers that are sufficiently powerful to be used in practice on a wide range of problems. Descendants of the DPLL algorithm are now the solution method of choice on many such problems including microprocessor testing and verification [Biere et al. 1999, Copty et al. 2001, Velev & Bryant 2001], and are competitive in other domains such as planning [Kautz & Selman 1992].
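As a point of reference for the discussion that follows, the skeleton of DPLL can be rendered in a few lines of Python. This is our own deliberately naive sketch (integer literals in the DIMACS style, where -3 denotes the negation of variable 3), with none of the engineering refinements discussed later:

```python
def simplify(clauses, lit):
    """Remove clauses satisfied by lit and delete the literal -lit."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        out.append([l for l in c if l != -lit])
    return out

def unit_propagate(clauses, assignment):
    """Repeatedly assign variables forced by unit clauses.
    Returns (None, assignment) if a clause becomes empty (conflict)."""
    changed = True
    while changed:
        changed = False
        for c in clauses:
            if not c:
                return None, assignment
            if len(c) == 1:
                lit = c[0]
                assignment[abs(lit)] = lit > 0
                clauses = simplify(clauses, lit)
                changed = True
                break
    return clauses, assignment

def dpll(clauses, assignment=None):
    """Return a satisfying assignment {var: bool}, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    clauses, assignment = unit_propagate(clauses, assignment)
    if clauses is None:          # conflict
        return None
    if not clauses:              # every clause satisfied
        return assignment
    var = abs(clauses[0][0])     # naive branching heuristic
    for value in (True, False):
        trial = dict(assignment)
        trial[var] = value
        result = dpll(simplify(clauses, var if value else -var), trial)
        if result is not None:
            return result
    return None
```

The three ingredients visible here - propagation, branching, and backtracking - are precisely the ones whose refinement occupies the rest of this survey.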

We will return to the pure algorithmics of DPLL and its successors shortly, but let us begin by noting that in spite of impressive engineering successes on many difficult problems, there are many easy problems with which Boolean satisfiability engines struggle. These include problems involving parity, the well known "pigeonhole problem" (stating that you cannot put $n+1$ pigeons into $n$ holes if each pigeon needs its own hole), and problems that are described most naturally using a first-order as opposed to a ground representation.
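To make the pigeonhole example concrete, the following sketch (with a variable numbering of our own choosing, purely for illustration) generates its standard clausal encoding; note how the pairwise "no shared hole" clauses dominate the encoding size:

```python
from itertools import combinations

def pigeonhole_cnf(n):
    """Clauses asserting that n+1 pigeons fit into n holes; variable
    v(i, j) means "pigeon i sits in hole j"."""
    def v(i, j):
        return i * n + j + 1              # illustrative variable numbering

    clauses = []
    for i in range(n + 1):                # every pigeon gets some hole
        clauses.append([v(i, j) for j in range(n)])
    for j in range(n):                    # no two pigeons share a hole
        for i, k in combinations(range(n + 1), 2):
            clauses.append([-v(i, j), -v(k, j)])
    return clauses

# n+1 "pigeon" clauses plus n * C(n+1, 2) exclusion clauses:
print(len(pigeonhole_cnf(10)))            # 11 + 10 * 55 = 561
```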

In all of these cases, there is structure to the problem being solved and this structure is lost when the ground encoding is built. While it is a testament to the power of Boolean methods that they can solve large and difficult problems without access to this underlying structure, it seems reasonable to expect that incorporating and using the structure would improve performance further. Our survey of techniques that improve DPLL will suggest that this is in fact the case, since all of the techniques that underpin the performance of modern Boolean satisfiability engines can be well understood in this way.

Before turning to the details of this analysis, however, let us flesh out our framework a bit more. If any computational procedure is to have its performance improved, there seem to be only three ways in which this can be done:

  1. Replace the algorithm.
  2. Reduce the time spent on a single iteration of the inner loop.
  3. Reduce the number of times the inner loop must be executed.
The first approach is not our focus here; while there are many potential competitors to DPLL, none of them seems to outperform DPLL in practice. Work on the second approach has historically focused on reducing the amount of time spent in unit propagation, which does indeed appear to represent the inner loop of most DPLL implementations.
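The best-known technique of this kind is the two-watched-literal scheme popularized by ZCHAFF. The sketch below is our own much-simplified rendering of the idea, not the actual ZCHAFF data structures: each clause watches two of its literals and is re-examined only when a watched literal becomes false, so most assignments touch very few clauses.

```python
from collections import defaultdict

class TwoWatch:
    """Simplified two-watched-literal propagation (illustrative only)."""

    def __init__(self, clauses):
        self.clauses = [list(c) for c in clauses]
        self.watch = defaultdict(list)    # literal -> clauses watching it
        for idx, c in enumerate(self.clauses):
            self.watch[c[0]].append(idx)
            self.watch[c[1] if len(c) > 1 else c[0]].append(idx)

    def _value(self, lit, assignment):
        v = assignment.get(abs(lit))
        return None if v is None else v == (lit > 0)

    def assign_true(self, lit, assignment):
        """Record lit as true.  Returns the literals now implied by unit
        clauses, or None if some clause became empty (a conflict)."""
        assignment[abs(lit)] = lit > 0
        implied = []
        falsified = -lit
        for idx in list(self.watch[falsified]):
            clause = self.clauses[idx]
            for cand in clause:
                # look for a replacement watch: unwatched and not false
                if cand == falsified or idx in self.watch[cand]:
                    continue
                if self._value(cand, assignment) is not False:
                    self.watch[falsified].remove(idx)
                    self.watch[cand].append(idx)
                    break
            else:
                # no replacement: the clause is unit or conflicting
                others = [l for l in clause
                          if l != falsified and idx in self.watch[l]]
                status = self._value(others[0], assignment) if others else False
                if status is False:
                    return None           # conflict
                if status is None:
                    implied.append(others[0])
        return implied
```

The key design point is that assigning a literal examines only the clauses watching its negation, rather than every clause containing the variable.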

There are a variety of techniques available for reducing the number of calls to the inner loop, which we will divide as follows:

  3a. Algorithmic improvements that do not require representational changes.
  3b. Algorithmic improvements requiring representational changes.

Attempts to improve DPLL without changing the underlying Boolean representation have focused on (i) the development of a mechanism for retaining information developed in one portion of the search for subsequent use (caching or, more commonly, learning) and (ii) the development of search heuristics that can be expected to reduce the size of the space being examined. In all such work, however, using DPLL to analyze problems where no solution exists is equivalent to building a resolution proof of the unsatisfiability of the underlying theory, so that the number of inference steps is bounded below by the number of inferences in the shortest such proof [Mitchell 1998]. These lengths are generally exponential in problem size in the worst case.

Work involving representation change can overcome this difficulty by reducing the length of the proof that DPLL or a similar algorithm is implicitly trying to construct. Representations are sought for which certain problems known to require proofs of exponential length in the Boolean case admit proofs of polynomial length after the representational shift is made. This leads to a hierarchy of representational choices, where one representation $r_1$ is said to polynomially simulate or $p$-simulate another $r_2$ if proofs using the representation $r_2$ can be converted to proofs using $r_1$ in polynomial time. In general, representations that lead to efficient proofs do so via more efficient encodings, so that a single axiom in the improved representation corresponds to exponentially many in the original. There are many excellent surveys of the proof complexity literature [Beame & Pitassi 2001, Pitassi 2002, Urquhart 1995], and we will generally not repeat that material here.

Of course, it is not sufficient to simply improve the epistemological adequacy of a proof system; its heuristic adequacy must be maintained or improved as well [McCarthy 1977]. We therefore assume that any representation introduced for the purposes of navigating the $p$-simulation hierarchy also preserves the basic inference mechanism of DPLL (resolution) and maintains, and ideally builds upon, the improvements made in propagation performance (2) and learning (3a). It is in this context that our survey shall consider these representational changes.
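As a reminder of the mechanism in question, resolution itself is essentially a one-line operation; here is a sketch on clauses represented as Python sets of integer literals (a representation of our own choosing):

```python
def resolve(c1, c2, var):
    """Resolve clauses c1 and c2 (sets of integer literals) on var:
    from (A or var) and (B or not-var), conclude (A or B)."""
    assert var in c1 and -var in c2
    return (c1 - {var}) | (c2 - {-var})

# Resolving (x1 v x2) with (-x1 v x3) on x1 yields (x2 v x3):
print(resolve({1, 2}, {-1, 3}, 1))        # {2, 3}
```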

The representations that we will consider are the following:

  1. Boolean satisfiability (SAT) itself;
  2. cardinality constraints;
  3. pseudo-Boolean representations;
  4. axiomatizations exploiting symmetry; and
  5. quantified propositional formulae (QPROP).

Given the arguments regarding heuristic adequacy generally, our goal in this survey is to complete the following table:

                  representational   p-simulation                unit
                  efficiency         hierarchy       inference   propagation   learning
  SAT
  cardinality
  pseudo-Boolean
  symmetry
  QPROP

The first column simply names the representational system in question. Then for each, we describe:

  1. its representational efficiency relative to the Boolean encoding;
  2. its position in the $p$-simulation hierarchy;
  3. the inference mechanism that it supports;
  4. the efficiency of unit propagation in the new setting; and
  5. the extent to which learning carries over from the Boolean case.

We cannot overstress the fact that no single column in our table is more important than the others. A reduction in proof length is of little practical value if there is no associated reduction in the amount of computation time required to find a proof. Reducing the time needed for a single inference is hardly useful if the number of inferences required grows exponentially.

The Boolean satisfiability community has pioneered many techniques used to reduce per-inference running time and to understand learning in a declarative setting. In spite of the fact that resolution and Boolean satisfiability are among the weakest inference systems in terms of representational efficiency and their position in the $p$-simulation hierarchy [Pudlak 1997], almost none of the more powerful proof systems is in wide computational use. Finding proofs in these more sophisticated settings is difficult; even direct attempts to lift DPLL to a first-order setting [Baumgartner 2000] seem fraught with complexity and an inability to exploit recent ideas that have led to substantial improvements in algorithmic performance. The most usable proof systems are often not the theoretically most powerful.

This paper is organized along the rows of the table we are trying to complete. Boolean techniques are described in the next section; we recount the demonstration that the pigeonhole problem is exponentially difficult in this setting [Haken 1985]. We go on in Section 3 to discuss cardinality constraints and pseudo-Boolean methods, showing that the earlier difficulties with the pigeonhole problem can be overcome using either method but that similar issues remain in other cases. Following the descriptions of implemented pseudo-Boolean reasoning systems [Barth 1995, Dixon & Ginsberg 2000], we show that the key computational ideas from the Boolean case continue to be applicable in a pseudo-Boolean setting.
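The gain that cardinality constraints provide over clauses can be seen in miniature by simply counting clauses; the variable layout in the sketch below is entirely illustrative:

```python
from itertools import combinations

def pairwise_at_most_one(lits):
    """Pure Boolean encoding of "at most one of lits is true"."""
    return [[-a, -b] for a, b in combinations(lits, 2)]

# For the pigeonhole problem, each hole needs such a constraint over
# the n+1 pigeon variables.  With n = 10 and a variable layout of our
# own choosing, the clausal form needs:
n = 10
holes = [[i * n + j + 1 for i in range(n + 1)] for j in range(n)]
clause_count = sum(len(pairwise_at_most_one(h)) for h in holes)
print(clause_count)                       # 550 clauses
# A cardinality representation states the same thing as just n
# constraints of the form: sum of the hole-j variables <= 1.
```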

Axiomatizations that attempt to exploit symmetry directly are discussed in Section 4. We draw a distinction between approaches that require the existence of global symmetries, which tend not to exist in practice, and those that use only local ones, which exist but are difficult to find as inference proceeds.

In Section 5, we discuss axiomatizations that are Boolean descriptions of problems that are more naturally represented using quantified axioms. We discuss the problems arising from the fact that the ground theories tend to be exponentially larger than their lifted counterparts, and show that working with the first-order axiomatization directly can lead to large improvements in the efficiency of the overall system [Ginsberg & Parkes 2000].
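The blowup in question is easy to exhibit with a toy quantified axiom of our own devising (not one drawn from any particular benchmark):

```python
from itertools import product

def ground_instances(d):
    """Ground out the single quantified toy axiom
        forall x, y, z . y != z  ->  (-at(x, y) v -at(x, z))
    ("nothing is in two places at once") over a domain of size d."""
    dom = range(d)
    return [(('-at', x, y), ('-at', x, z))
            for x, y, z in product(dom, dom, dom) if y < z]

# One lifted axiom becomes d * C(d, 2) ground clauses:
print(len(ground_instances(20)))          # 20 * 190 = 3800
```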

Concluding remarks are contained in Section 6.


Matt Ginsberg 2004-02-19