
Summary

             rep. eff.   p-simulation hierarchy   inference             unit propagation   learning
SAT          1           EEE                      resolution            watched literals   relevance
cardinality  exp         P?E                      not unique            watched literals   relevance
PB           exp         P?E                      unique                watched literals   + strengthening
symmetry     1           EEE$^*$                  not in P              same as SAT        same as SAT
QPROP        exp         ???                      in P using reasons    exp improvement    + first-order

As usual, there are a few points to be made.

Finally, we remark that a representation very similar to QPROP has also been used in Answer Set Programming (ASP) [Marek & Truszczynski, 1999; Niemelä, 1999] under the name ``propositional schemata'' [East & Truszczynski, 2001, 2002]. The approach used in ASP resembles existing satisfiability work, however, in that clauses are always grounded out. The potential advantages of intelligent subsearch are thus not exploited (a small sketch of the distinction appears below), although we expect that many of the motivations and results given here would also apply in ASP. In fact, ASP has many features in common with SAT.
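To make the grounding-versus-subsearch distinction concrete, the short Python sketch below contrasts eagerly grounding one propositional schema with a lazy, subsearch-style walk over variable bindings. It is only an illustration: the clause $\forall x y . \neg p(x,y) \vee q(y)$, the four-element domain, and the helper names are invented for this sketch and are not taken from any of the ASP or QPROP systems discussed.

# A minimal sketch (not drawn from any ASP or QPROP system): the clause
# forall x,y . -p(x,y) v q(y), the four-element domain, and the helper
# names below are all invented for illustration.

from itertools import product

DOMAIN = range(4)                               # hypothetical object domain
VARS = ('y', 'x')                               # binding order for the walk

# One propositional schema, forall x,y . -p(x,y) v q(y), as (sign, pred, args).
CLAUSE = [(False, 'p', ('x', 'y')), (True, 'q', ('y',))]

def ground(clause, variables, domain):
    """Eager grounding: emit all |domain|**len(variables) instances."""
    for binding in product(domain, repeat=len(variables)):
        env = dict(zip(variables, binding))
        yield [(sign, pred, tuple(env[v] for v in args))
               for sign, pred, args in clause]

def violated_instances(clause, variables, domain, assignment):
    """Subsearch-style walk: bind variables one at a time and abandon a whole
    subtree as soon as some fully bound literal is already satisfied, so many
    ground instances are never touched."""
    def literal_value(sign, pred, args, env):
        value = assignment.get((pred, tuple(env[v] for v in args)))
        return None if value is None else value == sign
    def walk(env, remaining):
        for sign, pred, args in clause:
            if all(v in env for v in args) and literal_value(sign, pred, args, env):
                return                          # prune: cannot be violated here
        if not remaining:
            if all(literal_value(s, p, a, env) is False for s, p, a in clause):
                yield dict(env)                 # every literal assigned false
            return
        var, rest = remaining[0], remaining[1:]
        for value in domain:
            env[var] = value
            yield from walk(env, rest)
            del env[var]
    yield from walk({}, list(variables))

if __name__ == '__main__':
    # p(0,y) is true for every y, all other p atoms are false; q is false
    # everywhere except q(3).  Binding y = 3 first satisfies q(3), so the
    # walk prunes that subtree without ever binding x.
    assignment = {('p', (x, y)): x == 0 for x in DOMAIN for y in DOMAIN}
    assignment.update({('q', (y,)): y == 3 for y in DOMAIN})
    print(sum(1 for _ in ground(CLAUSE, VARS, DOMAIN)))                # 16 instances
    print(list(violated_instances(CLAUSE, VARS, DOMAIN, assignment)))  # x = 0, y in {0,1,2}

The example is tiny, but this kind of pruning, skipping whole blocks of ground instances that an eagerly grounded encoding must materialize, is the sort of saving that the ``exp improvement'' entry for QPROP propagation in the table above refers to.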

The most significant difference between conventional satisfiability work and ASP with stable model semantics is that the relevant logic is not classical but the ``logic of here and there'' [Pearce, 1997]. In the logic of here and there, the law of the excluded middle does not hold; only the weaker $\neg p \vee \neg\neg p$ does.
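To see why, recall the usual two-world Kripke reading of the logic of here and there: an interpretation is a pair of sets of atoms $H \subseteq T$, an atom $p$ holds iff $p \in H$, and a negation $\neg \varphi$ holds iff $\varphi$ fails at both $H$ and $T$. Thus $\neg p$ holds iff $p \notin T$ and $\neg\neg p$ holds iff $p \in T$, so $\neg p \vee \neg\neg p$ holds in every interpretation; but if $H = \emptyset$ and $T = \{p\}$, neither $p$ (since $p \notin H$) nor $\neg p$ (since $p \in T$) holds, and the classical $p \vee \neg p$ fails.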
The weaker law is sufficient for DPLL to be applied, but it does imply that classical resolution is no longer valid. As a result, there seems to be no proof theory for the resulting system, and learning within this framework is not yet understood. Backjumping is used, but the mechanism does not seem to learn new rules from failed subtrees in the search. In an analogous way, cardinality constraints are used but cutting plane proof systems are not. Despite the many parallels between SAT and ASP, then, including the ASP approach in this survey seems somewhat premature.

