7 Solving Highly Constrained Random k-SAT 

The discussion of §3.2 shows how a maximum likelihood estimate of the number of bad values can be computed for each assignment. This estimate can then be used to extend the algorithm to arbitrary k-SAT problems. To the extent that the errors introduced by this approximation are small, the quantum algorithm retains a substantial probability of finding a solution in a single step.
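To make the estimation step concrete, the following sketch picks, for each observed conflict count c, the number of bad values y maximizing the conditional probability P(c|y). The Python function and the toy probability table are illustrative assumptions, not taken from the paper:

```python
def ml_estimate(cond_prob):
    """Maximum-likelihood estimate of the number of bad values.

    cond_prob[y][c] = P(c | y bad values).
    Returns a dict mapping each conflict count c to the y maximizing P(c|y).
    """
    conflict_counts = {c for probs in cond_prob.values() for c in probs}
    return {c: max(cond_prob, key=lambda y: cond_prob[y].get(c, 0.0))
            for c in conflict_counts}

# Hypothetical conditional probabilities for a tiny example:
cond = {0: {0: 1.0}, 1: {1: 0.7, 2: 0.3}, 2: {2: 0.6, 3: 0.4}}
y_est = ml_estimate(cond)   # e.g. 2 conflicts -> estimate 2 bad values
```

Note that a state with 2 conflicts is estimated to have 2 bad values (P = 0.6 beats 0.3), so the rarer case of 1 bad value producing 2 conflicts is misclassified; this is precisely the source of the error discussed next.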

When averaged over the problem ensemble, the error given by Eq. (27) becomes

E = \sum_y P(y) P_{\rm err}(y)

where P_{\rm err}(y) is the probability an assignment with y bad values is (incorrectly) determined to have a different number of bad values. In terms of the conditional probabilities of §3.2,

P_{\rm err}(y) = 1 - \sum_c P(c|y) \delta(y_{\rm ML}(c), y)

where \delta(y_{\rm ML}(c), y) = 1 when the maximum likelihood estimate for a state with c conflicts is y (i.e., y_{\rm ML}(c) = y), and 0 otherwise.
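Combining the two expressions, the averaged error can be evaluated numerically once the conditional probabilities P(c|y), the maximum likelihood estimates, and the ensemble weights P(y) are in hand. A minimal sketch with hypothetical toy numbers (none of these values come from the paper):

```python
def error_prob(cond_prob, y_est):
    """P_err(y): probability a state with y bad values gets a
    maximum-likelihood estimate different from y, i.e.
    P_err(y) = sum over c with y_est[c] != y of P(c|y)."""
    return {y: sum(p for c, p in probs.items() if y_est.get(c) != y)
            for y, probs in cond_prob.items()}

def averaged_error(prior, p_err):
    """Ensemble-averaged error: E = sum_y P(y) * P_err(y)."""
    return sum(prior[y] * p_err[y] for y in prior)

# Toy example: states with 1 bad value are misclassified whenever they
# happen to show 2 conflicts, contributing P(2|1) = 0.3 to P_err(1).
cond = {0: {0: 1.0}, 1: {1: 0.7, 2: 0.3}, 2: {2: 0.6, 3: 0.4}}
y_est = {0: 0, 1: 1, 2: 2, 3: 2}
p_err = error_prob(cond, y_est)
E = averaged_error({0: 0.5, 1: 0.3, 2: 0.2}, p_err)
```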

For simplicity, these maximum likelihood estimates are determined solely from the number of conflicts in each state. The estimates could be made somewhat more accurate by including neighborhood information, as was done for maximally constrained random problems in §6.

Because highly constrained random SAT problems are relatively easy, they have not been well studied with classical algorithms. Hence, to provide a comparison with the quantum search results presented below, these problems were also solved with the classical GSAT procedure [47], limiting each trial to at most 2n steps before a new random initial state was selected. For both the random and balanced ensembles, the median number of search steps required to find a solution grows linearly over the range of sizes considered here in this highly constrained regime. In particular, while the balanced ensemble has larger search costs, its cost still grows only linearly even with such a large number of clauses.
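The restart discipline used for this comparison can be sketched as a standard GSAT loop: flip the variable whose flip leaves the fewest unsatisfied clauses, restarting from a fresh random assignment after 2n steps. The Python encoding of clauses as signed 1-based literals and the `max_tries` cap are illustrative conventions, not details taken from [47]:

```python
import random

def gsat(clauses, n, max_tries=50):
    """Greedy local search for SAT with random restarts.

    clauses: list of clauses, each a list of signed 1-based literals,
             e.g. [1, -3] means (x1 OR NOT x3).
    n:       number of variables.
    Each try starts from a random assignment and makes at most 2n greedy
    flips (the restart limit used in the text) before restarting.
    """
    def unsat(a):
        # Number of clauses with no true literal under assignment a.
        return sum(not any((lit > 0) == a[abs(lit) - 1] for lit in cl)
                   for cl in clauses)

    for _ in range(max_tries):
        a = [random.random() < 0.5 for _ in range(n)]
        for _ in range(2 * n):
            if unsat(a) == 0:
                return a
            def cost(v):
                # Conflicts that would remain after flipping variable v.
                a[v] = not a[v]
                u = unsat(a)
                a[v] = not a[v]
                return u
            v = min(range(n), key=cost)   # greedy: best single flip
            a[v] = not a[v]
        if unsat(a) == 0:                 # check the final flip of this try
            return a
    return None                           # give up after max_tries restarts
```

For instance, `gsat([[1, 2], [-1, 2], [1, -2]], 2)` finds the satisfying assignment with both variables true within a few flips from any starting point.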

Tad Hogg
Feb. 1999