
Motivation

To motivate the mapping described below, we consider an idealized version. It shows why paths through the lattice tend to interfere destructively for nonsolution states, provided the constraints are small.

The idealized map simply maps each set in the lattice equally to its supersets at the next level, while introducing random phases for sets found to be nogood. For this discussion we are concerned with the relative amplitude in solutions and nogoods, so we ignore the overall normalization. Thus, for instance, with $N = 6$ the state $|\{1,2\}\rangle$ will map to an unnormalized superposition of its four supersets of size 3, namely the state $|\{1,2,3\}\rangle + |\{1,2,4\}\rangle + |\{1,2,5\}\rangle + |\{1,2,6\}\rangle$.

With this mapping, a good at level j will receive an equal contribution from each of its j subsets at the prior level. Starting with an amplitude of 1 at level 0 then gives an amplitude of $j!$ for goods at level j. In particular, the amplitude is $L!$ for solutions, which sit at the solution level $L$ (the number of variables in the problem).
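As a rough illustration of this idealized map, the short Python sketch below (not part of the original analysis; the names and setup are illustrative assumptions) propagates unnormalized amplitudes up the subset lattice, attaching a random phase whenever the source set is nogood. With no nogoods at all, every set at level j ends up with amplitude $j!$, as described above.

# A minimal sketch of the idealized lattice map (illustrative names, assumed setup):
# each set contributes equally to all of its supersets at the next level, and a
# random phase is attached whenever the source set is nogood.  The overall
# normalization is ignored, as in the discussion above.
import cmath
import random
from itertools import combinations
from math import factorial

def idealized_map(n, is_nogood, levels):
    """Propagate unnormalized amplitudes up the subset lattice of {1,...,n}."""
    amps = {frozenset(): 1.0 + 0j}                     # level 0: amplitude 1
    for level in range(levels):
        next_amps = {frozenset(c): 0j
                     for c in combinations(range(1, n + 1), level + 1)}
        for s, a in amps.items():
            if is_nogood(s):                           # nogood: pass on a random phase
                a *= cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi))
            for x in range(1, n + 1):                  # equal contribution to supersets
                if x not in s:
                    next_amps[s | {x}] += a
        amps = next_amps
    return amps

# With no nogoods at all, every set at level j has amplitude j!, e.g., 4! = 24 here.
amps = idealized_map(6, lambda s: False, 4)
print(all(abs(a - factorial(4)) < 1e-9 for a in amps.values()))    # prints True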

How does this compare with the contribution to nogoods, on average? This will depend on how many of the subsets are also nogood. A simple case for comparison is when all sets in the lattice are nogood (starting with those at level k given by the size of the constraints, e.g., $k = 2$ for problems with binary constraints). Let $\phi_j$ be the expected value of the magnitude of the amplitude for sets at level j. Each set at level k will have $\phi_k = k!$ (and a zero phase) because all smaller subsets will be goods. A set s at level $j > k$ will be a sum of j contributions from its (nogood) subsets, giving a total contribution of

\[ \psi_s = \sum_r e^{i \rho_r} \psi_r \]

where the sum runs over the subsets r of s of size $j-1$, the $\psi_r$ are their amplitudes, and the phases $\rho_r$ are randomly selected. The $\psi_r$ have expected magnitude $\phi_{j-1}$ and some phase that can be combined with $\rho_r$ to give a new random phase $\rho_r'$. Ignoring the variation in the magnitude of the amplitudes at each level, this gives

\[ \phi_j \approx \phi_{j-1} \left| \sum_r e^{i \rho_r'} \right| \approx \sqrt{j}\, \phi_{j-1} \qquad (1) \]

because the sum of j random phases is equivalent to an unbiased random walk [30] with j unit steps, which has an expected net distance of $\sqrt{j}$. Thus $\phi_j \approx \phi_k \sqrt{j!/k!}$, or equivalently $\phi_j \approx \sqrt{k!\, j!}$, for $j > k$.
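The random-walk step used here is easy to check numerically. The following sketch (an illustration, not taken from the paper) estimates the root-mean-square magnitude of a sum of j unit phasors with independent random phases, which equals $\sqrt{j}$; the mean magnitude differs from this only by a constant factor close to one.

# A numerical check of the random-walk estimate used above (illustrative only).
import cmath
import random
from math import sqrt

def rms_net_distance(j, trials=20000):
    """Root-mean-square magnitude of a sum of j unit phasors with random phases."""
    total = 0.0
    for _ in range(trials):
        s = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi)) for _ in range(j))
        total += abs(s) ** 2
    return sqrt(total / trials)

for j in (4, 16, 64):
    print(j, round(rms_net_distance(j), 2), sqrt(j))   # estimate vs. sqrt(j)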

This crude argument gives a rough estimate of the relative probabilities for solutions compared to complete nogoods. Suppose there is only one solution. Then its relative probability is $(L!)^2$. The nogoods have relative probability $\left[ \binom{N}{L} - 1 \right] \phi_L^2$ with $\phi_L$ given by Eq. 1. An interesting scaling regime is $N = bL$ with fixed b (i.e., L variables with b possible values each), corresponding to a variety of well-studied constraint satisfaction problems. This gives

\[ \frac{(L!)^2}{\left[ \binom{N}{L} - 1 \right] \phi_L^2} \approx \frac{L!}{k!\, \binom{bL}{L}} \]

This ratio goes to infinity as problems get large, so the enhancement of solutions is more than enough to compensate for their rareness among sets at the solution level.
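A quick numeric check of this ratio makes the growth explicit. The sketch below is an illustration under the assumptions stated above: a single solution, binary constraints ($k = 2$), and fixed $b$ (here $b = 2$).

# The ratio L! / (k! C(bL, L)) for fixed b, with binary constraints (k = 2).
from math import comb, factorial

b, k = 2, 2
for L in (5, 10, 20, 40):
    ratio = factorial(L) / (factorial(k) * comb(b * L, L))
    print(L, f"{ratio:.3g}")      # grows rapidly with L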

The main limitation of this argument is the assumption that all subsets of a nogood are also nogood. For many nogoods this will not be the case, resulting in less opportunity for cancellation of phases. The worst situation in this respect is when most subsets are goods. This could be because the constraints are large, i.e., they do not rule out states until many items are included. Even with small constraints, this could happen occasionally due to a poor ordering choice for adding items to the sets, which suggests that a lattice restricted to assignments made in a single order will be much less effective in canceling amplitude in nogoods. For the problems considered here, with small constraints, a large nogood cannot have too many good subsets: to be nogood means some small subset violates a (small) constraint, and most subsets obtained by removing one element will still contain that violating subset and hence remain nogood. In fact, some numerical experiments (with the class of unstructured problems described below) show that this mapping is very effective in canceling amplitude in the nogoods. Thus the assumptions made in this simplified argument seem to provide the correct intuitive description of the behavior.

Still, the assumption of many nogood subsets underlying the above argument does suggest that the extreme cancellation derived above will apply least well when the problem has many large partial solutions. This gives a simple explanation for the difficulty encountered with the full map described below at the phase transition point: this transition is associated with problems that have relatively many large partial solutions but few complete solutions. Hence we can expect relatively less cancellation of at least some nogoods at the solution level and a lower overall probability of finding a solution.

This discussion suggests why a mapping of sets to supersets, along with random phases introduced at each inconsistent set, can greatly decrease the contribution to nogoods at the solution level. However, this mapping itself is not physically realizable because it is not unitary. For example, with $N = 3$ the mapping from level 1 to level 2 takes the states $|\{1\}\rangle$, $|\{2\}\rangle$ and $|\{3\}\rangle$ to $|\{1,2\}\rangle$, $|\{1,3\}\rangle$ and $|\{2,3\}\rangle$ with the matrix

\[ M = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix} \]
where the columns correspond to $\{1\}$, $\{2\}$, $\{3\}$ and the rows to $\{1,2\}$, $\{1,3\}$, $\{2,3\}$.

Here, the first column means the state $|\{1\}\rangle$ contributes equally to $|\{1,2\}\rangle$ and $|\{1,3\}\rangle$, its supersets, and gives no contribution to $|\{2,3\}\rangle$. We see immediately that the columns of this matrix are not orthogonal, though they can easily be normalized by dividing the entries by $\sqrt{2}$. More generally, this mapping takes each set at level i to the $N - i$ sets with one more element. The corresponding matrix M has one column for each i-set and one row for each $(i+1)$-set. In each column there will be exactly $N - i$ ones (corresponding to the supersets of the given i-set) and the remaining entries will be 0. Two columns will have at most a single nonzero value in common (and only when the two corresponding i-sets have all but one of their values in common: this is the only way they can share a superset). This means that as N gets large, the columns of this matrix are almost orthogonal (provided $N - i$ is also large, the case of interest here). This fact is used below to obtain a unitary matrix that is fairly close to M.
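The structure of M described here is easy to verify directly. The following sketch (illustrative only; the function names are not from the paper) builds the 0-1 matrix mapping i-sets to (i+1)-sets for a given N, and confirms that each column contains $N - i$ ones, that two distinct columns share at most one nonzero row, and that the normalized inner product of overlapping columns, $1/(N-i)$, shrinks as N grows.

# Columns of the superset map M at level i (illustrative sketch, assumed names).
from itertools import combinations

def superset_matrix(n, i):
    """0-1 matrix with one column per i-set and one row per (i+1)-set of {1,...,n}."""
    cols = [frozenset(c) for c in combinations(range(1, n + 1), i)]
    rows = [frozenset(c) for c in combinations(range(1, n + 1), i + 1)]
    return [[1 if c <= r else 0 for c in cols] for r in rows]

def max_column_overlap(m):
    """Largest inner product between two distinct (unnormalized) columns."""
    ncols = len(m[0])
    return max(sum(row[a] * row[b] for row in m)
               for a in range(ncols) for b in range(a + 1, ncols))

i = 2
for n in (6, 10, 16):
    m = superset_matrix(n, i)
    ones_per_column = sum(row[0] for row in m)          # ones in the first column
    overlap = max_column_overlap(m)
    # After normalizing columns by sqrt(n - i), two overlapping columns have
    # inner product 1 / (n - i), which shrinks as n grows.
    print(n, ones_per_column == n - i, overlap, round(overlap / (n - i), 3))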


