A device consisting of *n* quantum
bits allows for operations on superpositions of classical states. This ability to operate
simultaneously on an exponentially large number of states with just a
linear number of bits is the basis for quantum parallelism. In
particular, repeating the operation of Eq. 7 *n* times, each on a
different bit, gives a superposition with equal amplitudes in all
2^n states.
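This construction can be checked with a small state-vector simulation. The sketch below is my own illustration, not the paper's: it uses the Hadamard transform as a stand-in for the one-bit operation of Eq. 7 (which is not reproduced here) and applies it to each of *n* = 3 bits in turn.

```python
# Apply a Hadamard-style one-bit operation to each bit of an n-bit register,
# turning the classical state |00...0> into an equal superposition of all
# 2**n classical states.

def apply_one_bit_op(state, bit):
    """Apply H = [[1, 1], [1, -1]] / sqrt(2) to the given bit of a state vector."""
    s = 2 ** 0.5
    new = [0.0] * len(state)
    for idx, amp in enumerate(state):
        if amp == 0.0:
            continue
        if (idx >> bit) & 1 == 0:
            new[idx] += amp / s
            new[idx | (1 << bit)] += amp / s
        else:
            new[idx ^ (1 << bit)] += amp / s
            new[idx] -= amp / s
    return new

n = 3
state = [0.0] * (2 ** n)
state[0] = 1.0                   # start in the classical state |00...0>
for bit in range(n):             # repeat the one-bit operation, once per bit
    state = apply_one_bit_op(state, bit)

print([round(a, 4) for a in state])   # all 2**n amplitudes equal 1/sqrt(2**n)
```

Only *n* applications of a two-dimensional operation are needed to spread amplitude over all 2^n states, which is the quantum parallelism described above.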

At first sight quantum computers would seem to be ideal for
combinatorial search problems that are in the class NP. In such
problems, there is an efficient procedure that takes a potential solution set
*s* and determines whether
*s* is in fact a solution, but there
are exponentially many potential solutions, very few of which are in
fact solutions. If *s* ranges over the potential sets to consider, we can quickly
form the superposition of all these sets and then simultaneously evaluate
the consistency test for each, resulting in a superposition of
the sets paired with their evaluation, i.e., of states of the form
(*s*, *soln*). Here (*s*, *soln*) represents a classical search state consisting of the
set *s* along with a variable
*soln* whose value is true or false
according to the result of evaluating the consistency of the set with
respect to the problem requirements. At this point the quantum computer
has, in a sense, evaluated all possible sets and determined which are
solutions. Unfortunately, if we make a measurement of the system, we get
each set with equal probability and so are very unlikely to observe a solution. This
is thus no better than the slow classical search method of random
generate-and-test where sets are randomly constructed and tested until a
solution is found. Alternatively, we can obtain a solution with high
probability by repeating this operation exponentially many times, either serially (taking a long time) or with
multiple copies of the device (requiring a large amount of hardware or
energy if, say, the computation is done by using multiple photons). This
shows a trade-off between time and energy (or other physical resources),
conjectured to apply more generally to solving these search
problems [6], and also seen in the
trade-off of time and number of processors in parallel computers.
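The equivalence with random generate-and-test can be made concrete with some illustrative numbers (my own, not from the paper): measuring the uniform superposition over all 2^n sets returns a solution with probability (number of solutions)/2^n, the same as a single classical random trial, so the expected number of repetitions is the inverse of that fraction.

```python
from fractions import Fraction

# One measurement of the uniform superposition succeeds with probability
# (number of solutions) / 2**n, matching one trial of classical random
# generate-and-test; the expected number of repetitions is the inverse.

n = 20
num_solutions = 3                         # hypothetical solution count
p = Fraction(num_solutions, 2 ** n)       # chance a single measurement succeeds
repeats = 1 / p                           # expected repetitions, serial or parallel
print(p, repeats)
```

With exponentially few solutions, the expected number of repetitions grows exponentially in *n*, whether the repeats cost time (serial) or hardware and energy (parallel copies).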

To be useful for combinatorial search, we cannot simply evaluate the various sets but must instead arrange for amplitude to be concentrated into the solution sets, so as to greatly increase the probability that a solution will be observed. Ideally this would be done with a mapping that gives constructive interference of amplitude in solutions and destructive interference in nonsolutions. Designing such maps is complicated by the fact that they must be linear unitary operators, as described above. Beyond this physical restriction, there is an algorithmic or computational requirement: the mapping should be efficiently computable [15]. For example, the map cannot require a priori knowledge of the solutions (otherwise constructing the map would require first doing the search). This computational requirement is analogous to the restriction on search heuristics: to be useful, the heuristic itself must not take a long time to compute. These requirements on the mapping trade off against each other. Ideally one would like to find a way to satisfy them all, so the map can be computed in polynomial time and gives, at worst, polynomially small probability of obtaining a solution if the problem is soluble. One approach is to arrange for constructive interference in solutions while nonsolutions receive random contributions to their amplitude. While such random contributions are not as effective as complete destructive interference, they are easier to construct and form the basis for a recent factoring algorithm [48] as well as the method presented here.
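The gap between aligned and random contributions can be illustrated numerically (a toy of my own, not the paper's construction): when *k* equal-magnitude contributions arrive in phase, the resulting amplitude has magnitude *k*·*a* and the probability grows like *k*², whereas random phases give an expected squared magnitude of only *k*·*a*².

```python
import cmath
import random

# Compare k in-phase contributions of magnitude a against k contributions
# with uniformly random phases: constructive interference gives squared
# magnitude (k*a)**2, while random phases give about k*a**2 on average.

random.seed(1)
k, a = 1000, 0.01

aligned = sum(a for _ in range(k))        # all contributions in phase
randomized = sum(a * cmath.exp(2j * cmath.pi * random.random())
                 for _ in range(k))       # random-phase contributions

print(abs(aligned) ** 2)       # (k*a)**2 = 100.0
print(abs(randomized) ** 2)    # roughly k*a**2 = 0.1 on average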

Classical search algorithms can suggest ways to combine the use of superpositions with interference. These include local repair styles of search, where complete assignments are modified, and backtracking search, where solutions are built up incrementally. Using superpositions, many possibilities could be considered simultaneously. However, these search methods have no a priori specification of the number of steps required to reach a solution, so it is unclear how to determine when enough amplitude has been concentrated into solution states to make a measurement worthwhile. Since the measurement process destroys the superposition, it is not possible to resume the computation at the point where the measurement was made if it does not produce a solution. A more subtle problem arises because different search choices lead to solutions in differing numbers of steps. Thus one would also need to maintain any amplitude already in solution states while the search continues. This is difficult due to the requirement that the computations be reversible.

While it may be fruitful to investigate these approaches further,
the quantum method proposed below is based instead on a breadth-first
search that incrementally builds up all solutions. Classically, such
methods maintain a list of goods of a given size. At each step, the list
is updated to include all goods with one additional variable. Thus at
step *i*, the list consists of sets of
size *i*, which are used to create the
new list of sets of size *i*+1. For a CSP with *n*
variables, *i* ranges from 0 to
*n*-1, and after completing these
*n* steps the list will contain all
solutions to the problem. Classically, this is not a useful method for
finding a single solution because the list of partial assignments grows
exponentially with the number of steps taken. A quantum computer, on the
other hand, can handle such lists readily as superpositions. In the
method described below, the superposition at step
*i* consists of all sets of size
*i*, not just consistent ones, i.e.,
the sets at level *i* in the lattice.
There is no question of when to make the final measurement because the
computation requires exactly *n* steps.
Moreover, there is an opportunity to use interference to concentrate
amplitude toward goods. This is done by changing the phase of amplitudes
corresponding to nogoods encountered while moving through the lattice.
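The phase mechanism can be sketched with a small classical toy (my own illustration, not the paper's algorithm: it tracks unnormalized real weights, ignores unitarity, stops at level 2 of a 4-element lattice, and uses a hypothetical nogood). Each step extends every set by one element and flips the sign of any set that contains a nogood; contributions that reach a superset of the nogood along different orders then acquire opposite signs and cancel, which is a simplified analogue of the interference the quantum mapping exploits.

```python
# Breadth-first walk up the set lattice with a sign flip on nogoods.
# Paths through the nogood {0} are flipped a different number of times
# than paths around it, so weights on supersets of {0} cancel to zero.

n = 4                                   # number of lattice elements
nogoods = {frozenset({0})}              # hypothetical minimal nogood

def is_nogood(s):
    """A set is nogood if it contains any minimal nogood."""
    return any(ng <= s for ng in nogoods)

level = {frozenset(): 1.0}              # start at the empty set, level 0
for _ in range(2):                      # climb two levels of the lattice
    nxt = {}
    for s, w in level.items():
        for x in range(n):
            if x not in s:
                t = frozenset(s | {x})
                sign = -1.0 if is_nogood(t) else 1.0
                nxt[t] = nxt.get(t, 0.0) + sign * w
    level = nxt

for s, w in sorted(level.items(), key=lambda kv: sorted(kv[0])):
    print(sorted(s), w)                 # supersets of the nogood end at weight 0
```

In the actual quantum mapping the cancellation comes from a unitary mixing matrix combined with these phase flips, but the toy shows the basic point: sign information accumulated along lattice paths through nogoods can interfere destructively.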

As with the division of search methods into a general strategy (e.g., backtrack) and problem-specific choices, the quantum mapping described below has a general matrix that corresponds to exploring all possible changes to the partial sets, and a separate, particularly simple, matrix that incorporates information on the problem-specific constraints. More complex maps are certainly possible, but this simple decomposition is easier to design and describe. With this decomposition, the difficult part of the quantum mapping is independent of the details of the constraints in a particular problem. This suggests the possibility of implementing a special-purpose quantum device to perform the general mapping. The constraints of a specific problem are used only to adjust phases as described below, a comparatively simple operation.
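A toy version of this decomposition can be written out directly (the names U and D, the use of the two-bit Hadamard transform as the general mixing matrix, and the choice of nogood are all my own stand-ins, not the paper's actual matrices): one step of the map is the product of a diagonal matrix D of problem-dependent phases, here simply -1 on nogoods and +1 on goods, followed by a fixed, problem-independent mixing matrix U.

```python
import math

def kron(A, B):
    """Kronecker product of two square matrices given as lists of lists."""
    na, nb = len(A), len(B)
    return [[A[i // nb][j // nb] * B[i % nb][j % nb]
             for j in range(na * nb)] for i in range(na * nb)]

h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]
U = kron(H, H)                        # general mixing matrix, problem-independent

nogood = [False, True, False, False]  # hypothetical: state 1 violates a constraint
D = [[(-1.0 if nogood[i] else 1.0) if i == j else 0.0 for j in range(4)]
     for i in range(4)]               # diagonal phase matrix, problem-specific

state = [0.5, 0.5, 0.5, 0.5]          # equal superposition over the four sets
phased = [D[i][i] * state[i] for i in range(4)]
mixed = [sum(U[i][j] * phased[j] for j in range(4)) for i in range(4)]
print([round(x, 4) for x in mixed])
```

All of the problem structure enters through the diagonal of D, a comparatively simple operation, while the harder-to-implement matrix U never changes between problems.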

For constraint satisfaction problems, a simple alternative representation to the full lattice structure is to use partial assignments only, i.e., sets of variable-value pairs in which no variable appears more than once. At first sight this might seem better in that it removes the necessary nogoods from consideration and hence increases the proportion of complete sets that are solutions. However, in this case the number of sets as a function of level in the lattice decreases before reaching the solution level, precluding the simple form of the unitary mapping described below for the quantum search algorithm. Another representation that avoids this problem is to consider assignments in only a single order for the variables (selected randomly or through the use of heuristics). This version of the set lattice has been previously used in theoretical analyses of phase transitions in search [53]. This may be useful to explore further for the quantum search, but is unlikely to be as effective. This is because in a fixed ordering some sets become nogood only at the last few steps, resulting in less opportunity for interference based on nogoods to focus amplitude on solutions.
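The claim that level sizes shrink before the solution level can be checked numerically. The sketch below is my own illustration with hypothetical parameters (n = 4 variables, b = 2 values per variable): level i of the partial-assignment lattice holds C(n, i)·b^i sets, since it takes i distinct variables with one value each.

```python
import math

# Count the sets at each level of the partial-assignment lattice:
# level i contains C(n, i) * b**i sets (choose i variables, assign each a value).

n, b = 4, 2
sizes = [math.comb(n, i) * b ** i for i in range(n + 1)]
print(sizes)   # the count peaks below the solution level n, then shrinks
```

Because the top level is smaller than the one below it, a level-to-level map onto the next level cannot be square, which is what precludes the simple unitary form used by the algorithm.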
