To motivate the mapping described below, we consider an idealized version. It shows why paths through the lattice tend to interfere destructively for nonsolution states, provided the constraints are small.
The idealized map simply maps each set in the lattice equally to its supersets at the next level, while introducing random phases for sets found to be nogood. For this discussion we are concerned with the relative amplitude in solutions and nogoods, so we ignore the overall normalization. Thus for instance, with $N = 6$, the state $|\{1,2\}\rangle$ will map to an unnormalized superposition of its four supersets of size 3, namely the state $|\{1,2,3\}\rangle + |\{1,2,4\}\rangle + |\{1,2,5\}\rangle + |\{1,2,6\}\rangle$.
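The superset step of this map is simple to state concretely. A minimal sketch in Python, using a lattice over six elements (the function name is illustrative):

```python
def supersets(s, n):
    """All supersets of the sorted tuple s obtained by adding one element of {1..n}."""
    return [tuple(sorted(s + (x,))) for x in range(1, n + 1) if x not in s]

# A set of size 2 maps equally onto its n - 2 supersets of size 3.
print(supersets((1, 2), 6))
# -> [(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 2, 6)]
```

The random phases for nogoods are applied to these amplitudes separately, as described in the argument that follows.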
With this mapping, a good at level $j$ receives an equal contribution from each of its $j$ subsets at the prior level. Starting with an amplitude of 1 at level 0 then gives an amplitude of $j!$ for goods at level $j$. In particular, solutions, at level $L$ given by the number of variables, have amplitude $L!$.
How does this compare with the contribution to nogoods, on average? This will depend on how many of the subsets are nogoods also. A simple case for comparison is when all sets in the lattice are nogood (starting with those at level $k$ given by the size of the constraints, e.g., $k = 2$ for problems with binary constraints). Let $\phi_j$ be the expected value of the magnitude of the amplitude for sets at level $j$. Each set at level $k$ will have $\phi_k = k!$ (and a zero phase) because all smaller subsets will be goods. A set $s$ at level $j > k$ will be a sum of $j$ contributions from (nogood) subsets, giving a total contribution of
$$\psi_s = \sum_{m=1}^{j} e^{i \theta_m} \psi_{s_m}$$
where the $s_m$ are the subsets of $s$ of size $j - 1$ and the phases $\theta_m$ are randomly selected. The $\psi_{s_m}$ have expected magnitude $\phi_{j-1}$ and some phase that can be combined with $\theta_m$ to give a new random phase. Ignoring the variation in the magnitude of the amplitudes at each level, this gives
$$\phi_j = \phi_{j-1} \sqrt{j}$$
because the sum of $j$ random phases is equivalent to an unbiased random walk in the plane with $j$ unit steps, which has expected net distance of $\sqrt{j}$. Thus
$$\phi_j = k! \sqrt{j!/k!} = \sqrt{k!\,j!} \qquad (1)$$
or $\phi_j = \sqrt{2\,j!}$ for $k = 2$.
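The random-walk step is easy to check numerically: summing $j$ unit-magnitude terms with uniformly random phases gives an expected squared magnitude of exactly $j$, since the cross terms vanish on average. A sketch (function name and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_sq_magnitude(j, samples=20000):
    """Monte Carlo estimate of E|sum of j unit steps with random phases|^2."""
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(samples, j))
    walk = np.exp(1j * theta).sum(axis=1)  # unbiased 2-D random walk, j unit steps
    return float(np.mean(np.abs(walk) ** 2))

# The estimates cluster near j, i.e., the net distance scales as sqrt(j),
# which is exactly the recursion phi_j = phi_{j-1} sqrt(j).
for j in (4, 9, 16):
    print(j, mean_sq_magnitude(j))
```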
This crude argument gives a rough estimate of the relative probabilities for solutions compared to complete nogoods. Suppose there is only one solution. Then its relative probability is $(L!)^2$. The nogoods have relative probability $\binom{N}{L} \phi_L^2$ with $\phi_L$ given by Eq. 1. An interesting scaling regime is $N = bL$ with fixed $b$, corresponding to a variety of well-studied constraint satisfaction problems with $L$ variables, each taking one of $b$ values. This gives
$$\frac{(L!)^2}{\binom{bL}{L} \phi_L^2} = \frac{L!}{k!\,\binom{bL}{L}}$$
Because $L!$ grows as $(L/e)^L$ while $\binom{bL}{L}$ grows only exponentially in $L$, this ratio goes to infinity as problems get large, so the enhancement of solutions is more than enough to compensate for their rareness among sets at the solution level.
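With $\phi_L = \sqrt{k!\,L!}$, the ratio of the solution's relative probability $(L!)^2$ to the nogoods' $\binom{bL}{L}\phi_L^2$ reduces to $L!/(k!\binom{bL}{L})$, which can be evaluated directly with exact integer arithmetic. A sketch for binary constraints ($k = 2$) and $b = 2$ values per variable (the function name is mine):

```python
from math import comb, factorial

def solution_to_nogood_ratio(L, b=2, k=2):
    """(L!)^2 / (C(bL, L) * k! * L!) = L! / (k! * C(bL, L))."""
    return factorial(L) / (factorial(k) * comb(b * L, L))

# The factorial eventually dominates the exponentially growing binomial.
for L in (5, 10, 15, 20):
    print(L, solution_to_nogood_ratio(L))
```

For these parameters the ratio is still below 1 at $L = 5$ but grows without bound as $L$ increases.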
The main limitation of this argument is the assumption that all subsets of a nogood are also nogood. For many nogoods this will not be the case, resulting in less opportunity for cancellation of phases. The worst situation in this respect is when most subsets are goods. This could be because the constraints are large, i.e., they don't rule out states until many items are included. Even with small constraints, this could happen occasionally due to a poor ordering choice for adding items to the sets, suggesting that a lattice restricted to assignments in a single order will be much less effective in canceling amplitude in nogoods. For the problems considered here, with small constraints, a large nogood cannot have too many good subsets: to be nogood means some small subset violates a (small) constraint, so most subsets obtained by removing one element will still contain that violating subset and hence remain nogood. In fact, some numerical experiments (with the class of unstructured problems described below) show that this mapping is very effective in canceling amplitude in the nogoods. Thus the assumptions made in this simplified argument appear to provide the correct intuitive description of the behavior.
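Such numerical observations are easy to reproduce in miniature. The sketch below runs the idealized map on a tiny invented instance (the problem and all names are illustrative, not from the original experiments): three variables with two values each, so $N = 6$ assignment elements with solutions at level $L = 3$. Goods propagate with positive real contributions, so every solution ends with amplitude exactly $L! = 6$, while the random phases attached to nogoods suppress their magnitudes on average:

```python
import cmath
import random
from itertools import combinations

random.seed(1)

N, L = 6, 3
# Elements 1..6 encode assignments: variable v takes value a (element 2v-1)
# or b (element 2v). A set is nogood when it contains a conflicting pair:
# both values of one variable, or the one binary constraint forbidding {1, 3}.
conflicts = [(1, 2), (3, 4), (5, 6), (1, 3)]

def nogood(s):
    return any(a in s and b in s for a, b in conflicts)

# Propagate amplitudes up the lattice one level at a time: a nogood passes on
# its amplitude with a fresh random phase, a good passes it on unchanged.
amp = {(): 1 + 0j}
for level in range(1, L + 1):
    out = {s: a * cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) if nogood(s) else a
           for s, a in amp.items()}
    amp = {s: sum(out[t] for t in combinations(s, level - 1))
           for s in combinations(range(1, N + 1), level)}

solutions = [s for s in amp if not nogood(s)]
ng_mag = [abs(amp[s]) for s in amp if nogood(s)]
print("solutions:", solutions)                        # the 6 consistent assignments
print("solution amplitude:", abs(amp[solutions[0]]))  # exactly L! = 6
print("mean nogood magnitude:", sum(ng_mag) / len(ng_mag))
```

On this instance the nogood magnitudes at the solution level are visibly suppressed relative to the solutions, in line with the random-walk estimate.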
Still, the assumption of many nogood subsets underlying the above argument does suggest that the extreme cancellation derived above will apply least when the problem has many large partial solutions. This gives a simple explanation for the difficulty encountered with the full map described below at the phase transition point: this transition is associated with problems that have relatively many large partial solutions but few complete solutions. Hence we can expect relatively less cancellation of at least some nogoods at the solution level, and a lower overall probability to find a solution.
This discussion suggests why a mapping of sets to supersets, along with random phases introduced at each inconsistent set, can greatly decrease the contribution to nogoods at the solution level. However, this mapping itself is not physically realizable because it is not unitary. For example, the mapping from level 1 to 2 with $N = 3$ takes the states $|\{1\}\rangle$, $|\{2\}\rangle$ and $|\{3\}\rangle$ to $|\{1,2\}\rangle$, $|\{1,3\}\rangle$ and $|\{2,3\}\rangle$ with the matrix
$$M = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}$$
Here, the first column means the state $|\{1\}\rangle$ contributes equally to $|\{1,2\}\rangle$ and $|\{1,3\}\rangle$, its supersets, and gives no contribution to $|\{2,3\}\rangle$. We see immediately that the columns of this matrix are not orthogonal, though they can be easily normalized by dividing the entries by $\sqrt{2}$. More generally, this mapping takes each set at level $i$ to the $N - i$ sets with one more element. The corresponding matrix $M$ has one column for each $i$-set and one row for each $(i+1)$-set. In each column there will be exactly $N - i$ 1's (corresponding to the supersets of the given $i$-set) and the remaining entries will be 0. Two columns will have at most a single nonzero value in common (and only when the two corresponding $i$-sets have all but one of their elements in common: this is the only way they can share a superset). This means that as $N$ gets large, the columns of this matrix are almost orthogonal (provided $i \ll N$, the case of interest here). This fact is used below to obtain a unitary matrix that is fairly close to $M$.
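The near-orthogonality claim can be verified by building $M$ explicitly. In the sketch below (names illustrative), each column has squared norm $N - i$ while distinct columns have inner product 0 or 1, so after normalization the overlaps are at most $1/(N - i)$:

```python
from itertools import combinations

def superset_matrix(n, i):
    """Rows indexed by (i+1)-subsets of {1..n}, columns by i-subsets;
    entry is 1 when the row set is a superset of the column set."""
    cols = list(combinations(range(1, n + 1), i))
    rows = list(combinations(range(1, n + 1), i + 1))
    return [[1 if set(c) <= set(r) else 0 for c in cols] for r in rows]

def column_inner_products(m):
    """Gram matrix of the columns of m."""
    ncols = len(m[0])
    return [[sum(row[a] * row[b] for row in m) for b in range(ncols)]
            for a in range(ncols)]

M = superset_matrix(6, 2)
G = column_inner_products(M)
# Diagonal: each column has N - i = 4 ones; off-diagonal overlaps are 0 or 1.
off_diag = [G[a][b] for a in range(len(G)) for b in range(len(G)) if a != b]
print(G[0][0], min(off_diag), max(off_diag))
```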