Recently, Gerevini and Schubert (1996) have revived the idea that a flaw selection strategy should treat open conditions and threats differently, and have suggested that LIFO should be used as the tie-breaking strategy for deciding among open conditions. They combine these ideas in their ZLIFO strategy:
ZLIFO: {n}LIFO / {o}0 LIFO / {o}1 New / {o}2-infinity LIFO / {s}LIFO
For open conditions with a repair cost of exactly one, the ZLIFO strategy uses a tie-breaking strategy here called ``New''. It prefers the repair of an open condition that can only be established by introducing a new action over the repair of an open condition that can only be established by using an element of the start state. Gerevini and Schubert state that this preference ``gave improvements in the context of Russell's tire changing domain ...without significant deterioration of performance in other domains'' (1996, p. 104). However, the difference was apparently not dramatic, and Gerevini believes this to be an implementation detail, though he is open to the possibility that further study might show this preference to be significant [Gerevini, 1997].
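Read procedurally, the ZLIFO ordering amounts to scanning a fixed sequence of flaw classes and, within a class, breaking ties by recency. The following is a minimal sketch, assuming a hypothetical flaw representation; the `kind`, `cost`, and `new_step` fields are illustrative inventions, not Gerevini and Schubert's implementation:

```python
def select_flaw(flaws):
    """Pick the next flaw per ZLIFO's priority classes, breaking ties LIFO.

    `flaws` is a list ordered oldest-first, so scanning it in reverse finds
    the most recently added flaw in a class (LIFO).  Each flaw is a dict:
      kind:     'nonseparable', 'open', or 'separable'
      cost:     number of ways to repair it (open conditions only)
      new_step: True if its only repair introduces a new action
    """
    def last(pred):
        for f in reversed(flaws):  # reversed => LIFO tie-breaking
            if pred(f):
                return f
        return None

    return (last(lambda f: f['kind'] == 'nonseparable')                 # {n} LIFO
            or last(lambda f: f['kind'] == 'open' and f['cost'] == 0)   # {o}0 LIFO
            or last(lambda f: f['kind'] == 'open' and f['cost'] == 1
                    and f['new_step'])                                  # {o}1 New
            or last(lambda f: f['kind'] == 'open' and f['cost'] == 1)   # {o}1 rest
            or last(lambda f: f['kind'] == 'open' and f['cost'] >= 2)   # {o}2-inf LIFO
            or last(lambda f: f['kind'] == 'separable'))                # {s} LIFO
```

Note how the ``New'' preference only discriminates among cost-one open conditions; every other class falls back to pure recency.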
Gerevini and Schubert make three primary claims about ZLIFO:
The first two claims are consistent with what we found in the earlier LCFR studies. While a LIFO strategy pays no attention to repair costs, ZLIFO does, at least indirectly, both in its initial preference for nonseparable threats, which have a repair cost of no more than two, and in its secondary preference for forced opens.
The third claim is harder to square with our earlier LCFR study, in which the LIFO-based strategies, such as UCPOP and DUnf, generated much larger search spaces than the least-cost based strategies. What explains ZLIFO's performance? Gerevini and Schubert answer this question as follows:
Based on experience with search processes in AI in general, [a LIFO] strategy has much to recommend it, as a simple default. In the first place, its overhead cost is low compared to strategies that use heuristic evaluation or lookahead to prioritize goals. As well, it will tend to maintain focus on the achievement of a particular higher level goal by regression ...rather than attempting to achieve multiple goals in a breadth-first fashion. [p. 103]

Their point about overhead is an important one. ZLIFO is a relatively inexpensive control strategy, and a competing strategy that does a better job of pruning the search space may end up paying excessive overhead. But it is the second point that addresses the question we are asking here, namely, how ZLIFO could produce smaller search spaces. Gerevini and Schubert go on to say that:
[m]aintaining focus on a single goal should be advantageous at least when some of the goals to be achieved are independent. For instance, suppose that two goals G1 and G2 can both be achieved in various ways, but choosing a particular method of achieving G1 does not rule out any of the methods of achieving G2. Then if we maintain focus on G1 until it is solved, before attempting G2, the total cost of solving both goals will just be the sum of the costs of solving them independently. But if we switch back and forth, and solutions of both goals involve searches that encounter many dead ends, the combined cost can be much larger. This is because we will tend to search any unsolvable subtree of the G1 search tree repeatedly, in combination with various alternatives in the G2 search tree .... [p. 103]

This is certainly a plausible explanation. A key remaining question, of course, is the extent to which this explanation carries over to the many planning problems that involve interacting goals.
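The claimed blowup can be made concrete with a back-of-envelope node count. In the sketch below, the model and the numbers are illustrative, not the authors': G1's search tree contains `dead` nodes in unsolvable subtrees plus `good` nodes on its one successful branch, and G2 offers `alts` independent alternatives.

```python
# Toy cost model for the quoted argument (an illustration, not the
# authors' analysis).

def focused(dead, good, alts):
    # Solve G1 to completion first: its dead-end subtrees are searched
    # exactly once, and then G2's alternatives are tried on their own.
    return (dead + good) + alts

def interleaved_worst_case(dead, good, alts):
    # Switching back and forth can re-enter G1's unsolvable subtrees
    # under each of G2's alternatives before G1's good branch is found.
    return alts * dead + good + alts

print(focused(100, 5, 10))                 # 115 nodes
print(interleaved_worst_case(100, 5, 10))  # 1015 nodes
```

Under independence the focused cost stays additive, while interleaving multiplies G1's dead-end cost by the number of G2 alternatives; this is exactly the repeated search of unsolvable subtrees that the quotation describes.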