Anytime Search in Large, Dynamic Graphs
Maxim Likhachev*, Dave Ferguson**, Geoff Gordon***, Anthony Stentz***, and Sebastian Thrun****
*University of Pennsylvania, **Intel Research Pittsburgh, ***Carnegie Mellon University, ****Stanford University
Agents operating in the real world often have limited time available for planning their
next actions. Producing optimal plans is infeasible in these scenarios. Instead, agents must
be satisfied with the best plans they can generate within the time available. One class of
planners well suited to this task is that of anytime planners, which quickly find an initial,
highly suboptimal plan and then improve that plan until time runs out.
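To illustrate the anytime idea, the sketch below runs weighted A* (f = g + eps * h) with a decreasing inflation factor eps, yielding progressively better solutions, each with a suboptimality bound of eps times the optimal cost. This is only a minimal illustration: the function names (`weighted_astar`, `anytime_search`) and the fixed eps schedule are our own assumptions, and unlike the algorithm presented in this paper, each search here restarts from scratch rather than reusing previous search effort.

```python
import heapq

def weighted_astar(start, goal, neighbors, h, eps):
    """A* with an inflated heuristic: f = g + eps * h.
    With a consistent h, the returned cost is at most eps * optimal."""
    g = {start: 0.0}
    parent = {start: None}
    open_heap = [(eps * h(start), start)]
    closed = set()
    while open_heap:
        _, s = heapq.heappop(open_heap)
        if s == goal:
            cost = g[s]
            path = []
            while s is not None:           # walk parent pointers back to start
                path.append(s)
                s = parent[s]
            return cost, path[::-1]
        if s in closed:
            continue                        # stale heap entry
        closed.add(s)
        for t, c in neighbors(s):
            new_g = g[s] + c
            if new_g < g.get(t, float("inf")):
                g[t] = new_g
                parent[t] = s
                heapq.heappush(open_heap, (new_g + eps * h(t), t))
    return float("inf"), None

def anytime_search(start, goal, neighbors, h, eps_schedule=(3.0, 2.0, 1.0)):
    """Yield (eps, cost, path) for a decreasing inflation schedule:
    an initial, possibly suboptimal plan first, then tighter ones."""
    for eps in eps_schedule:
        cost, path = weighted_astar(start, goal, neighbors, h, eps)
        yield eps, cost, path
```

Each yielded solution comes with the bound cost &lt;= eps * optimal, so the caller can stop at any point in the schedule and still know how far from optimal its current plan can be.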
A second challenge associated with planning in the real world is that models are usually imperfect and environments are often dynamic. Thus, agents need to update their models, and consequently their plans, over time. Incremental planners, which reuse the results of previous planning efforts to generate a new plan, can substantially speed up each planning episode in such cases.
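The reuse idea behind incremental planning can be sketched in a simple setting: after an edge cost decreases, a Dijkstra-style search can be repaired by re-seeding its priority queue with just the affected endpoint, rather than re-running the search from scratch. The function names and the dict-of-lists adjacency layout below are our own assumptions, and this sketch handles only cost decreases; cost increases require more machinery, of the kind the incremental algorithms discussed in this paper provide.

```python
import heapq

def relax(g, parent, adj, queue):
    """Run Dijkstra relaxations from whatever is on the queue until it empties."""
    while queue:
        d, u = heapq.heappop(queue)
        if d > g.get(u, float("inf")):
            continue                        # stale entry
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < g.get(v, float("inf")):
                g[v] = nd
                parent[v] = u
                heapq.heappush(queue, (nd, v))

def shortest_paths(adj, source):
    """Plain Dijkstra from source; returns distances and parent pointers."""
    g = {source: 0.0}
    parent = {source: None}
    relax(g, parent, adj, [(0.0, source)])
    return g, parent

def decrease_edge(adj, g, parent, u, v, new_w):
    """Incrementally repair distances after edge (u, v) gets cheaper:
    re-seed the queue with just the affected endpoint v."""
    adj[u] = [(t, (new_w if t == v else w)) for t, w in adj[u]]
    nd = g.get(u, float("inf")) + new_w
    if nd < g.get(v, float("inf")):
        g[v] = nd
        parent[v] = u
        relax(g, parent, adj, [(nd, v)])
```

Only the portion of the graph reachable through the improved edge is re-expanded, which is the source of the speedup that incremental planners exploit.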
In this paper, we present an A*-based anytime search algorithm that produces significantly better solutions than current approaches, while also providing suboptimality bounds on the quality of the solution at any point in time. We also present an extension of this algorithm that is both anytime and incremental. This extension improves its current solution while deliberation time allows and is able to incrementally repair its solution when changes to the world model occur. We provide a number of theoretical and experimental results and demonstrate the effectiveness of the approaches in a robot navigation domain involving two physical systems. We believe that the simplicity, theoretical properties, and generality of the presented methods make them well suited to a range of search problems involving large, dynamic graphs.