Robot Navigation with Markov Models: A Framework for Path Planning and Learning with Limited Computational Resources

Sven Koenig, Richard Goodwin, and Reid G. Simmons

School of Computer Science, Carnegie Mellon University

Navigation methods for mobile robots need to take various sources of uncertainty into account in order to achieve robust performance. The ability to improve performance with experience and to adapt to new circumstances is equally important for long-term operation. Real-time constraints, limited computation and memory, and the cost of collecting training data must also be accounted for. In this paper, we discuss our evolving architecture for mobile robot navigation, which we use as a test-bed for evaluating methods for dealing with uncertainty under real-time constraints and limited computational resources.

Our architecture is based on POMDP models that explicitly represent actuator uncertainty, sensor uncertainty, and approximate knowledge of the environment (such as uncertain metric information). Using this model, the robot is able to track its likely location as it navigates through a building. In this paper, we discuss two additions to the architecture: a learning component that allows the robot to improve the POMDP model from experience, and a decision-theoretic path planner that takes into account the expected performance of the robot as well as probabilistic information about the state of the world. A key aspect of both additions is the efficient allocation of computational resources.
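As a rough illustration of the belief tracking described above, the following minimal sketch updates a discrete probability distribution over locations using a transition (actuator) model and an observation (sensor) model. The state names, action, observations, and all probabilities here are invented for illustration and are not taken from the paper's actual model.

```python
# Sketch of POMDP-style belief tracking for robot localization over a
# small discrete state space. All numbers below are illustrative only.

def predict(belief, transition, action):
    """Prediction step: b'(s') = sum_s T(s' | s, a) * b(s)."""
    new_belief = {s2: 0.0 for s2 in belief}
    for s, p in belief.items():
        for s2, t in transition[(s, action)].items():
            new_belief[s2] += t * p
    return new_belief

def correct(belief, observation_model, observation):
    """Correction step: b(s) proportional to O(o | s) * b(s), normalized."""
    unnorm = {s: observation_model[s].get(observation, 0.0) * p
              for s, p in belief.items()}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

# Toy corridor with three locations; "forward" succeeds with
# probability 0.8 and leaves the robot in place with 0.2
# (actuator uncertainty).
transition = {
    ("s0", "forward"): {"s0": 0.2, "s1": 0.8},
    ("s1", "forward"): {"s1": 0.2, "s2": 0.8},
    ("s2", "forward"): {"s2": 1.0},
}
# The sensor reports "door" reliably only at s1 (sensor uncertainty).
observation_model = {
    "s0": {"door": 0.1, "wall": 0.9},
    "s1": {"door": 0.8, "wall": 0.2},
    "s2": {"door": 0.1, "wall": 0.9},
}

belief = {"s0": 1.0, "s1": 0.0, "s2": 0.0}   # start known to be at s0
belief = predict(belief, transition, "forward")
belief = correct(belief, observation_model, "door")
```

After one move and one "door" observation, the belief concentrates on s1, showing how the model fuses uncertain motion and uncertain sensing into a location estimate.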
