Probabilistic Navigation
Reid Simmons
Most autonomous indoor robots use landmark-based navigation schemes: the robot
moves down corridors until it observes features (such as doors or corridor
junctions) that indicate it should turn or stop. We implemented
landmark-based navigation for Xavier and found it somewhat wanting: the robot
would sometimes make mistakes and get lost.
To remedy those problems, we have been investigating a @i(probabilistic
navigation) scheme, in which a partially observable Markov model is compiled
from a topological map of the environment. The Markov model is used to track
the robot's position: sensor inputs (dead reckoning and feature detectors) are
used to update the probability distribution of Markov states. A path planner
associates actions with each Markov state, and the robot takes the action with
the highest total probability mass.
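The update-and-vote cycle described above can be sketched as follows. This is a minimal illustration, not the actual Xavier implementation: the three-state corridor world, the transition and observation probabilities, and the policy table are all invented for the example.

```python
# Sketch of probabilistic (Markov) navigation: track a belief over
# discrete Markov states, then take the action with the highest total
# probability mass. All states, models, and numbers are hypothetical.

STATES = ["corridor", "junction", "office"]

# P(next state | current state) under the robot's motion (assumed values)
TRANS = {
    "corridor": {"corridor": 0.6, "junction": 0.4, "office": 0.0},
    "junction": {"corridor": 0.0, "junction": 0.3, "office": 0.7},
    "office":   {"corridor": 0.0, "junction": 0.0, "office": 1.0},
}

# P(observation | state): feature detectors are noisy, so both false
# positives and false negatives are possible (assumed values)
OBS = {
    "door_seen": {"corridor": 0.1, "junction": 0.3, "office": 0.8},
    "no_door":   {"corridor": 0.9, "junction": 0.7, "office": 0.2},
}

# Action the path planner associates with each Markov state (assumed)
POLICY = {"corridor": "forward", "junction": "turn", "office": "stop"}


def update_belief(belief, observation):
    """Predict with the transition model, weight each state by the
    observation likelihood, and renormalize."""
    predicted = {
        s2: sum(belief[s1] * TRANS[s1][s2] for s1 in STATES)
        for s2 in STATES
    }
    weighted = {s: predicted[s] * OBS[observation][s] for s in STATES}
    total = sum(weighted.values())
    return {s: w / total for s, w in weighted.items()}


def choose_action(belief):
    """Each state contributes its probability mass to its planned
    action; return the action with the most total mass."""
    mass = {}
    for s, p in belief.items():
        mass[POLICY[s]] = mass.get(POLICY[s], 0.0) + p
    return max(mass, key=mass.get)


# One cycle: start certain of "corridor", then observe a door feature.
belief = {"corridor": 1.0, "junction": 0.0, "office": 0.0}
belief = update_belief(belief, "door_seen")
action = choose_action(belief)
```

Note that the robot never commits to a single estimated state: even when the belief is split across several states, the action vote aggregates probability mass, which is what makes the scheme robust to individual false positives and negatives.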
This probabilistic navigation scheme has several advantages over
landmark-based navigation schemes: it is more robust to observation errors
(false positives and negatives), it incorporates metric information in a
natural way, and it can easily utilize additional sensor information to
improve its position estimation capabilities. It also has advantages over
other navigation schemes that represent uncertainty (e.g., using Kalman
filters) because it can represent more general probability distributions.
This talk will discuss the probabilistic navigation method, how we use it to
model space, and our experiments to date. In addition, I will discuss our
ongoing activities in map learning and probabilistic planning that utilize the
Markov representations.