In this paper we presented a metric variant of Markov localization, a robust technique for estimating the position of a mobile robot in dynamic environments. The key idea of Markov localization is to maintain a probability density over the whole state space of the robot relative to its environment. This density is updated whenever new sensory input is received and whenever the robot moves. Metric Markov localization represents the state space using fine-grained, metric grids. Our approach employs efficient, selective update algorithms to revise the robot's belief in real-time, and it uses filtering to cope with dynamic environments, which makes it applicable to a wide range of applications.
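The recursive update underlying Markov localization can be sketched as a discrete Bayes filter: a motion update that shifts probability mass according to the motion model, followed by a sensor update that reweights the belief by the observation likelihood. The following is a minimal, illustrative 1-D sketch; the grid size, motion kernel, and sensor likelihood are invented for illustration, and the actual implementation maintains a three-dimensional belief over x, y, and orientation.

```python
import numpy as np

# Minimal 1-D sketch of grid-based Markov localization (illustrative only;
# the real implementation operates on a 3-D grid over x, y, and theta).

def predict(belief, motion_model):
    """Motion update: shift probability mass according to P(l' | l, a)."""
    new_belief = np.zeros_like(belief)
    n = len(belief)
    for i, p in enumerate(belief):
        for offset, prob in motion_model:
            new_belief[(i + offset) % n] += p * prob  # circular world
    return new_belief

def correct(belief, likelihood):
    """Sensor update: multiply belief by P(o | l) and renormalize."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Uniform prior over 10 grid cells.
belief = np.full(10, 0.1)
# Hypothetical sensor likelihood strongly favoring cell 3.
likelihood = np.full(10, 0.1)
likelihood[3] = 0.9
belief = correct(belief, likelihood)
# Noisy motion: intended step of one cell to the right.
belief = predict(belief, [(0, 0.1), (1, 0.8), (2, 0.1)])
```

After the sensor update the belief peaks at cell 3; the motion update then shifts and blurs that peak to cell 4, while the total probability mass remains normalized.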
In contrast to previous approaches to Markov localization, our method uses a fine-grained discretization of the state space. This allows us to compute accurate position estimates and to incorporate raw sensory input into the belief. As a result, our system can exploit arbitrary features of the environment. Additionally, our approach can be applied in arbitrary unstructured environments and does not rely on an orthogonality assumption or on similar assumptions about the existence of particular landmarks, as most other approaches to Markov localization do.
The majority of the localization approaches developed so far assume that the world is static and that the state of the robot is the only changing aspect of the world. To localize a mobile robot even in dynamic and densely populated environments, we developed a technique for filtering out sensor measurements that are corrupted by the presence of people or other objects not contained in the robot's model of the environment.
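One simple way to realize such a filter is to compare each proximity reading with the distance predicted from the map and discard readings that are significantly shorter, since unmodeled obstacles such as people can only shorten a proximity measurement, never lengthen it. The sketch below is a simplified threshold variant; the function name and the margin parameter are illustrative, and the filters described in the paper are formulated probabilistically.

```python
def distance_filter(readings, expected, margin=0.2):
    """Discard readings significantly shorter than the map-predicted
    distance: such readings are likely caused by unmodeled obstacles
    (e.g., people standing in front of the sensor)."""
    kept = []
    for measured, predicted in zip(readings, expected):
        if measured >= (1.0 - margin) * predicted:
            kept.append((measured, predicted))
    return kept

# The second reading (0.4 m measured vs. 2.0 m expected from the map)
# is filtered out as corrupted; the other two are retained.
valid = distance_filter([1.9, 0.4, 3.1], [2.0, 2.0, 3.0])
```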
To efficiently update the huge state spaces resulting from the grid-based discretization, we developed two different techniques. First, we use look-up operations to efficiently compute the quantities necessary to update the belief of the robot given new sensory input. Second, we apply a selective update scheme that focuses the computation on the relevant parts of the state space. As a result, even large belief states can be updated in real-time.
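The two techniques can be illustrated together: sensor likelihoods are precomputed into a table indexed by the discrepancy between expected and measured distance, and the sensor update applies the exact likelihood only to cells whose belief exceeds a threshold, approximating all remaining cells by the average likelihood. The table values, function names, and threshold below are invented for illustration:

```python
import numpy as np

# Hypothetical precomputed look-up table: P(measured | expected) as a
# function of the (discretized) difference between the two distances.
LIKELIHOOD_TABLE = np.array([0.6, 0.15, 0.05, 0.02, 0.01])

def lookup_likelihood(expected, measured):
    """Look-up operation: map each cell's expected distance to P(o | l)."""
    idx = np.minimum(np.abs(expected - measured).astype(int),
                     len(LIKELIHOOD_TABLE) - 1)
    return LIKELIHOOD_TABLE[idx]

def selective_update(belief, likelihood, theta=1e-3):
    """Selective update: apply the exact likelihood only to cells whose
    belief exceeds theta; approximate the rest by the mean likelihood
    so that low-probability cells remain comparable."""
    active = belief > theta
    post = np.where(active, belief * likelihood, belief * likelihood.mean())
    return post / post.sum()

belief = np.array([0.5, 0.3, 0.1, 0.0999, 0.0001])  # low-probability cell last
expected = np.array([2.0, 2.0, 4.0, 6.0, 8.0])      # map-predicted distances
post = selective_update(belief, lookup_likelihood(expected, 2.0))
```

Because only the active cells require a likelihood computation, the per-update cost grows with the size of the high-probability region of the belief rather than with the size of the full grid.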
Our technique has been implemented and evaluated in several real-world experiments at different sites. Recently we deployed the mobile robots Rhino, in the Deutsches Museum Bonn, Germany, and Minerva, in the Smithsonian's National Museum of American History, Washington, DC, as interactive museum tour-guides. During these deployments, our Markov localization technique reliably estimated the position of the robots over long periods of time, even though both robots were permanently surrounded by visitors who produced large numbers of false readings on the robots' proximity sensors. The accuracy of grid-based Markov localization proved crucial for avoiding even obstacles that could not be detected by the robots' sensors, which was accomplished by integrating map information into the collision avoidance system [Fox et al. 1998b].
Despite these encouraging results, several aspects warrant future research. A key disadvantage of our current implementation of Markov localization lies in the fixed discretization of the state space, which is always kept in main memory. To scale up to truly large environments, it seems inevitable that one needs variable-resolution representations of the state space, such as the one suggested in [Burgard et al. 1997, Burgard et al. 1998b, Gutmann et al. 1998]. Alternatively, one could use Monte-Carlo based representations of the state space, as described in [Fox et al. 1999], in which the robot's belief is represented by samples that concentrate on the most likely parts of the state space.
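Such a sample-based representation can be sketched in a few lines: each localization step moves the samples according to the motion command, weights them by the sensor likelihood, and resamples in proportion to the weights, so that samples automatically concentrate on the likely parts of the state space. The 1-D positions, noise level, and likelihood function below are invented for illustration:

```python
import random

random.seed(0)  # fixed seed so this sketch is reproducible

def mcl_step(particles, motion, likelihood, motion_noise=0.1):
    """One Monte-Carlo localization step: move each sample with noise,
    weight it by the sensor likelihood, and resample by weight."""
    moved = [p + motion + random.gauss(0.0, motion_noise) for p in particles]
    weights = [likelihood(p) for p in moved]
    return random.choices(moved, weights=weights, k=len(moved))

# Hypothetical sensor likelihood peaked at position 5.0.
def likelihood(p):
    return 1.0 / (1.0 + (p - 5.0) ** 2)

# Uniform prior over [0, 10]; after one step the samples concentrate
# around the most likely position instead of covering the whole space.
particles = [random.uniform(0.0, 10.0) for _ in range(1000)]
particles = mcl_step(particles, motion=0.0, likelihood=likelihood)
```

In contrast to a fixed grid, the memory and computation required by this representation scale with the number of samples rather than with the size of the environment.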