
Filtering Techniques for Dynamic Environments

 

Markov localization has been shown to be robust to occasional changes of an environment, such as opened or closed doors or people walking by. Unfortunately, it fails to localize a robot if too many aspects of the environment are not covered by the world model. This is the case, for example, in densely crowded environments, where groups of people block the robot's sensors and thus produce many unexpected measurements. The mobile robots Rhino and Minerva, which were deployed as interactive museum tour-guides [Burgard et al. 1998a, Burgard et al. 2000, Thrun et al. 1999], were permanently faced with such a situation. Figure 7 shows two cases in which the robot Rhino is surrounded by many visitors while giving a tour in the Deutsches Museum Bonn, Germany.

[Figure 7: Two situations in which Rhino is surrounded by visitors while giving a tour in the Deutsches Museum Bonn.]

The reason why Markov localization fails in such situations is the violation of the Markov assumption, an independence assumption on which virtually all localization techniques are based. As discussed in Section 2.3, this assumption states that the sensor measurements observed at time t are independent of all other measurements, given that the current state L_t of the world is known. In the case of localization in densely populated environments, this independence assumption is clearly violated when using a static model of the world.
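Written out in the notation of Section 2.3 (the subscripted symbols o_t for the measurement and L_t for the state at time t are our shorthand here), the Markov assumption reads:

    P(o_t | L_t, o_1, ..., o_{t-1}) = P(o_t | L_t)

That is, once the current state is known, earlier measurements carry no additional information about the current one. Unmodeled people violate this: a visitor blocking the sensor affects several consecutive beams and scans at once, so the measurements remain correlated even given L_t.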

To illustrate this point, Figure 8 depicts two typical laser scans obtained during the museum projects (maximal range measurements are omitted). The figure also shows the obstacles contained in the map. Obviously, the readings are corrupted to a large extent, since people in the museum are not represented in the static world model. The different shading of the beams indicates the two classes they belong to: the black lines correspond to the static obstacles in the map and are independent of each other if the position of the robot is known. The grey-shaded lines are beams reflected by visitors in the museum. These sensor beams cannot be predicted by the world model and therefore are not independent of each other. Since beams reflected by nearby people resemble readings from close obstacles, they increase the robot's belief of being close to modeled obstacles, and the robot quickly loses track of its position when incorporating all sensor measurements. To reestablish the independence of the sensor measurements, we could include the position of the robot and the positions of the people in the state variable L. Unfortunately, this is infeasible, since the computational complexity of state estimation increases exponentially in the number of dependent state variables to be estimated.

A closely related solution to this problem would be to adapt the map according to the changes of the environment. Techniques for concurrent map-building and localization such as [Lu & Milios1997, Gutmann & Schlegel1996, Shatkey & Kaelbling1997, Thrun et al. 1998b], however, also assume that the environment is almost static and are therefore unable to deal with such environments. Another approach would be to adapt the perception model to correctly reflect such situations. Note that our perceptual model already assigns a certain probability to events in which the sensor beam is reflected by an unknown obstacle. Unfortunately, such approaches are only capable of modeling this noise on average. While they turn out to work reliably with occasional sensor blockage, they are not sufficient in situations where more than fifty percent of the sensor measurements are corrupted. Our localization system therefore includes filters which are designed to detect whether a given sensor reading is corrupted or not. Compared to a modification of the static sensor model described above, these filters have the advantage that they do not average over all possible situations and that their decision is based on the current belief of the robot.
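To make the "noise on average" point concrete, the following is a minimal sketch of a mixture perception model for a single proximity beam. The structure (a Gaussian around the map-predicted distance, an exponential component for unexpected obstacles in front of it, and a small uniform component) is a common formulation for beam models; the function name, the weights z_*, and all parameter values are illustrative assumptions, not the parameters used in this paper.

```python
import math

def beam_likelihood(measured, expected, z_hit=0.7, z_unexp=0.2, z_rand=0.1,
                    sigma=0.2, lam=0.5, max_range=8.0):
    """Illustrative mixture model for one proximity beam.

    measured: observed range (meters); expected: range predicted from
    the static map for the assumed robot position. All weights and
    parameters are assumed values for this sketch.
    """
    # Gaussian around the map-predicted distance (known, static obstacles).
    p_hit = math.exp(-0.5 * ((measured - expected) / sigma) ** 2) \
        / (sigma * math.sqrt(2.0 * math.pi))
    # Exponential for unexpected obstacles (e.g. people) that can only
    # shorten the reading, never lengthen it.
    p_unexp = lam * math.exp(-lam * measured) if measured < expected else 0.0
    # Small uniform component for random readings.
    p_rand = 1.0 / max_range
    return z_hit * p_hit + z_unexp * p_unexp + z_rand * p_rand
```

The fixed weight z_unexp is exactly the "averaging" the text criticizes: it accounts for occasional blockage, but it cannot rescale itself when most beams of a scan hit people, which is why a per-reading filter is needed instead.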

The filters are designed to select those readings of a complete scan which do not come from objects contained in the map. In this section we introduce two different kinds of filters. The first is the entropy filter; since it filters a reading based solely on its effect on the belief Bel(L), it can be applied to arbitrary sensors. The second is the distance filter, which selects readings according to how much shorter they are than the expected value; it is therefore especially designed for proximity sensors.
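The two filters can be sketched as follows over a discrete position belief. This is an illustrative reconstruction under stated assumptions, not the exact formulation of the paper: the entropy filter here keeps a reading only if incorporating it does not increase the entropy of the belief, and the distance filter discards readings that fall more than an assumed threshold short of the map-predicted distance.

```python
import math

def entropy(belief):
    """Entropy H(Bel) = -sum p*log(p) of a discrete position belief."""
    return -sum(p * math.log(p) for p in belief if p > 0.0)

def entropy_filter(belief, likelihoods):
    """Keep a reading only if the Bayes update it induces does not
    increase the entropy of the belief (sketch; the acceptance
    criterion is our assumption). Returns (keep, posterior)."""
    h_before = entropy(belief)
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    norm = sum(posterior)
    if norm == 0.0:
        return False, belief  # reading inconsistent with every state
    posterior = [p / norm for p in posterior]
    return entropy(posterior) <= h_before, posterior

def distance_filter(measured, expected, threshold=0.5):
    """Keep a proximity reading unless it is significantly shorter than
    the distance expected from the map (threshold is an assumed value)."""
    return measured >= expected - threshold
```

Note the asymmetry exploited by the distance filter: unmodeled people can only shorten a beam, so only readings that undershoot the expected distance are suspect, while readings at or beyond it are kept.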





Dieter Fox
Fri Nov 19 14:29:33 MET 1999