Dieter Fox, Wolfram Burgard, Hannes Kruppa, John Langford, and Sebastian Thrun
Throughout the last decade, sensor-based position estimation has been recognized as a key problem in mobile robotics. The majority of existing work has focused on tracking the position of a single robot: the robot knows its initial position and ``only'' has to accommodate small errors in its odometry as it moves. Recently, several methods have been developed that solve the more difficult global localization problem, that of estimating a robot's position from scratch (see e.g. [7,2]). Our approach to collaborative robot localization addresses the problem of globally localizing multiple robots operating in the same environment. The key question in this context is how to combine the sensor information collected by the different robots in an efficient and probabilistically sound way.
Efficient task achievement by teams of collaborating robots is becoming increasingly important. Our approach makes it possible to combine sensor information collected on different robotic platforms, thereby enhancing the localization performance of each individual robot. The importance of exchanging information during localization is particularly striking for heterogeneous robot teams. Consider, for example, a robot team in which some robots are equipped with expensive, high-accuracy sensors (such as laser range-finders), whereas others carry only low-cost sensors such as sonars. Here, collaborative multi-robot localization facilitates the amortization of high-end, high-accuracy sensors across the entire team.
Almost all existing approaches to mobile robot localization address single-robot localization only. Moreover, the majority of these approaches are incapable of localizing a robot globally; instead, they are designed to track the robot's position by compensating for small odometric errors. Thus, they differ from the approach described here in two respects: they require knowledge of the robot's initial position, and they cannot exploit the additional information available when multiple robots are localized at the same time.
We propose an efficient probabilistic approach for collaborative multi-robot localization. Our approach is based on Markov localization, a family of probabilistic approaches that have recently been applied with great practical success to single-robot localization. In contrast to previous research, which relied on grid-based or coarse-grained topological representations of a robot's state space, our approach adopts a sampling-based representation [4,5], which is capable of approximating a wide range of belief functions in real-time. To transfer information across different robotic platforms, probabilistic ``detection models'' are employed to model the robots' abilities to recognize each other. When one robot detects another, these detection models are used to synchronize the individual robots' beliefs, thereby reducing the uncertainty of both robots during localization. To accommodate the noise and ambiguity arising in real-world domains, detection models are probabilistic, capturing the reliability and accuracy of robot detection. The constraint propagation is implemented using sampling, and density trees are employed to integrate information from other robots into a robot's belief (see Figure 1 for an example).
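The core idea can be sketched in a few lines of code. The following is a minimal, simplified illustration (not the authors' implementation): each robot's belief is a set of position samples, and a detection reported by another robot reweights and resamples those samples according to a hypothetical one-dimensional Gaussian detection model over the relative distance. The function names, the noise parameters, and the reduction of the detection model to a single range measurement are all assumptions made for exposition.

```python
import math
import random

def motion_update(particles, dx, dy, noise=0.05):
    """Propagate each position sample through a simple noisy odometry model."""
    return [(x + dx + random.gauss(0, noise),
             y + dy + random.gauss(0, noise)) for (x, y) in particles]

def detection_update(own_particles, detector_particles, rel_dist, sigma=0.3):
    """Synchronize beliefs after a detection: reweight our samples by how
    consistent they are with the detecting robot's belief (approximated by
    its own samples) and the measured relative distance, then resample."""
    weights = []
    for (x, y) in own_particles:
        # Likelihood averaged over the detecting robot's belief samples
        like = sum(
            math.exp(-((math.hypot(x - dx, y - dy) - rel_dist) ** 2)
                     / (2 * sigma ** 2))
            for (dx, dy) in detector_particles
        ) / len(detector_particles)
        weights.append(like)
    total = sum(weights)
    if total == 0:
        return own_particles  # uninformative detection; keep belief unchanged
    # Importance resampling: draw samples in proportion to their weights
    return random.choices(own_particles, weights=weights, k=len(own_particles))
```

After such an update, samples inconsistent with the detection receive low weight and tend to vanish, which is how a single detection event can sharply reduce a robot's positional uncertainty.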
While our approach is applicable to any sensor capable of (occasionally) detecting other robots, we present an implementation that uses color cameras and laser range-finders for robot detection. The parameters of the corresponding probabilistic detection model are learned using a maximum likelihood estimator. Experiments carried out in real and simulated environments demonstrate that our approach can reduce the uncertainty in localization significantly when compared to conventional single-robot localization. One experiment shows that, under certain conditions, successful localization is possible only if teams of heterogeneous robots collaborate during localization.
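For a Gaussian detection model, the maximum likelihood estimator has a closed form: the model's mean and variance are the sample mean and the (biased) sample variance of the observed detection errors. The sketch below assumes the detection error has been reduced to a scalar (e.g. the difference between detected and true relative range); the function name is hypothetical.

```python
import math

def fit_gaussian_detection_model(errors):
    """Maximum-likelihood estimate of a Gaussian detection model from
    a list of scalar detection errors (detected minus true range).
    Returns (mean, standard deviation)."""
    n = len(errors)
    mu = sum(errors) / n
    # ML variance estimate divides by n (not n - 1)
    var = sum((e - mu) ** 2 for e in errors) / n
    return mu, math.sqrt(var)
```

In practice the errors would be collected from calibration runs in which the robots' true relative positions are known.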
Our current implementation of multi-robot localization only updates the belief of the detected robot (Robin in Figure 1). Preliminary experiments indicate that also using an observation to constrain the belief of the detecting robot (Marian in Figure 1) can significantly increase the performance of the approach. Furthermore, the current approach applies a simple heuristic to minimize the effect of using the same evidence several times. By keeping separate, parallel belief states for each robot detection, we can model the independence between sensor measurements in a probabilistically sound way.