next up previous
Next: Related Work and Basic Up: Position Estimation from Outdoor Previous: Abstract


Teleoperation of mobile robots is a difficult and stressful task; it is well known that remote drivers get lost easily, despite having maps, and that they quickly become disoriented [1, 5, 8]. One key factor contributing to the difficulty is that the teleoperator must simultaneously perform the functions of pilot and navigator; the difficulty of performing these dual duties is familiar to anyone who has gotten lost while driving in an unknown city. Teleoperating a rover driving on the Moon [6] presents further challenges because of the 5-second round-trip communication delay [9], coupled with the unfamiliarity of the environment: lower gravity, unique terrain topography, and variable surface properties.

This paper presents a new application of computer vision that assists operators driving remote vehicles, particularly over long-delay links. Our goal is to reduce the cognitive load on teleoperators by providing cues that help prevent getting lost and disoriented. The basic idea is to offload navigation functions, permitting the remote driver to concentrate on pilot functions.

Our approach (Figure 1) assumes the existence of an elevation map of the imaged area, and assumes that the initial position of the robot is unknown, but constrained to lie in a region the size of the map. The approach follows three steps:

  1. Analyze images sent by the remote mobile robot. The system finds structures in the images that can be matched against the map; currently, we focus on mountain peaks as the map features.
  2. Estimate position based on the images. The system matches the structures in the images with topographic structures in maps, and uses a probabilistic approach to generate position estimates.
  3. Overlay position information on the rover-acquired images. This overlay display resembles "augmented reality" systems for training and medical applications [3].
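To make step 2 concrete, the following sketch estimates a rover position by grid search, maximizing a Gaussian likelihood of the bearings to matched peaks. This is only an illustration of the general idea, not the paper's implementation: the peak coordinates, noise model, known peak correspondences, known heading, and grid parameters are all assumed for the example.

```python
import math

# Hypothetical map peak positions (x, y) in meters -- invented for illustration.
MAP_PEAKS = [(1000.0, 4000.0), (3500.0, 3800.0), (2500.0, 500.0)]

def expected_bearings(pos, peaks):
    """Bearing (radians) from position pos to each map peak."""
    return [math.atan2(py - pos[1], px - pos[0]) for px, py in peaks]

def log_likelihood(observed, predicted, sigma=0.05):
    """Gaussian log-likelihood of observed bearings given predicted ones."""
    ll = 0.0
    for o, p in zip(observed, predicted):
        # Wrap the angular difference into (-pi, pi].
        d = math.atan2(math.sin(o - p), math.cos(o - p))
        ll -= (d * d) / (2.0 * sigma * sigma)
    return ll

def estimate_position(observed, peaks, grid_step=100.0, size=5000.0):
    """Exhaustive grid search for the maximum-likelihood position."""
    best, best_ll = None, -math.inf
    steps = int(size / grid_step) + 1
    for i in range(steps):
        for j in range(steps):
            pos = (i * grid_step, j * grid_step)
            ll = log_likelihood(observed, expected_bearings(pos, peaks))
            if ll > best_ll:
                best, best_ll = pos, ll
    return best

# With noiseless bearings from a grid point, the search recovers it exactly.
obs = expected_bearings((2000.0, 2000.0), MAP_PEAKS)
print(estimate_position(obs, MAP_PEAKS))  # → (2000.0, 2000.0)
```

A real system would replace the exhaustive grid with the probabilistic machinery described in Section 4 and would have to handle noisy bearings and uncertain peak correspondences.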



Figure 1: An Interface for Teleoperation of Mobile Robots

Figure 2 illustrates the current interface for remote driving. The Map Window (top left) presents maps of the environment, and indicates the rover location. The Image Window (bottom) displays images from the rover, together with information about the position of mountains and peaks. The View Window (top right) shows artificially generated views of the rover's environment.



Figure 2: Interface Displays: Map (top left), Synthetic View (top right), Image

We use 7.5 minute Digital Elevation Maps (DEMs) provided by the United States Geological Survey (USGS). A typical 7.5 minute DEM covers 10 km by 14 km (140 km2), containing some 2 x 10^5 elevation values, recorded every 30 meters. For lunar applications, we will use maps produced by the Apollo missions. The map in Figure 2 covers approximately 37 km2, about half of the city of Pittsburgh. The gray dot in the middle of the map indicates where the image in the Image Window was taken. The black dots indicate where peaks have been identified by a preprocessing stage, which automatically detects local maxima in the 7.5 minute DEM; more sophisticated methods are used by Thompson [15].
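The local-maxima preprocessing can be sketched as a simple scan over the elevation grid, marking cells strictly higher than all eight neighbors. This is an illustrative sketch only, not the actual preprocessing code or Thompson's method, and the toy DEM below is invented.

```python
def find_peaks(dem):
    """Return (row, col) of cells strictly higher than all 8 neighbors."""
    rows, cols = len(dem), len(dem[0])
    peaks = []
    for r in range(1, rows - 1):        # skip the border cells
        for c in range(1, cols - 1):
            h = dem[r][c]
            if all(dem[r + dr][c + dc] < h
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)):
                peaks.append((r, c))
    return peaks

# Toy elevation grid (invented values) with two local maxima.
dem = [
    [0, 0, 0, 0, 0],
    [0, 5, 0, 0, 0],
    [0, 0, 0, 7, 0],
    [0, 0, 0, 0, 0],
]
print(find_peaks(dem))  # → [(1, 1), (2, 3)]
```

On real 30-meter DEM data, such a naive scan would report many insignificant bumps; filtering by prominence or smoothing first, as in the more sophisticated methods cited above, would be needed in practice.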

This paper is organized as follows. In the next section we discuss related work. In Section 3 we present a vision-based mountain peak detector, and in Section 4 we present a maximum likelihood solution to the position estimation problem. Then, we describe experimental results with real data obtained in the Pittsburgh East and Dromedary Peak USGS quadrangles, reporting improvements in speed and accuracy relative to previous approaches.


Fabio G. Cozman