

Human operators can teleoperate rovers in an unfamiliar environment only with great difficulty when they must rely solely on imagery sent by the rover, even with maps of the rover's environment [1, 6, 9]. Teleoperation presents further challenges on a lunar mission [7]: the 5-second round-trip communication delay [11] is compounded by the unfamiliarity of the environment, the lower gravity, and variable surface properties. For example, astronauts in the Apollo missions had great difficulty judging distances to mountains and craters [5].

This paper presents a system that assists operators driving remote vehicles. The basic idea is to offload navigation functions, permitting the remote driver to concentrate on piloting without getting disoriented or lost. Figure 1 summarizes the idea. The operator observes images from the rover and consults a topographic map of the imaged area. The rover's position is unknown but constrained to lie within a region the size of the map. The images are analyzed, and structures that also appear in the map are marked. In this paper we report on an automatic mountain detector and a position estimator that operates from the detected peaks. The ultimate goal is to overlay position information on the maps and on rover-acquired images, just as ``augmented reality'' systems do for training and medical applications [3].
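To make the idea concrete, here is a minimal sketch of position estimation from detected peaks: given the map coordinates of known peaks and the bearings at which the detector found those peaks in a panorama, each candidate map cell is scored by its total angular mismatch and the best cell is returned. All function names, the fixed peak-to-detection correspondence, and the brute-force grid search are illustrative assumptions, not the estimator actually described in this paper.

```python
import math

def bearing(from_xy, to_xy):
    """Bearing (radians, counterclockwise from +x) from one map point to another."""
    dx = to_xy[0] - from_xy[0]
    dy = to_xy[1] - from_xy[1]
    return math.atan2(dy, dx)

def angular_error(a, b):
    """Smallest absolute difference between two angles, in [0, pi]."""
    d = (a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def estimate_position(map_peaks, observed_bearings, grid):
    """Return the candidate cell whose predicted peak bearings best match
    the observed ones (sum of angular errors; peaks are assumed matched
    to detections in order -- a simplifying assumption)."""
    best_cell, best_cost = None, float("inf")
    for cell in grid:
        cost = sum(
            angular_error(bearing(cell, peak), obs)
            for peak, obs in zip(map_peaks, observed_bearings)
        )
        if cost < best_cost:
            best_cell, best_cost = cell, cost
    return best_cell
```

For instance, with map peaks at (0, 10) and (10, 0) and detected bearings of 90 and 0 degrees, the only consistent observer position is the origin, which the grid search recovers.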



Figure 1: An interface for teleoperation of mobile robots

The interface presents three windows to the operator. The first window carries video; we currently use a standard video display on a Silicon Graphics workstation to view our footage. The second window displays panoramas formed from selected images and indicates the results of the mountain detector. The third window, depicted in Figure 2, carries the map information. The map can be viewed from above, as displayed, or rendered as seen from the ground. The map in Figure 2 shows the topography of the Apollo 17 site on the Moon, generated from the Apollo 17 Landing Area topophotomap [10].

Figure 2: The map window, with a top view of the Apollo 17 topographic map (North and South Massifs appear on top and bottom respectively)

The next section discusses the basic requirements of our system; we then present our vision-based mountain detector and results collected on terrestrial and lunar data. The position estimator is described next, together with results obtained for data in the Pittsburgh East and Dromedary Peak USGS quadrangles. We show the most complete set of test images to date and report improvements in speed and accuracy over previous approaches: the implemented system achieves better estimation accuracy than competing methods because of our quantitative approach, and better running time because we pre-compile the relevant map data.


© Fabio Cozman

Tue Jun 24 00:46:56 EDT 1997