Visual Position Estimation:
Estimating Position from Outdoor Images
The Basic Idea
This page describes an application of computer vision to space robotics:
a "smart" teleoperation interface that analyzes the images sent back by a
mobile robot during space missions and helps the human teleoperator stay
oriented.
Teleoperation of mobile robots is a difficult and stressful task;
it is well-known that remote drivers get lost easily, despite having maps
and visible landmarks.
Our goal is to reduce the cognitive load
on teleoperators by providing cues that keep them
from getting lost and disoriented.
The figure below illustrates the basic idea:
the system receives the images from the rover and uses visual cues
and a map of the rover's environment to produce position
estimates that help the operator.
We call the system VIPER.
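As a rough illustration of this kind of map-based position estimation (not VIPER's actual algorithm, whose details are in the papers referenced below), here is a minimal sketch of one common approach: predict the skyline visible from each candidate cell of an elevation map, and pick the cell whose prediction best matches the skyline profile extracted from the rover's image. The function names and the synthetic terrain below are hypothetical.

```python
import math

def predicted_skyline(dem, cell_size, x, y, n_bearings=36, max_range=20):
    """Skyline seen from grid cell (x, y): for each compass bearing,
    the maximum elevation angle to any visible map cell along that ray."""
    h0 = dem[y][x]
    profile = []
    for b in range(n_bearings):
        ang = 2.0 * math.pi * b / n_bearings
        best = -math.pi / 2  # nothing visible: angle to the horizon floor
        for r in range(1, max_range):
            cx = int(round(x + r * math.cos(ang)))
            cy = int(round(y + r * math.sin(ang)))
            if 0 <= cy < len(dem) and 0 <= cx < len(dem[0]):
                elev = math.atan2(dem[cy][cx] - h0, r * cell_size)
                best = max(best, elev)
        profile.append(best)
    return profile

def estimate_position(dem, cell_size, observed):
    """Grid search: return the cell whose predicted skyline is closest
    (in summed squared angular error) to the observed profile."""
    best_pos, best_err = None, float("inf")
    for y in range(len(dem)):
        for x in range(len(dem[0])):
            pred = predicted_skyline(dem, cell_size, x, y)
            err = sum((p - o) ** 2 for p, o in zip(pred, observed))
            if err < best_err:
                best_pos, best_err = (x, y), err
    return best_pos
```

A real system would of course extract the observed skyline from imagery, search a much larger map, and report uncertainty rather than a single cell; this sketch only shows the match-against-the-map structure of the estimate.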
We have run VIPER on data obtained in Pittsburgh,
using sequences of images like the ones illustrated above.
VIPER estimates position with errors of less than 100 meters;
similar accuracy has also been observed with data from Dromedary Peak, Utah.
This page is under construction; contributions are welcome!
More information about VIPER
I have not been able to find much information online about outdoor position
estimation, but if you find something,
please let me know.
There are many excellent printed papers about outdoor localization,
about positioning and navigation for space rovers, etc. A (very) brief
sample is given in the references
in our system description.
Fabio Cozman
This work has been conducted at the Robotics Institute at the School of
Computer Science, Carnegie Mellon University. It has been partially funded
by NASA; Fabio Cozman has a scholarship from CNPq (Brazil). We thank these
four organizations for all their support.