Visual Servoing with the Marsokhod Robot

Built by a Russian rover development team consisting of the Russian Academy of Sciences' Institute for Space Research (IKI), the Babakin Center, and the Mobile Vehicle Engineering Institute (VNIITransMash), Marsokhod was originally designed for a robotic Mars mission that never flew. Marsokhod was intended to follow in the tradition of the Lunokhod rovers, which successfully traversed long distances on the Moon. Today it serves as one of the primary robotics testbeds of the Intelligent Mechanisms Group at NASA Ames Research Center. Recently I used it for some experiments in visual servoing.

Setup:

The visual servoing experiments were carried out on a ProLogic dual-Pentium PC in an Advantech PICMG passive-backplane chassis, running Red Hat Linux 4.2. Images were acquired with a Pulnix TM-6705AN progressive-scan analog CCD camera and a MuTech MV-1000 framegrabber. The camera was mounted on a Directed Perception pan-tilt unit on the left side of the crossbar of the Marsokhod mast.

To visually servo to an object, the controller must know where the object is at all times. This requires visually tracking the object's location in the image from frame to frame; a sketch of one common approach is given below.
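As an illustration of one common approach (not necessarily the tracker used on Marsokhod), correlation-based template tracking follows a feature by storing the patch of pixels around it in the first image and then searching each new image near the previous location for the best match. The sketch below scores candidates by sum-of-squared differences; the image and template layouts are assumptions made for the example.

    // Generic SSD template tracker sketch; the actual tracker used
    // on Marsokhod may differ. Images are row-major 8-bit grayscale.
    #include <cfloat>
    #include <vector>

    struct Point { int x, y; };

    // Search a window around the previous location for the patch that
    // best matches the stored template; return the best position.
    Point trackSSD(const std::vector<unsigned char>& image, int imgW, int imgH,
                   const std::vector<unsigned char>& tmpl, int tmplW, int tmplH,
                   Point prev, int search)
    {
        Point best = prev;
        double bestErr = DBL_MAX;
        for (int dy = -search; dy <= search; ++dy) {
            for (int dx = -search; dx <= search; ++dx) {
                int x0 = prev.x + dx, y0 = prev.y + dy;
                if (x0 < 0 || y0 < 0 || x0 + tmplW > imgW || y0 + tmplH > imgH)
                    continue;                    // candidate falls off the image
                double err = 0.0;
                for (int v = 0; v < tmplH; ++v)
                    for (int u = 0; u < tmplW; ++u) {
                        double d = double(image[(y0 + v) * imgW + (x0 + u)])
                                 - double(tmpl[v * tmplW + u]);
                        err += d * d;            // accumulate squared difference
                    }
                if (err < bestErr) { bestErr = err; best.x = x0; best.y = y0; }
            }
        }
        return best;
    }

Restricting the search to a small window around the previous location keeps the per-frame cost manageable at video rates.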

Once the visual tracking is robust enough, the robot is ready to visually servo towards the object being tracked. In our case we were tracking with a single camera, so there is no depth information as there would be with stereo. However, the camera was mounted on the crossbar of the mast, about a meter and a half above the ground, so with a flat-ground assumption an estimate of the distance to the object can be computed from where it appears in the image. For outdoor terrain this is a poor approximation, since pitch and roll of the vehicle change the estimated distance, but the closer the rover gets to the object, the more reasonable the estimate becomes. Inclinometer information could be incorporated, but was not in our experiments. With an estimate of the azimuth and distance to the target, visual servoing is possible.
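A sketch of the flat-ground geometry: if the camera sits at height h above a flat floor and the tracked feature's ground contact point appears at an angle theta below the horizon, the range to the feature is h / tan(theta). The focal length and parameter values below are illustrative assumptions, not the actual camera calibration.

    // Flat-ground range estimate: range = camHeight / tan(theta), where
    // theta is the total depression angle from the horizon to the
    // feature's ground contact point. Parameter values are assumed.
    #include <cmath>

    double groundRange(double camHeight,   // camera height above ground (m)
                       double tiltDown,    // camera tilt below horizontal (rad)
                       double rowOffset,   // feature's pixels below image center
                       double focalPix)    // focal length in pixels
    {
        // Depression angle = camera tilt plus the in-image angle.
        double theta = tiltDown + atan2(rowOffset, focalPix);
        if (theta <= 0.0) return -1.0;     // at or above horizon: no estimate
        return camHeight / tan(theta);
    }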

The visual servoing actually occurs in two ways. First, the pan-tilt unit is servoed so that the feature being tracked stays in the center of the image plane. Second, the robot drives towards the feature.

A simple control method was implemented which attempted to keep the feature centered. A rough calibration relating pixel coordinates to the pan and tilt axis angles was done, and with a deadband around the center of the image plane, the feature location in every fifth frame was used to generate separate pan and tilt commands to reorient the camera so that the feature would be centered. A sketch of this control law follows.
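A minimal sketch of such a deadband controller, with made-up values for the pixels-per-degree calibration and the deadband width (the real numbers came from the rough calibration mentioned above):

    // Deadband centering control for the pan-tilt unit. The calibration
    // constants here are assumed for illustration.
    #include <cstdlib>

    const double PIXELS_PER_DEG_PAN  = 12.0;  // rough calibration (assumed)
    const double PIXELS_PER_DEG_TILT = 12.0;  // rough calibration (assumed)
    const int    DEADBAND_PIX        = 10;    // no correction inside this band

    // Given the feature location relative to image center, compute the
    // relative pan and tilt moves (degrees) that recenter the feature.
    void centerFeature(int errX, int errY, double* panDeg, double* tiltDeg)
    {
        *panDeg  = (std::abs(errX) > DEADBAND_PIX) ? errX / PIXELS_PER_DEG_PAN  : 0.0;
        *tiltDeg = (std::abs(errY) > DEADBAND_PIX) ? errY / PIXELS_PER_DEG_TILT : 0.0;
    }

Commanding only every fifth frame and ignoring errors inside the deadband keeps the pan-tilt unit from chattering around the setpoint.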

For driving, steering commands were generated from the distance and azimuth to the object. The distance was estimated as described above. To estimate the azimuth, the angle between the camera axis and the feature is measured from the feature location in the image plane, and the camera orientation with respect to the robot (known from the pan-tilt angles and the mast geometry) is used to rotate that direction from the camera coordinate frame into the robot coordinate frame, yielding the actual heading to the target. Using the distance and heading, an arc can be drawn on the ground with initial position and heading given by the current robot state and final position given by the estimated position of the object. Once the steering angle for this trajectory is computed, a driving command is sent to the real-time computer on-board Marsokhod.
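One standard way to turn the distance and heading into a steering command is the circular-arc (pure-pursuit) relation: an arc leaving the robot's current pose and passing through a target at range d and bearing az has curvature 2*sin(az)/d. The sketch below converts that curvature to a steering angle with a simple bicycle model and an assumed effective wheelbase; Marsokhod's actual steering geometry and the on-board command interface are not reproduced here.

    // Circular-arc steering sketch. The pure-pursuit curvature relation
    // and the bicycle-model wheelbase are assumptions for illustration.
    #include <cmath>

    const double WHEELBASE = 1.0;   // effective wheelbase in meters (assumed)

    // azimuth:  heading to target in the robot frame (rad, left positive)
    // distance: flat-ground range estimate to the target (m)
    double steeringAngle(double azimuth, double distance)
    {
        if (distance <= 0.0) return 0.0;               // no valid range estimate
        double kappa = 2.0 * sin(azimuth) / distance;  // arc curvature (1/m)
        return atan(WHEELBASE * kappa);                // steering command (rad)
    }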

The following images show the tracking in action:


First frame

Sixtieth frame

One hundred fiftieth frame

The experiments were successful, although testing time was limited because my stay in California came to an abrupt end on August 15th. My colleagues in the IMG continue to work on the visual servoing.

Acknowledgements:

I would like to thank the whole IMG for their support in this effort. Dave, Hans, and Maria wrote much of the code used, Kurt helped me to understand C++, and Dan was always around to help out with hardware or to point out that I was doing something really stupid and that there was a better way. I look forward to continuing to work with them.

This page is maintained by Matthew Deans, a Ph.D. student in the Robotics Institute, part of the School of Computer Science of Carnegie Mellon University.

Comments? Questions? Mail me at deano@ri.cmu.edu

Last Modified September 17, 1997