Distributed Visual Servoing

David Hershberger and Reid Simmons and Sanjiv Singh and David Kortenkamp¹


In construction, the person moving a piece of material is often not in the best position to see where to move it; a crane operator lowering a beam into place, for example, relies on other people much closer to the workpiece to provide feedback for fine control of its position. The same situation will arise with a team of robots engaged in a construction task. Distributed visual servoing is the problem of closing the loop of visually guided robot motion using the eyes of a second robot. Figure 1 shows a moving robot lining up a beam for attachment to a structure using feedback from another robot that is watching the parts being joined.

Figure 1: A moving robot lines up a beam (moving piece) for attachment to a structure (fixed piece) using feedback from a watching robot.


Currently, robotic assembly tasks are mostly confined to the construction of objects smaller than the robot performing the construction. With distributed visual servoing and other robot cooperation and coordination techniques, it will become possible for multiple mobile robots to work together to build structures much larger than any individual robot. In addition, the robot or robots observing the motion may be able to move as the task object moves to keep the best view possible.

State of the Art:

Although visual servoing by single robots is a well-studied field, coordinating visual servoing between multiple robots appears to be new. Interesting work has been done on optimal camera placement for visual servoing, which may be important for this work.


The software approach currently being implemented uses a look-then-move visual servoing model, with an extended Kalman filter tracking the moving workpiece and its fixed destination. The filter takes the image positions of fiducial marks on the workpieces as input and estimates the 3D poses of the pieces directly. The watching robot sends the moving robot the 3D pose of the fixed (target) object relative to the moving object held by the mover robot, and the mover robot decides how to move the object based on this relative pose. Inter-robot communication uses TCA (the Task Control Architecture), developed by Reid Simmons.
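The core computation the watching robot performs can be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes the extended Kalman filter has already produced camera-frame poses of both pieces as (rotation, translation) pairs, and all function names here are hypothetical. The relative pose uses the rigid-transform inverse inv(T) = (Rᵀ, -Rᵀt), and the mover's look-then-move step is shown as a simple proportional fraction of the remaining offset.

```python
def mat_vec(R, v):
    """Multiply a 3x3 rotation (nested lists) by a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_mat(A, B):
    """Multiply two 3x3 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(R):
    """Transpose a 3x3 matrix; for a rotation this is its inverse."""
    return [[R[j][i] for j in range(3)] for i in range(3)]

def relative_pose(pose_fixed, pose_moving):
    """Pose of the fixed (target) piece expressed in the moving piece's frame.

    Each pose is (R, t): a rotation and translation in the watching
    robot's camera frame, as the pose filter would estimate them.
    """
    R_f, t_f = pose_fixed
    R_m, t_m = pose_moving
    R_m_inv = transpose(R_m)
    R_rel = mat_mat(R_m_inv, R_f)
    diff = [t_f[i] - t_m[i] for i in range(3)]
    t_rel = mat_vec(R_m_inv, diff)
    return R_rel, t_rel

def servo_step(t_rel, gain=0.2):
    """One look-then-move increment: command a fraction of the offset."""
    return [gain * x for x in t_rel]
```

For example, with both pieces unrotated, the moving piece at the camera-frame origin, and the fixed piece at (1, 0, 0.5), the relative translation is (1, 0, 0.5) and the first proportional step moves the held piece (0.2, 0, 0.1). The proportional gain is purely illustrative; a real system would tune it against the filter's update rate and the arm's positioning accuracy.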

The hardware for the first demonstration of this project will be the mobile robots Xavier and Amelia, with Amelia being the mover robot, since she has an arm. Communication hardware is radio Ethernet.

Future Work:

This distributed visual servoing capability is being implemented as a component of a larger project to build a distributed architecture for multi-robot coordination. This larger project will develop a software framework for allowing dynamic resource allocation, failure recovery, planning, and coordination of groups of heterogeneous robots. The framework will use the Task Description Language (TDL) for controlling the robots and will integrate with the Skill Manager of Metrica's 3T architecture.

Near-term future work on the visual servoing capability will concentrate on implementation and testing on other robots more appropriate for real construction work, in more realistic environments. In the long term, this capability may be important for teams of robots used to build structures on other planets in preparation for human pioneers.



¹ David Kortenkamp is currently employed at Metrica Inc. in Houston, TX.