David Hershberger, Reid Simmons, Sanjiv Singh, and David Kortenkamp
In construction, the person moving a given piece of material is often not in the best position to see where to move it; a crane operator moving a beam into place is a typical example. Other workers, standing much closer to the workpiece, must give the operator feedback for fine control of its position. The same situation arises with a team of robots involved in a construction task. We use the term distributed visual servoing for this problem: closing the loop of visually guided robot motion using the eyes of a second robot. Figure 1 shows a moving robot lining up a beam for attachment to a structure using feedback from another robot that is watching the parts being joined.
Currently, robotic assembly tasks are mostly confined to the construction of objects smaller than the robot performing the construction. With distributed visual servoing and other robot cooperation and coordination techniques, it will become possible for multiple mobile robots to work together to build structures much larger than any individual robot. In addition, the robot or robots observing the motion may be able to move as the task object moves to keep the best view possible.
Although visual servoing by single robots is a well-studied field, coordinating visual servoing between multiple robots appears to be novel. Interesting work has been done on optimal camera placement for visual servoing, which may prove important for this work.
The software approach implemented to date uses a look-then-move visual servoing model. The watching robot sends information to the moving robot in the form of the 3D pose of the fixed (target) object relative to the moving object (held by the mover robot). The mover robot decides how to move the moving object based on this relative pose information. Communication between the robots uses IPC, developed by Reid Simmons.
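The look-then-move cycle described above can be illustrated with a minimal sketch. This is not the authors' implementation: the observer function, the translation-only pose, the proportional gain, and all names here are illustrative assumptions standing in for the roving eye's pose reports and the mover's motion commands.

```python
import numpy as np

def simulated_observer(obj_pose, target_pose):
    # Stand-in for the watching robot: reports the target's pose
    # relative to the moving object (translation only, for brevity).
    return target_pose - obj_pose

def look_then_move(obj_pose, target_pose, gain=0.5, tol=1e-3, max_steps=100):
    """Look-then-move loop: query the observer, step toward the
    target, and repeat until the relative pose is within tolerance."""
    obj_pose = np.asarray(obj_pose, dtype=float)
    for step in range(max_steps):
        rel = simulated_observer(obj_pose, target_pose)  # "look"
        if np.linalg.norm(rel) < tol:
            return obj_pose, step
        obj_pose = obj_pose + gain * rel                 # "move" (damped step)
    return obj_pose, max_steps

final, steps = look_then_move([0.0, 0.0, 0.0], np.array([1.0, 2.0, 0.5]))
print(final, steps)
```

Because each cycle re-observes before moving, the loop tolerates observer noise and latency at the cost of speed, which matches the coarse-then-fine division of labor between the crane and the mobile manipulator described below.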
Work is currently under way to enable three robots to cooperate on the construction task: a crane, a mobile manipulator, and a roving eye. The crane provides heavy-lift capability, the mobile manipulator provides fine motion control, and the roving eye provides visual feedback to both.
Hardware currently in use at CMU includes the NIST RoboCrane, Amelia, and a new manipulator arm built by Rob Ambrose at NASA Johnson.
This is being implemented as a component of a larger project to build a distributed architecture for multi-robot coordination. This larger project will develop a software framework for allowing dynamic resource allocation, failure recovery, planning, and coordination of groups of heterogeneous robots. The framework will use the Task Description Language (TDL) for controlling the robots and will integrate with the Skill Manager of Metrica's 3T architecture.
Near-term future work on the visual servoing capability will concentrate on implementation and testing with other robots better suited to real construction work in more realistic environments. In the long term, this capability may be important for teams of robots building structures on other planets in preparation for human pioneers.