Newsgroups: comp.robotics
Path: cantaloupe.srv.cs.cmu.edu!rochester!jag
From: jag@cs.rochester.edu (Martin Jagersand)
Subject: Re: robot vision solved?
Message-ID: <1994Nov20.042727.21565@cs.rochester.edu>
Organization: University of Rochester Computer Science Department
References: <3ab8ov$f2i@usenet.srv.cis.pitt.edu> <1994Nov16.155759.17647@llyene.jpl.nasa.gov> <3admgr$ein@news.nd.edu>
Date: Sun, 20 Nov 1994 04:27:27 GMT
Lines: 71

In article <3admgr$ein@news.nd.edu>,
johndavid yoder <jyoder@ovid.helios.nd.edu> wrote:
>In article <1994Nov16.155759.17647@llyene.jpl.nasa.gov>, jack@robotics.jpl.nasa.gov (Jack Morrison) writes:
>|> In article f2i@usenet.srv.cis.pitt.edu, gary@cs.pitt.edu (Gary Livingston) writes:
>|> >Actually, what I meant was, is using robot vision to determine distance
>|> >a solved problem.  For us (people), as soon as we look at a 3-d scene, we
>|> >get a  relative distance measure for almost all points in our field of view
>|> >that is fairly accurate.  Can robot vision do this?
>|> 
>|> Not very well. Nor very quickly. Especially if you mean using cameras
>|> rather than something like a laser scanner.
:
:
>
>Well put, Jack.  There has clearly been a lot of effort, most of which seems
>to have succeeded in proving how difficult the problem is, using cameras, 
>at least.
>
>There has also been significant work done using laser rangefinders to 
>produce an "image" of depth information.
>
>Finally, some efforts have worked at bypassing the problem by examining what
>the information is needed FOR.  Early on, a lot of efforts were made to
>use vision to determine position in order to improve the precision/flexibility
>of robots in assembly/manufacturing tasks.  Most such systems required 
>careful and frequent calibration in order to achieve any degree of success.  Some
>of my colleagues here at Notre Dame have developed a means of using vision and 
>robots to effectively eliminate the need for inferred distance measurements
>by setting the task objectives in the image plane.  If you want more
>info, or our WWW homepage address, please email me.

Nice WWW pages! I definitely think visual task specification and robot
control through visual servoing is the way of the future. It is hard
to do entirely without world coordinate modelling, though. Consider the
task of positioning a bolt for insertion and then screwing it in.
The reach to the hole is well suited to visual servoing: just point
out the image-space location of the hole in, say, a stereo pair of
images, pick your favorite visual feedback algorithm from the
literature, and let it servo the bolt in over the hole.
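
For concreteness, here is a rough sketch of one such feedback step.
This is my own minimal illustration in Python, not anybody's published
algorithm; the shapes and the simple pseudo-inverse law are assumptions:

    import numpy as np

    def servo_step(J, f_current, f_goal, gain=0.1):
        # J         : estimated image Jacobian mapping joint velocities
        #             to stacked stereo feature velocities, shape (2m, n)
        # f_current : tracked image coordinates of the bolt tip (2m,)
        # f_goal    : pointed-out image coordinates of the hole (2m,)
        error = f_goal - f_current               # image-space error
        dq = gain * np.linalg.pinv(J) @ error    # least-squares joint step
        return dq                                # command this, then repeat

Iterate read-features / servo_step / command-velocity until the image
error is small, and the bolt ends up over the hole without any explicit
3-d reconstruction.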

However, visual space is not the most natural space in which to describe
the turning motion used to tighten the bolt into the threads. Much more
natural is an object-centered coordinate system aligned with the bolt,
so we need some world coordinate modelling after all. A big advantage
is that this model need only be locally accurate around the hole where
the bolt goes in, instead of having to be valid over the potentially
much larger space of the reach to the hole.
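
To make that concrete, here is a hedged sketch of what such a local
model buys you: build a frame from a locally estimated bolt axis, then
command the screwing motion as a twist in that frame. The names and the
pitch convention (advance per revolution) are my own assumptions:

    import numpy as np

    def bolt_frame(axis):
        # Orthonormal frame with z along the locally estimated bolt
        # axis; x and y are arbitrary orthogonal completions.
        z = axis / np.linalg.norm(axis)
        seed = np.array([1.0, 0.0, 0.0])
        if abs(z[0]) > 0.9:                  # avoid a near-parallel seed
            seed = np.array([0.0, 1.0, 0.0])
        x = np.cross(seed, z)
        x = x / np.linalg.norm(x)
        y = np.cross(z, x)
        return np.column_stack([x, y, z])    # columns = frame axes in world

    def tightening_twist(R, pitch, omega):
        # Screw the bolt in: rotate about its z-axis at omega rad/s
        # while feeding forward pitch/(2*pi) metres per radian turned.
        w = omega * R[:, 2]
        v = (pitch / (2.0 * np.pi)) * omega * R[:, 2]
        return v, w

Note that the axis estimate only has to hold in the few centimetres
around the hole, which is exactly the locality argument above.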

A nice feature of the Jacobian-based differential visual feedback
algorithms is that the same motor-visual Jacobian used for the servoing
can be used to create these local object-centered coordinate systems.
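
For instance, a rank-one secant (Broyden-style) update keeps the
Jacobian estimate current as the arm moves; the sketch below is my own
illustration of the idea, not a transcription of any particular paper:

    import numpy as np

    def broyden_update(J, dq, df, alpha=1.0):
        # After a joint move dq produced an observed feature change df,
        # correct the estimate J so that J @ dq ~= df.
        denom = float(dq @ dq)
        if denom > 1e-12:                    # skip degenerate moves
            J = J + alpha * np.outer(df - J @ dq, dq) / denom
        return J

Once the estimate has settled near the hole, the same J is the local
linear model from which an object-centered frame can be read off.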

I'll contribute some more visually controlled manipulation demos. Check
out:

	http://www.cs.rochester.edu/u/jag/PercAct/dvfb.html

or use the menu on my home page:

	http://www.cs.rochester.edu/u/jag

There was a workshop on visual servoing recently, held in conjunction
with the latest Robotics and Automation conference. John Feddema
presented an interesting essay on why industry is not (yet) very
excited about visual-servoing-style robot control.

-- 
Martin Jagersand                 email: jag@cs.rochester.edu
Computer Science Department             jag@cs.chalmers.se
734 Computer Studies Bldg.       Fax:   (716) 461-2018
University of Rochester          
