Newsgroups: comp.robotics
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!europa.eng.gtefsd.com!newsxfer.itd.umich.edu!gatech!usenet.ufl.edu!zeno.fit.edu!campbell.rhs.brevard.k12.fl.us!scampbel
From: Shelle Campbell <scampbel@rhs.brevard.k12.fl.us>
Subject: Research student needs assistance
Message-ID: <CxxM2x.5LE@zeno.fit.edu>
X-Xxmessage-Id: <AACACA1AB603041D@campbell.rhs.brevard.k12.fl.us>
X-Xxdate: Wed, 19 Oct 94 13:05:30 GMT
Sender: news@zeno.fit.edu (USENET NEWS SYSTEM)
Nntp-Posting-Host: campbell.rhs.brevard.k12.fl.us
Organization: Rockledge High School
X-Useragent: Nuntius v1.1.1d24
Date: Wed, 19 Oct 1994 18:01:44 GMT
Lines: 122

I am a ninth grader from Melbourne, Florida, and am doing an ISEF science
project on autonomy applied to robotic structures and three-dimensional
vision aided by neural nets and expert systems.

NOTE: Please take "fingers" to include the thumb unless otherwise specified.

This short essay is in four sections: HELP!, Video Input/Processing, Robot
Arm Mechanics, and Official Description - Please Comment.


HELP!:

. If you have ideas, have had experience with any related experiments, or
        actually work with these topics, please feel free to describe
        your achievements to me.
. For the program that extracts 3D information from the images, I'd
        like advice on how to go about it (other than simple explanations
        of how to triangulate the points' positions -- I mean deciding
        how the points match up between the views, etc.).
. Suggestions on alternative setups for the "stereoscopic imager" and
        arm/fingers would be welcomed.
. I need advice on elements to be considered for each aspect.


VIDEO INPUT/PROCESSING:

To obtain the stereoscopic images I'll be working with, I have a color
composite video digitization board and am planning to connect it to a color
video camera, with four (relatively) high-quality mirrors mounted in front of
it such that each vertical half of the CCD gets a view of the scene from the
left or right perspective respectively:

   *    - object


\\//    - mirrors
 ~~     - video camera
TOP VIEW

I don't know whether I want the mirrors to be automatically positionable, or
even manually adjustable for that matter.  I'm also debating whether to put
sensors on the mirrors to report their position and orientation, or just to
calibrate against some zero position.  Anyway, that will give me a
stereoscopic image to work with.
The software will then need to identify vertices in the image.  To do this, a
loop will run through each scanline, placing a marker whenever the average
intensity of the surrounding pixels changes by more than some threshold.  The
lateral disparity between each pair of control points will then be computed,
and the scene will be translated into a three-dimensional map.  Then the
classification system will fire up and attempt to break the object(s) down
into basic components and recall and/or learn and store their identity.


ROBOT ARM MECHANICS:

As stated later, I'm building a robot arm with five degrees of freedom and an
index finger and two opposing thumbs.  Each thumb will be like the fingers,
basically a one- or two-jointed finger mounted on a rotating base.  The arm
will have SHOULDER ROTATE, SHOULDER HINGE, ELBOW, WRIST ROLL and WRIST YAW,
respectively, down its length.  The basic setup is shown below:
|  _ _
||O_O_)
| \  `JOINTS
   ROTATING BASE
THUMB
   \|/ - HAND
  _O_ - WRIST YAW
  ___ - WRIST ROLL
 //
  O - ELBOW
   || 
__O__ - SHOULDER HINGE
_____ - SHOULDER ROTATE

The arm's motors will be steppers from old 10-, 8-, 5 1/2-, and 3-inch disk
drives.
I am contemplating using an old 8085 chip to receive the computer's commands
and take over the task of actually doling out drive signals to the arm's
individual motors and buffering the responses from the arm's sensors.  Right
now I'm planning on having only one zero-position detector at each joint, and
perhaps, for the fingers (and thumbs), one- or two-stage lightweight pressure
sensors, and maybe an optical detector pair between each finger.


OFFICIAL DESCRIPTION - PLEASE COMMENT:

        Can a robotic arm be constructed and linked to a computer with a
stereoscopic imaging system and be taught to classify three-dimensional
objects, and if so, what factors affect efficient architecture?
        If a standard industrial/experimental-type robot arm, with five
degrees of freedom and an index finger and two opposing thumbs, is
constructed, and a video camera, combined with an image splitter and
connected to a computer with appropriate image processing and neural
network/expert system software, is assembled, then the system can be taught
to classify three-dimensional objects, and the factors affecting the
efficiency of the architecture can be assessed and quantified.

          1) Have human subjects create objects for training -- repeat as
necessary.
          2) Create stereoscopic imaging system.
          3) Create software vision interface.
          4) Create stereo disparity computation filter.
          5) Create neural network/expert system to classify objects.
          6) Create arm.
          7) Create software arm interface.
          8) Create arm controller neural net.
          9) Create master controller to link each subsystem.
        10) Train each net repeatedly, experimenting with manipulation of
each network's controlling factors -- i.e. number of neurons, layer layout,
back-propagation factor, learning speed, etc.
        11) Employ human subjects in the evaluation of the accuracy of the
systems' responses.
        12) Repeat steps 10 and 11 as necessary.

Please send responses to my instructor via e-mail:
scampbel@rhs.brevard.k12.fl.us

Thank you for your assistance.
Chris Campbell
Rockledge High School
Rockledge, FL  USA
