Date: 14 Nov 90 09:39:46-PST
From: Vision-List moderator Phil Kahn <Vision-List-Request@ADS.COM>
Errors-to: Vision-List-Request@ADS.COM
Reply-to: Vision-List@ADS.COM
Subject: Vision-List digest delayed redistribution
To: Vision-List@ADS.COM

Vision-List Digest	Wed Nov 14 09:39:46 PST 90

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 Re:  Shape from Shading
 Motion Detection Using Neural Nets
 Making personal bibliographies public
 MPEG/JPEG images, software implementations
 Video image sequences
 Colour ref.

----------------------------------------------------------------------

Date: Mon, 12 Nov 90 22:26:11 EST
From: mancini@jhunix.hcf.jhu.edu (Todd A. Mancini)
Subject: Re:  Shape from Shading

	Ikeuchi and Horn make several simplifying assumptions in their
original shape from shading algorithm.  I am more interested in
processing real images, so I have begun my work using ray-traced
images.  These images have several properties which are not handled
well by the original algorithm, such as perspective projection,
non-centered objects, and Lambertian models which diminish source
intensity as distance increases.
	My initial models are quadrics (and super-quadrics).  Simpler
methods of enforcing smoothness over the surface, given the
occluding boundary, work better and faster than Ikeuchi and
Horn's method, which also seeks to use intensity information.
	There are other problems to be dealt with; one involves an
over-simplified correspondence between surface orientations and
expected intensity values given a Lambertian model.  In effect, the algorithm
presented in the paper treats all objects as spheres (or ellipsoids)
of unit size.  It is possible to create quadrics which will not render
correctly given their simple definition of a reflectance map.
	I am seeking an algorithm which is more concerned with
observed intensity values in the bit-mapped image to find the
orientation map.  Ideally, such an algorithm would not require the
orientation at the occluding boundary to be input, but by means of a
system similar to edge detection would quickly determine the occluding
boundary and work off of that.  I have already started to show that
for a large set of smooth surfaces, it is enough to determine the
occluding boundary to find the entire surface orientation with a very
high degree of accuracy; information from the intensity map over the
surface will be mainly used to resolve local perturbations from the
smoothness (even in the presence of image noise).
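For concreteness, the Lambertian reflectance map underlying that
orientation-to-intensity correspondence can be written down directly in
gradient space.  A minimal sketch in Python (not code from the paper; the
light direction (ps, qs) and the clamping of self-shadowed orientations
are the only free choices here):

```python
import math

def reflectance(p, q, ps, qs):
    """Lambertian reflectance map R(p,q) in gradient space.

    (p, q) are the surface slopes dz/dx, dz/dy; (ps, qs) give the
    light-source direction in the same coordinates.  R is the cosine
    of the angle between the surface normal (-p, -q, 1) and the
    source direction (-ps, -qs, 1), clamped at zero for orientations
    facing away from the light.
    """
    num = 1.0 + p * ps + q * qs
    den = (math.sqrt(1.0 + p * p + q * q)
           * math.sqrt(1.0 + ps * ps + qs * qs))
    return max(0.0, num / den)

# A patch whose normal points straight at the source has R = 1:
print(reflectance(0.3, -0.2, 0.3, -0.2))  # 1.0
```

Note that R depends only on orientation, never on position or distance,
which is exactly the simplification Todd objects to above.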

	-Todd

------------------------------

Date: 13 Nov 90 22:56:52 GMT
From: leadsv!laic!stiles@decwrl.dec.com (Randy Stiles)
Subject: Motion Detection Using Neural Nets
Keywords: neural networks, motion detection, neuroethology,  machine vision, connectionism
Organization: Lockheed AI Center, Menlo Park

Fellow netters,

Does anyone out there have more info on these developments, reported last
week by the Wall Street Journal, concerning visual object recognition
using color to recognize objects, and motion detection based on the
dragonfly's visual system?  The article reports Michael Swain at U.
Chicago and Randal Nelson at U. Rochester, respectively, as the
researchers on these systems.  I would be especially keen to receive
information from them about these developments.  Please send any info
directly to my email address stiles@laic.lockheed.com.  I will gather
the messages together and post them to the net.

	Randy Stiles (stiles@laic.lockheed.com)


[ Interesting to see what appears in the popular media.  
  Swain/Wixson/Ballard and Nelson's work is described in the Proceedings
  of the AAAI-90 Workshop on Qualitative Vision. See also Nelson & 
  Aloimonos, PAMI, Oct. 1989; also University of Rochester TRs.
				phil...			]


Wall Street Journal Friday, November 9, 1990:

	Scientists are teaching robots and computers  to see in living
colors and to identify moving objects while in motion themselves.
	In research funded partly by the Pentagon's Defense Advanced
Research Projects Agency (DARPA), scientists at the University of
Rochester and the University of Chicago report successfully
programming robots to recognize multicolored objects solely by their
colors.  That's a departure from the traditional approach, in which
objects are recognized by their shape, and from machine vision
experiments, in which robots "see" only single or dominant colors in a
pattern.
	The Rochester robot, for example, was capable of picking out a
box of Kellogg's Frosted Flakes from among 70 similarly colored
objects.
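[ The article does not say how the colour matching is done; Swain's
colour-indexing work compares coarse colour histograms, and the flavour
of the idea can be sketched as follows.  The binning, names, and numbers
below are purely illustrative, not Swain's actual code: ]

```python
def color_histogram(pixels, bins=4):
    """Coarse RGB histogram: each 0-255 channel quantized to `bins` levels."""
    hist = {}
    for (r, g, b) in pixels:
        key = (r * bins // 256, g * bins // 256, b * bins // 256)
        hist[key] = hist.get(key, 0) + 1
    return hist

def intersection(image_hist, model_hist):
    """Histogram intersection, normalized by the model histogram's mass.

    1.0 means every model colour is fully accounted for in the image;
    0.0 means no colour overlap at all.
    """
    common = sum(min(image_hist.get(k, 0), n) for k, n in model_hist.items())
    return common / float(sum(model_hist.values()))

# A mostly-red-and-blue model object matched against a slightly
# different rendering of the same colours:
model = color_histogram([(250, 10, 10)] * 60 + [(10, 10, 250)] * 40)
scene = color_histogram([(240, 20, 15)] * 60 + [(20, 20, 240)] * 40)
print(intersection(scene, model))  # 1.0 with this coarse binning
```

The coarse binning is what buys tolerance to small colour shifts while
still separating objects with genuinely different colour distributions.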
	And while most robots today must remain still to detect
movement, another robot in Rochester was programmed to pick out moving
objects while on the go itself.  The robot was programmed to identify
objects that aren't moving in synch with the movement of its own
visual field.
	Randal Nelson, an assistant professor of computer science at
Rochester, says the software was inspired by the dragonfly, which is
thought to stalk its prey by identifying small objects whose movement
doesn't coincide with that of the rest of its visual field as it
buzzes around.
	Michael Swain, a University of Chicago researcher, says he has
received inquiries from companies that view the color vision research
as potentially useful in commercial applications such as supermarket
checkout systems.  "You can't bar-code a squash," he says, but the
computer might one day be programmed to recognize one.  Mr. Nelson sees
a role for his motion-detection research in future surveillance systems.
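[ The dragonfly-style scheme the article describes (flag whatever fails
to move in synch with the observer's own visual field) can be
caricatured as comparing measured image flow against the flow predicted
from the camera's ego-motion.  The sketch below is purely illustrative;
the names and threshold are invented and this is not Nelson's actual
algorithm: ]

```python
import math

def moving_independently(measured_flow, predicted_flow, threshold=0.5):
    """Return image locations whose optical flow deviates from the flow
    predicted by the camera's own motion by more than `threshold`.

    Both arguments map pixel coordinates to (u, v) flow vectors.
    """
    movers = []
    for loc, (u, v) in measured_flow.items():
        pu, pv = predicted_flow.get(loc, (0.0, 0.0))
        if math.hypot(u - pu, v - pv) > threshold:
            movers.append(loc)
    return movers

# Camera panning right: the background flows uniformly left, but one
# patch does not, so it is flagged as an independently moving object.
predicted = {(x, y): (-2.0, 0.0) for x in range(4) for y in range(4)}
measured = dict(predicted)
measured[(2, 1)] = (1.5, 0.5)
print(moving_independently(measured, predicted))  # [(2, 1)]
```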


------------------------------

Date: Tue, 13 Nov 90 14:10:13 GMT
From: cs_s424@ux.kingston.ac.uk
Subject: Making personal bibliographies public
Organisation: Kingston Polytechnic

Hello

I would like to thank Patrick Flynn for sending his personal bibliography
in to the list. I was able to make use of many references. Perhaps other
members of the vision list could do the same? Or send in a description of
the key areas in your bibliography so that it can be made available on request.

Regards,

Paul Netherwood                      janet    :  P.J.Netherwood@uk.ac.kingston
Research                             internet :  P.J.Netherwood@kingston.ac.uk
                                     phone    :  (+44) 81 549 1366 ext 2923    
                                     local    :  cs_s424@ux.king  
          
School of Computer Science and Electronic Systems,
Kingston Polytechnic, Penrhyn Road, Kingston-upon-Thames, Surrey KT1 2EE, UK.

------------------------------

Date: Tue, 13 Nov 90 19:35:26 +0100
From: John Husoy-stip <jonh@tele.unit.no>
Subject: MPEG/JPEG images, software implementations

The Joint Photographic Experts Group (JPEG), a working
group under ISO/IEC, has recently agreed on a standard for the
compression of digital still images.  The Moving Picture
Experts Group (MPEG) has been concerned with the same problem
in conjunction with storage of digital video sequences.

We are currently engaged in image coding research for both
video and still images and would like to test our algorithms
on the images/image sequences that have been used in conjunction
with the standardization activities.  Therefore:
does anybody know of a source where I can get hold of
test images/sequences that have been used by the JPEG/MPEG
working groups?  Also, is anyone aware of any software implementing
these coding standards, commercially or otherwise?
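[ Until such implementations surface, the computational core of the
JPEG baseline (an 8x8 two-dimensional DCT applied to each image block,
ahead of quantization and entropy coding) can be written down directly.
A slow but straightforward sketch in Python, using the O(N^4) textbook
form rather than a fast factorization: ]

```python
import math

N = 8  # JPEG operates on 8x8 blocks

def dct2(block):
    """Direct 2-D DCT-II of an NxN block (the JPEG forward transform)."""
    def c(k):
        # Normalization factor: 1/sqrt(2) for the k = 0 basis function.
        return math.sqrt(0.5) if k == 0 else 1.0
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

# A flat block concentrates all its energy in the DC coefficient,
# which is what makes the subsequent quantization step pay off:
flat = [[100.0] * N for _ in range(N)]
coeffs = dct2(flat)
print(round(coeffs[0][0]))  # 800; every other coefficient is ~0
```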

Any help is greatly appreciated!

	John Haakon Husoy
	The Norwegian Institute of Technology
	Department of Electrical and Computer Engineering
	7034 Trondheim - NTH
	NORWAY
	email: jonh@tele.unit.no
	tel:   ++ 47 + 7 + 594453
	fax:   ++ 47 + 7 + 944475

------------------------------

Date: Tue, 13 Nov 90 13:36:47 CST
From: silsbee@vision.ee.utexas.edu (Peter Silsbee)
Subject: video image sequences

Hello, we are doing research in image sequence compression.  For evaluation of
results, it is necessary to record the compressed/decompressed sequences on
videotape.  The equipment to which we have access is not great and we are 
looking for an alternative.  We would appreciate any information about what
equipment other researchers in the field are using that gives high-quality
video.

	Thanks in advance,
	Peter Silsbee

------------------------------

Date: Wed, 14 Nov 90 14:30:54 BST
From: A.ALLEN@aberdeen.ac.uk
Subject: Colour ref.

NJC Strachan, P Nesvadba, and AR Allen, "Calibration of a Video 
Camera Digitising System in the CIE L*u*v* Colour Space",   
Pattern Recognition Letters, (in press, Nov. 1990). 
    
Dr A R Allen
Dept of Engineering, University of Aberdeen, Aberdeen, AB9 2UE, UK. 

------------------------------

End of VISION-LIST
********************
