Date: 22 Jul 91 11:13:53-PST
From: Vision-List moderator Phil Kahn <Vision-List-Request@ADS.COM>
Errors-to: Vision-List-Request@ADS.COM
Reply-to: Vision-List@ADS.COM
Subject: VISION-LIST digest 10.32
To: Vision-List@ADS.COM

VISION-LIST Digest    Mon Jul 22 11:13:53 PDT 91     Volume 10 : Issue 32

 - Send submissions to Vision-List@ADS.COM
 - Vision List Digest available via COMP.AI.VISION newsgroup
 - If you don't have access to COMP.AI.VISION, request list 
   membership to Vision-List-Request@ADS.COM
 - Access Vision List Archives via anonymous ftp to ADS.COM

Today's Topics:

 Face images available via ftp
 UPDATE - Camera Calibration Techniques
 Request for info on electronic aperture camera
 Shape-from-X
 Email address of James D. McCafferty!
 shape analysis
 Call for Papers
 EURASIP Course on Intell. Systems for Signal and Image Understanding

----------------------------------------------------------------------

Date: Fri, 19 Jul 91 17:41:55 EDT
From: Matthew Turk <turk@maidavale.media.mit.edu>
Subject: Face images available via ftp

A set of face images is available via anonymous ftp on
"victoria.media.mit.edu" (net address 18.85.0.121).  There are sixteen
people, each digitized under three conditions of illumination, three
of scale, and three of head orientation, making 27 images per person
and 432 face images in total.  Each image is 120x128 pixels, 8 bits
per pixel, stored as raw bytes.

The file is in pub/images/faceimages.tar.Z, almost 6MB (almost 7MB
uncompressed).  To get it:

% ftp victoria.media.mit.edu
anonymous
<your login name+address>
cd pub/images
binary
get faceimages.tar.Z
quit

...and then...
% uncompress faceimages.tar.Z
% tar xvf faceimages.tar

A subset of these images is available on the Vision-List archive
(anonymous ftp to ads.com). 

Enjoy,
	Matthew Turk
	MIT Media Lab

------------------------------

Date: Tue, 16 Jul 1991 23:20:55 GMT
From: team3d@bullet.ecf.utoronto.ca 
Organization: University of Toronto, Engineering Computing Facility
Subject: UPDATE - Camera Calibration Techniques

Hi there, all;

     After posting a message asking for help on the topic of camera
calibration, I was a bit surprised by the number of e-mail messages
flowing in asking me for information... I guess I wasn't the only one
working on this project, or a related one.  Anyway, here is what I
found out from various sources:

 To put it briefly, Roger Y. Tsai of the IBM T.J. Watson Research Center
(or, at least, that's where he used to work) is THE KING of calibration.
The two articles (by him) most worthy of attention are:

1] "An Efficient and Accurate Camera Calibration Technique for 3D
    Machine Vision", 1986 IEEE J. Computer Vision and Pattern Recog.
    (I'm not 100% that's the journal, and if it's not, check PAMI, and RA - 
    the article is on p364-374)
2] "A Versatile Camera Calibration Technique for high-accuracy 3D machine
    vision metrology using off-the-shelf TV cameras and lens", 1987 IEEE
    J. Robotics and Automation (RA) vol RA-3, no.4, p323-344

as well as 

3]  "Camera Calibration by Vanishing Lines for 3D Computer Vision",
    by Ling-Ling Wang and Wen-Hsiang Tsai, found in the April 1991
    IEEE Transactions on Pattern Analysis and Machine Intelligence
    (PAMI), p370-376

NOTE!!!! In #1 there is a mistake on p. 368!  I have two copies of the
     article (I don't know where the second one is from), and the one
     I've listed above has a 'misprint' - the calculation of the
     distorted coordinates should read:

                                  Xd(i) = (Xf(i) - Cx) * dx'/sx
                                  Yd(i) = (Yf(i) - Cy) * dy

    If you follow the derivation, that's what you get, not what he has
    listed in the article - it's just a matter of wrong exponents.

Summary - both #1 and #2 will compute all the intrinsic and extrinsic
parameters of the camera using either a planar or a non-planar test
pattern.  If you want an algorithm which works with very little prior
knowledge of the camera and lens, either of the two articles above
should attract your attention.  You'll need to solve two overdetermined
systems of linear equations, as well as a non-linear equation in 4
unknowns (see Numerical Recipes in C for algorithms, or let me know)

#3 is a new one, for obtaining the camera's position, orientation,
and focal length (lens distortion is assumed known) using a planar
polygon as a test object.  I just got this one this morning, so I
can't say much about it yet.

Keep in touch, everyone, and hope this helps a bit;

Damian

if you want to respond, PLEASE send your mail to: 
damian@virtual.rose.utoronto.ca
Virtual Reality Group, Department of Computer Engineering,
University of Toronto, CANADA

------------------------------

Date:    Thu, 18 Jul 1991 10:16:52 -0500 (CDT)
From: SEARCY@CERES.TAMU.EDU
Subject: Request for info on electronic aperture camera

We are working on a project which requires the placement of a video
camera on a mobile machine operating in ambient sunlight conditions.
We are currently using an auto-iris lens to control the light
intensity of the image.  The camera is mounted in a box isolated with
vibration dampers.  The problem with the current configuration is that
vibration still occurs: shock loads experienced by the camera body and
lens (due to engine vibration and bouncing) cause the mechanical
linkage controlling the aperture to flutter, which in turn causes the
image to flicker.

I was wondering if anyone knows of a camera which can electronically
control the image brightness.  This brightness control should be
similar to an electronic shutter, with no moving mechanical parts.  The
sensor could be adjusted through an amplifier that is sensitive to the
amount of light falling on it, with the adjustment maintained so that
maximum contrast is achieved in the image.  We would prefer an RS-170
output if possible.  If anyone has any information and/or sources, I
would certainly appreciate it.

------------------------------

Date: Sat, 20 Jul 91 0:35:35 BST
From: "K.G. Lim" <kgl@eng.cam.ac.uk>
Subject: Shape-from-X

Dear all,

I am looking into the problem of combining shape-from-X (SFX)
algorithms using a neural network.  Any information on previous work
or references will be much appreciated.

Besides this, I am also looking for implementations of algorithms for
shape-from-shading, shape-from-stereo, and shape-from-texture,
preferably in C and running on Sun machines.

Anyone interested in a summary of the replies, please send me your
e-mail address.  Thank you all in advance.

Best regards,
Kok-Guan Lim

Engineering Dept., University of Cambridge
e-mail: kgl@eng.cam.ac.uk

------------------------------

Date: Thu, 18 Jul 91 18:46:49 -0500
From: vasu@ccwf.cc.utexas.edu (srinivasu pappula)
Subject: Email address of James D. McCafferty!

Does anyone know the e-mail address of James D. McCafferty?  He is the
author of the book "Human and Machine Vision: Computing Perceptual
Organisation" and is presently at British Telecom's Research and
Technology Division, working as a systems and software engineer.

Thanks in advance!

------------------------------

Date: Mon, 22 Jul 91 11:49:17 CDT
From: keith@vision.ee.utexas.edu (Keith Bartels)
Subject: shape analysis

I'm looking for all the references I can get in the following area:
1) 3D shape and motion (shape change) analysis,
for example:  analysis of CT, MRI, etc. images
              analysis of range images
              analysis of 3-D microscope images
              analysis of any other 3-D data set

------------------------------

Date: Thu, 18 Jul 91 02:03:05 MDT
From: hic@vision.auc.dk (Henrik I. Christensen)
Subject: Call for Papers
 
                            CALL FOR PAPERS
 
			    Session Themes
				  on
		 ``How to build your own camera head''
				 and
		``Mobile Robotics and Sensor Fusion''
				   
Special sessions on these themes have been planned for the upcoming
Machine Vision and Robotics conference.  The idea in the camera head
session is to describe camera heads which have been constructed, in
order to indicate solutions for others who might want to build or
buy a similar device. Contributions related to the use of such heads
are also most welcome. In the robotics/fusion session the idea is to
explore the boundary between the vision system and other sensors
and/or the control system. It is well known that vision often should
be studied in the context of the application it is designed for, and
it is often possible to arrive at more robust results if other sensor 
modalities also are utilised.

The Machine Vision and Robotics conference will be held 20-24 April 
1992, at the Marriott Orlando World Center.  The deadline for paper
submissions is 23 September 1991.  (Please also send an e-mail note
to hic@vision.auc.dk if you plan to submit something for this special
theme session.)  Four copies of a 2000 word (or greater) extended 
summary should be submitted to
  SPIE Applications of AI X / Machine Vision & Robotics
  P.O. Box 10
  Bellingham, WA 98227-0010

Papers submitted to the Machine Vision and Robotics conference should 
NOT also be submitted to the Expert and Knowledge-Based Systems part 
of Applications of AI X.  Each paper will be reviewed by two members 
of the program committee and reviews returned to the authors.

The program committee and the conference chair will make a selection 
of the best papers accepted for the Machine Vision and Robotics 
Conference, and these authors will be invited to submit a revised 
version of their paper to a special issue of the journal Machine 
Vision & Applications.

Conference Chair:
Kevin Bowyer, Univ of South Florida (kwb@csee.usf.edu)

Program Committee:
Ron Arkin, Georgia Tech        Bir Bhanu, Univ of California at Riverside
Kim Boyer, Ohio State Univ     Horst Bunke, Univ of Berne (Switzerland)
Chuck Dyer, Univ of Wisconsin  Henrik Christensen, Univ of Aalborg (Denmark)
Ramesh Jain, Univ of Michigan  Dmitry Goldgof, Univ of South Florida
Howard Moraff, NSF             Worthy Martin, Univ of Virginia
Prasanna Mulgaonkar, S.R.I.    Arturo Rodriguez, IBM Multimedia Technology
Ishwar Sethi, Wayne State Univ Mubarak Shah, Univ of Central Florida
Wes Snyder, Wake Forest Univ   Louise Stark, Univ of South Florida
Susan Stansfield, Sandia Labs  George Stockman, Michigan State Univ
Tzay Young, Univ of Miami      Mohan Trivedi, Univ of Tennessee

------------------------------

From: milanese@cui.unige.ch
Date: 22 Jul 91 13:40
Subject: EURASIP Course on Intell. Systems for Signal and Image Understanding

                 EURASIP COURSE ON INTELLIGENT SYSTEMS
                  FOR SIGNAL AND IMAGE UNDERSTANDING
 
                   Udine (Italy), October 1-5, 1991
                           FINAL   PROGRAMME
 
Monday, September 30, 1991
 
3.00 p.m. - 7.00 p.m. : Registration of Participants
5.00 p.m.             : Official Opening and Invited Lecture
 
********************************************************************
 
Tuesday, October 1, 1991
 
Morning Session  (9.00 a.m. - 12.45 p.m.):
 
R. ROHWER (Centre for Speech Technology Research, University of Edinburgh)
 
* Neural Networks with Applications to Signal Processing (I);
 
Afternoon Session (2.00 - 5.45 p.m.):
 
R. ROHWER : Neural Networks with Applications to Signal Processing (II).
 
Evening Session (6.00 - 7.00 p.m.)   : Oral Presentations
 
********************************************************************
 
Wednesday, October 2, 1991
 
Morning Session:
 
C. TASSO  (Dept. of Informatics, University of Udine)
 
* AI Programming Paradigms: Functional, Object-Oriented,
  Logic Programming.
 
* Knowledge Representation Techniques (I):
  Knowledge-Based Systems (KBS); Knowledge Representation and Reasoning.
 
Afternoon Session:
 
C. TASSO : Knowledge Representation Techniques (II): Frames,
           Semantic Networks, Production Rules.
 
V. ROBERTO (Dept. of Informatics, University of Udine)
 
* Knowledge-based Approach to Signal Understanding: the Hybrid
  Knowledge Base. Reasoning with signal-objects: associative,
  qualitative.
 
Evening Session : Oral Presentations
 
******************************************************************
 
Thursday, October 3, 1991
 
Morning Session:
 
V. ROBERTO:
 
* Problem-solving Paradigms in Signal Understanding;
  Pattern-Directed Inference Systems; Blackboard-Based Systems.
 
* Meta-knowledge: planning and scheduling interpretive actions.
  Case Study: HORIZONS, a KBS for signal understanding in geology.
 
Afternoon Session:
 
C. LIEDTKE (Inst. Telecommunications and Information Processing,
            University of Hannover)
 
* Automated Configuration of Image Analysis Systems (I).
  Problem Analysis; Knowledge Representation Methods.
 
* Automated Configuration of Image Analysis Systems (II)
  Processing Modules and Strategy. System Architecture.
 
Evening Session:
 
C. LIEDTKE: Practical Results.
 
***********************************************************************
 
Friday, October 4, 1991
 
Morning Session:
 
E. TRUCCO (Dept. of Artificial Intelligence, Univ. Edinburgh)
 
* Vision in Man and Machine (I): Two Paradigms in Artificial Vision
(Witkin-Tenenbaum, Marr); sequential and distributed visual processing.
Recent trends.
 
* Vision in Man and Machine (II): Perceptual Organisation.
Processing and integration of multiple-scale visual information.
Geometrical Models and Matching. Case Study: the IMAGINE system.
 
Afternoon Session:
 
E. TRUCCO: Vision in Man and Machine (III): Perception by parts. Volumetric
Segmentation. Non-geometrical Modelling: Integration and use of combined
information. Case Studies.
 
 
Y. DEMAZEAU (Laboratory of Fundamental Informatics
             and Artificial Intelligence, Grenoble)
 
* Data Fusion and Dynamical World Modelling: Principles of
  Perceptual Fusion. Fusion of Numerical Data: estimation theories,
  entropy-based methods.
 
 
Social Event (8.00 p.m.)
 
*********************************************************************
 
Saturday, October 5, 1991
 
Morning Session:
 
Y. DEMAZEAU:
 
* Fusion of Symbolic Data. Several approaches: Bayesian, Dempster-Shafer,
  Fuzzy Logic, ATMS, Blackboard, Multi-Agent. Case Study: the SATURNE
  scene understanding system.
 
* Autonomous Agents: Structure and basic principles. Distributed
  Artificial Intelligence (DAI) systems: a survey. Belief revision.
  Biological systems, reactive systems. Applications.
 
2.00 p.m.: End of the Course.
 

For information concerning the Course, and available facilities
for students and younger researchers, please contact the
Course Coordinator: Prof. V. Roberto
Dept. of Informatics, University of Udine
Via Zanon, 6;  I-33100 Udine, Italy
Fax:  +39 432 510755
E-mail: roberto@uduniv.cineca.it

------------------------------

End of VISION-LIST digest 10.32
************************
