Date: 07 Dec 93 11:46:04-PST
From: Vision-List moderator Phil Kahn <Vision-List-Request@TELEOS.COM>
Errors-to: Vision-List-Errors@TELEOS.COM
Reply-to: Vision-List@TELEOS.COM
Subject: VISION-LIST digest 12.56
To: Vision-List@TELEOS.COM

VISION-LIST Digest    Tue Dec 07 11:46:04 PST 93     Volume 12 : Issue 56

 - ***** The Vision List host is TELEOS.COM *****
 - Send submissions to Vision-List@TELEOS.COM
 - Vision List Digest available via COMP.AI.VISION newsgroup
 - If you don't have access to COMP.AI.VISION, request list 
   membership to Vision-List-Request@TELEOS.COM
 - Access Vision List Archives via anonymous ftp to FTP.TELEOS.COM

Today's Topics:

 Moments from boundary.
 Request for suggestions of good character recognition texts
 Feature selection software available
 Image processing
 Announcing a new book
 Workshop on Motion of Nonrigid & Articulate Objects
 CFP: The 1994 AAAI Robot Building Laboratory (RBL-94)
 CALL FOR PAPERS ICSPAT'94 - DSP WORLD EXPO.

----------------------------------------------------------------------

Date:  1 Dec 93 15:41 -0800
From: Esfandiar Bandari <bandari@cs.ubc.ca>
Subject: Moments from boundary.

Please post this if appropriate.

A while back I inquired about finding the moments of a region if the
boundary was given (this is in terms of integer coordinates of the pixels).
There were quite a few replies, as well as requests for redirection of
answers etc. 

Here is a possible very simple procedure (motivated by a very recent
request).  Given the pixels inside a region, all the moments are
easily calculated by standard methods in Horn's (BKPH) or Ballard and
Brown's (B&B) books.  So here is the solution:

        . Find the min and max values of the rows and columns of the
boundary pixels.
        . Put a rectangular box around the curve, so that the curve does
not touch the box -- i.e., the boundary of the box runs from
(xmin-1, ymin-1) to (xmax+1, ymax+1).
        . Then use simple recursive region-growing techniques (making sure
you do not go through the boundary at diagonally attached boundary pixels)
to distinguish ground from figure, starting from one of the rectangle's
corners.
        . Using the algorithm in BKPH or B&B, find the moments.
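A minimal sketch of the procedure, assuming the boundary is given as a
set of integer (x, y) pixels (the fill routine and moment formulas here
are illustrative choices, not code from BKPH or B&B):

```python
def region_moments(boundary, max_order=2):
    """Moments of the region enclosed by `boundary`, a set of integer
    (x, y) pixels, computed by flood-filling the background."""
    xs = [x for x, y in boundary]
    ys = [y for x, y in boundary]
    xmin, xmax = min(xs) - 1, max(xs) + 1
    ymin, ymax = min(ys) - 1, max(ys) + 1

    # Grow the background from a corner of the enclosing box.
    # 4-connectivity ensures the fill cannot slip between two
    # diagonally attached boundary pixels.
    background = set()
    stack = [(xmin, ymin)]
    while stack:
        x, y = stack.pop()
        if (x, y) in background or (x, y) in boundary:
            continue
        if not (xmin <= x <= xmax and ymin <= y <= ymax):
            continue
        background.add((x, y))
        stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])

    # Figure = everything in the box the background fill never reached
    # (the interior plus the boundary pixels themselves).
    figure = [(x, y)
              for x in range(xmin, xmax + 1)
              for y in range(ymin, ymax + 1)
              if (x, y) not in background]

    # Discrete moments m_pq = sum over the region of x^p * y^q.
    return {(p, q): sum(x ** p * y ** q for x, y in figure)
            for p in range(max_order + 1)
            for q in range(max_order + 1)}
```

For the boundary of a 5x5 square this gives area m00 = 25 and centroid
(m10/m00, m01/m00) = (2, 2), counting boundary pixels as part of the figure.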

If someone already recommended this and I missed it (I doubt it,
though), my sincere apologies.  I hope this helps everyone who has
been looking into this topic.

Personally, I ended up using some symmetry properties of the curves
that I was looking at to get my answer very fast.

					--- Esfandiar

------------------------------

Date: Mon, 6 Dec 1993 16:34:08 GMT
From: lowell@monet.uwaterloo.ca (Lowell Winger)
Organization: University of Waterloo
Subject: request for suggestions of good character recognition texts

Hi,

I've just finished implementing an offline printed-character
recognition scheme using templates.
I work with very poor-quality images, so I have to do some fancy
pre-processing, thresholding, rotation, etc. before recognition.
My results are OK if I get a large enough training set for each
particular font.
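For readers unfamiliar with the template approach, the core of such a
scheme reduces to scoring each glyph against stored class templates.  A
minimal sketch using normalized correlation on binary images (purely
illustrative, not the poster's actual implementation):

```python
import math

def match_score(glyph, template):
    """Normalized correlation between two equal-size binary images
    (lists of rows of 0/1); 1.0 means a perfect match."""
    a = [p for row in glyph for p in row]
    b = [p for row in template for p in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def classify(glyph, templates):
    """Pick the label whose template correlates best with the glyph."""
    return max(templates, key=lambda label: match_score(glyph, templates[label]))
```

Real systems put the pre-processing the poster mentions (thresholding,
deskewing, size normalization) in front of this matching step.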

I'd like to test out a bunch of alternative schemes. 
I've read a lot of papers about omni-font recognition using
feature extraction, and graph based recognition algorithms.

What I need is a really up-to-date book that goes into the details of
implementing this and other modern character recognition schemes.
There are a lot of commercial products out there presumably using this
type of technique for OMNIFONT recognition, so there's got to be a
good text somewhere.

Please mail me about any good character recognition texts you know of.

Thanks.
Lowell Winger
lowell@monet.uwaterloo.ca

------------------------------

Date: Fri, 3 Dec 1993 14:09:48 +0000
From: tr@fct.unl.pt (Thomas W. Rauber)
Subject: Feature Selection Software Available

*****      FEATURE SELECTION SOFTWARE AVAILABLE       ****
*****                                                 ****
*****                tooldiag 1.4                     ****

MOTIVATION:

Classifiers that use ALL possible features are in general
 - more complex
 - less reliable
than classifiers that use only a subset of all possible features.
For instance, a neural network classifying a 10-class problem from its
512-valued Fourier spectrum with a fully interconnected F---(2F+1)---C
three-layer feedforward architecture has 1547 neurons, where F is the
number of features and C is the number of classes.

If only the 10 most relevant features were selected, the complexity
of the net would drop to 41 neurons.  Furthermore, its classification
accuracy might even increase.
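The unit counts follow from summing the three layers of the
F---(2F+1)---C architecture; a quick arithmetic check:

```python
def neuron_count(F, C):
    """Total units in a fully connected F--(2F+1)--C feedforward net:
    F input units, 2F + 1 hidden units, C output units."""
    return F + (2 * F + 1) + C

print(neuron_count(512, 10))  # all 512 spectral features
print(neuron_count(10, 10))   # only the 10 selected features
```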


FEATURE SELECTION:

The software package "tooldiag" performs feature selection.  Many concepts
of the book:
  Devijver, P. A., and Kittler, J., "Pattern Recognition --- A Statistical
  Approach," Prentice/Hall Int., London, 1982.
are implemented, including the optimal BRANCH & BOUND search strategy,
together with several different selection criteria.
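As a rough illustration of what such a subset search optimizes, here is
an exhaustive stand-in for the branch & bound strategy, scoring subsets
of two-class data by the squared distance between the class means (a toy
criterion of my own, not one of the package's criteria from Devijver &
Kittler):

```python
from itertools import combinations

def best_subset(samples, labels, d):
    """Exhaustively pick the d features that best separate two classes
    (labels 0 and 1), scored by squared distance between the class
    mean vectors.  Branch & bound prunes this same search without
    losing optimality when the criterion is monotone."""
    n_features = len(samples[0])

    def class_mean(cls, feats):
        pts = [s for s, lab in zip(samples, labels) if lab == cls]
        return [sum(p[f] for p in pts) / len(pts) for f in feats]

    def score(feats):
        m0, m1 = class_mean(0, feats), class_mean(1, feats)
        return sum((a - b) ** 2 for a, b in zip(m0, m1))

    return max(combinations(range(n_features), d), key=score)
```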


ADDITIONAL CAPABILITIES:

 - An error estimation can be performed, using the Leave-One-Out method
   and a K-Nearest-Neighbor classifier.
 - A learning module (Q*) is included that has the same functionality
   as LVQ (Learning Vector Quantization).
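The leave-one-out / k-NN combination above fits in a few lines.  A toy
sketch (not the package's code; Euclidean distance and majority vote
assumed):

```python
def loo_knn_error(samples, labels, k=1):
    """Leave-one-out error estimate with a k-nearest-neighbour
    classifier.  `samples` is a list of feature vectors, `labels`
    the class of each sample."""
    errors = 0
    for i, x in enumerate(samples):
        # Rank all *other* samples by squared Euclidean distance to x.
        others = [(sum((a - b) ** 2 for a, b in zip(x, samples[j])), labels[j])
                  for j in range(len(samples)) if j != i]
        others.sort()
        votes = [lab for _, lab in others[:k]]
        predicted = max(set(votes), key=votes.count)
        errors += predicted != labels[i]
    return errors / len(samples)
```

With k = 1 this reduces to the classic nearest-neighbour leave-one-out
estimate.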


INTERFACING:

The system has interfaces to

	- LVQ_PAK - The implementation of the Learning Vector Quantization
			(see FAQ of comp.ai.neural-nets)
	- SNNS - The Stuttgart Neural Network Simulator
		A pattern file (.pat) can be generated, using only the selected
		subset of features.  In addition, a simple F---(2F+1)---C
		backprop net is generated (.net).

The data file format is compatible with that of LVQ_PAK.
2-D graphics are displayed with the help of the GNUPLOT public domain
plotting package.


RESTRICTIONS:

1.) Only continuous (or ordered discrete) numerical features
2.) No missing values


HOW TO GET IT:

Documentation and the C source code are provided.  The system was tested
on many platforms (IBM, DEC, NeXT, and Sun workstations; DOS) and is
very easy to install.

Location: Anonymous FTP at:
SERVER: ftp.fct.unl.pt
DIRECTORY:  pub/di/packages
FILE:  tooldiag-1.4.tar.Z


Enjoy

|         Thomas W. Rauber         | BITNET/Internet: tr@fct.unl.pt      |
|__________________________________|                                     |
|   Universidade Nova de Lisboa    | Fax:   (+351) (1) 295-7786          |
|   Intelligent Robotics Center    | Phone: (+351) (1) 295-7787          |
|  2825 Monte Caparica, PORTUGAL   |                                     |

------------------------------

Date: 7 Dec 1993 08:30:30 GMT
From: eng00972@leonis.nus.sg (Yee Kai Min)
Organization: National University of Singapore
Subject: image processing

	I'm currently doing my final year in Mechanical Engineering at 
the National University of Singapore. My final year project in Computer 
vision requires me to identify triangles and parallelograms of different 
shapes using a frame grabber. My problem now is how to detect the corners
(coordinates) of the image of a triangle or parallelogram. If anyone
has ideas on how to detect the corners effectively, please kindly help.
	Thank you.
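[One common line of attack, sketched here as a starting point rather
than a definitive answer: trace the boundary of the thresholded shape,
then flag points where the direction of travel turns sharply.  For
triangles and parallelograms the surviving turn points are the corners.
This toy version assumes the boundary is already available as an
ordered list of (x, y) points:]

```python
import math

def find_corners(contour, span=3, angle_thresh=math.radians(45)):
    """Flag contour points where the direction of travel turns by more
    than `angle_thresh` between the incoming and outgoing segments,
    each measured over `span` points.  `contour` is a closed, ordered
    list of (x, y) points."""
    n = len(contour)
    corners = []
    for i in range(n):
        x0, y0 = contour[(i - span) % n]
        x1, y1 = contour[i]
        x2, y2 = contour[(i + span) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)   # incoming direction
        a2 = math.atan2(y2 - y1, x2 - x1)   # outgoing direction
        # Signed turn angle wrapped into (-pi, pi].
        turn = abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi)
        if turn > angle_thresh:
            corners.append((x1, y1))
    return corners
```

Points flagged near each vertex can then be clustered and averaged to
get one coordinate per corner; edge midpoints produce no turn and are
never flagged.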

------------------------------

Date: Mon, 6 Dec 93 11:22:45 EST
From: kumar@sarnoff.com (Rakesh Kumar x2832)
Subject: Announcing a new book

Hi,

     I wish to make the community aware of an interesting new computer
vision textbook with lots of pretty pictures.  The book is entitled:

TITLE: "A GUIDED TOUR OF COMPUTER VISION" 
AUTHOR:  Vishvjit S. Nalwa of AT&T Bell Labs.
Publisher: ADDISON-WESLEY Publishing Company. (Ph. No. 1-617-944-3700)
ISBN: 0-201-54853-4
Price: 34.95 US dollars

The book is written at three levels of detail.  For the curious browser,
there are figures which, together with their captions, are self-explanatory
and form a complete story.  For the reader with greater interest, there is
the text.  Finally, for the serious student, there are appendices and
footnotes that delve into mathematical detail.  There are chapters on
Image Formation, Edge Detection and Image Segmentation, Line-Drawing
Interpretation, Shading, Texture, Stereo, Motion, and Shape Representation.

Rakesh (Teddy) Kumar
David Sarnoff Research Center               Phone: 609-734-2832
CN5300, Princeton, NJ-08543                 EMAIL: kumar@sarnoff.com

------------------------------

Date: 3 Dec 1993 15:39:14 -0600
From: jka@emx.cc.utexas.edu (J. K. Aggarwal)
Organization: The University of Texas - Austin
Subject: Workshop on Motion of Nonrigid & Articulate Objects

                                 PRELIMINARY
                         C A L L   F O R   P A P E R S

                         IEEE Computer Society Workshop
                                      on
                     Motion of Nonrigid & Articulate Objects

                        Austin Marriott at the Capitol
                                Austin, Texas
                            November 11-12, 1994

CONFERENCE CO-CHAIRS:

J. K. (Jake) Aggarwal			Thomas S. Huang
Computer & Vision Research Center	Coordinated Science Laboratory
University of Texas at Austin		University of Illinois
Austin, Texas 78712-1084		Urbana, IL  61801
jka@emx.cc.utexas.edu			huang@uicls.csl.uiuc.edu

PROGRAM COMMITTEE:

K. Aizawa, University of Tokyo
P. Anandan, Sarnoff
N. Ayache, INRIA
K. Bowyer, University of South Florida
J. Duncan, Yale University
D. Goldgof, University of South Florida
W. Martin, University of Virginia
D. Metaxas, University of Pennsylvania
A. Mitiche,  INRS-Telecommunications
A. Pentland, MIT
J. Prince, Johns Hopkins University
H. S. Sawhney, IBM Almaden
D. Terzopoulos, University of Toronto
Y. F. Wang, University of California, Santa Barbara

WORKSHOP PROGRAM:

The workshop will focus on the image-based analysis of
nonrigid motion, including the motion of multibody, articulate,
deformable and fluid objects.

The following topics are suggested; however other topics are 
also welcome.

Acquisition of articulate shape models
Analysis-by-synthesis techniques
Analysis of dynamic medical images
Computation of motion fields
Constrained multibody dynamics
Deformable models
Human motion analysis (facial motion, gesture, gait)
Model-based methods for  shape and motion estimation 
Nonrigid and articulated object recognition, tracking
	and motion analysis
Part identification
Physics-based modeling techniques
Reasoning about object functions
Recursive estimation techniques
Scenes with multiple moving independent objects


SUBMISSION OF PAPERS:

Three copies of the full paper, including figures and drawings (12 
point type, double spaced, not exceeding 20 pages in length) should be 
submitted to J. K. Aggarwal at the address above.  Papers must be 
received no later than  March 15, 1994, to be considered.  Notification 
of acceptance will be sent to the authors by June 1, 1994, and full 
camera-ready papers must be returned by July 15, 1994.  


WORKSHOP ENVIRONMENT:

The workshop will be held in the Austin Marriott at the Capitol, 
located in downtown Austin.  Noted for its accommodations, service, and 
amenities, the Marriott is one of the six hotels affiliated with the 
First IEEE International Conference on Image Processing which will be held 
November 13-16, 1994 at the Austin Convention Center.  The hotel offers 
an indoor/outdoor pool, health club, and nearby golf, tennis, and water 
sports.  The Austin Convention Center, the 6th Street entertainment 
district, and the UT Austin campus are within walking distance of the 
hotel.  


LOCAL ARRANGEMENTS:

For additional information on the workshop or local arrangements, 
please contact:

Ms. Debi Paxton
Computer and Vision Research Center
University of Texas  (ECE Dept.)
Austin, Texas 78712-1084

Phone:	512/471-3259
Fax:	512/471-5532
Email:	dpaxton@emx.cc.utexas.edu


------------------------------

Date: Fri, 3 Dec 93 23:00:10 EST
From: wlim@gdstech.grumman.com (Willie Lim)
Subject: CFP: The 1994 AAAI Robot Building Laboratory (RBL-94)

			 CALL FOR PARTICIPATION

	    The 1994 AAAI Robot Building Laboratory (RBL-94)


Introduction

   If you missed the fun and excitement of participating in the Robot
Building Event of AAAI-93, here is your chance to participate in its
formal successor: the 1994 AAAI Robot Building Laboratory (RBL-94), to
be held in conjunction with AAAI-94 in Seattle, Washington.

   Never built a robot before?  No problem!  RBL-94 will provide you the
opportunity to build one using a variety of sensors, motors, a
micro-controller board, and toy parts.  By programming it yourself in C
or Lisp, you will endow your robot with its own personality and smarts
to compete against others in a series of contests.

   So you have been working in AI or developing theories for robots?
Ever wonder how fast you can build a working robot to test out your
ideas?  RBL-94 is your answer.  It is a facility for rapid prototyping
of small robots.  These robots may lack industrial-strength precision
and repeatability.  They may also lack the reasoning power of larger
robots.  However, they make up for it by being cheaper, easier, and
faster to build.  They are also good replacements for computer
simulations and theories, forcing you to deal with the real world --
imperfect sensors, motors, wheels, and finite energy sources (i.e.,
batteries) -- and yes, things do wear out and break in the real world.
See what you can do with your ideas with real working robots.  See how
much of your experience you can impart to your robot.

   Perhaps if you had done things a little differently you might have
won the AAAI-93 robot building event.  Perhaps you should have built a
little more aggressiveness into your robot.  Maybe you should not have
used that world map.  Or maybe you could have replaced that
wall-following behavior with something neater.  Well, here is your
second chance.  Participate in RBL-94 and build it right; build to win.

   Can your robot outwit the others?  You may discover novel and neat
ways to do things.  Think of the excitement, the possibilities, the fun
you will have at RBL-94.  So do not miss it; participate in RBL-94.


Structure of RBL-94

   RBL-94 is composed of three major building blocks: the Jump Start
Session, the laboratory, and the contests.  We strongly recommend that
all participants attend the half-day Jump Start Session given by members
of the organizing committee on Sunday morning, July 31, 1994.  The Jump
Start Session will focus exclusively on providing the necessary
background and practical advice on robot building.

   RBL-94 participants must belong to a team of 4 (3 is permitted).
Participants should form teams as quickly as possible.  Those who are
unable to form their own team will be grouped into teams by the
organizing committee.

   The laboratory will begin immediately following the Jump Start
Session.  Robot kits will be distributed to teams at this time.
Laboratory work continues (round the clock as necessary) until 2pm
Thursday, August 4, 1994, when the final contest starts.

   Each team competes in a series of contests.  These contests will take
place daily, with the final contest to be held the afternoon of Thursday,
August 4, 1994.

   Each contest is designed to require teams to build more and more
capabilities into their robot.  The contest-paced robot evolution is
designed to help teams effectively manage their development time.  It
ensures early feedback, gives teams a chance to catch up, maximizes the
number of robots ready for the final (most difficult and exciting)
contest, and improves participant satisfaction.  The final contest will
include random elements (e.g., obstacles, doors, etc.), designed to
encourage robust robot solutions and cooperative and/or adversarial
robot interaction.  Detailed contest formats and rules will be provided
at a later date.


Preliminary Schedule

    The following is the preliminary schedule for RBL-94. The organizing
committee reserves the right to revise it.


      Sunday, July 31      9:00 - 12:30   RBL-94 Jump Start Session
      Sunday, July 31	     13:00	  RBL-94 starts
      Monday, August 1	     18:00	  First contest
      Tuesday, August 2	     18:00	  Second contest
      Wednesday, August 3    18:00	  Third contest
      Thursday, August 4     14:00	  Final contest



Organizing Committee


William Lim (Chair)       Grumman Corporation
                          Phone: (516) 575-4909 (voice), (516) 346-3670 (fax)
                          Email: wlim@grumman.com

Jeffrey S. Graham         Woodbridge, Virginia
                          Phone: (703) 221-3677 (voice)
                          Email: j85@delphi.com

Henry Hexmoor             SUNY at Buffalo
                          Phone: (716) 645-3197 (voice), (716) 645-3464 (fax)
                          Email: hexmoor@cs.buffalo.edu

Gerhard K. Kraetzschmar   Bavarian Research Center for
                          Knowledge-Based Systems (FORWISS)
                          Phone: +49-9131-691-193(voice), +49-9131-691-185(fax)
                          Email: gkk@forwiss.uni-erlangen.de

------------------------------

Date: Thu, 2 Dec 1993 01:16:51 GMT
From: DSPWorld@world.std.com (Amnon Aliphas)
Organization: The World Public Access UNIX, Brookline, MA
Subject: CALL FOR PAPERS ICSPAT'94 - DSP WORLD EXPO.

                             CALL FOR PAPERS  -   ICSPAT '94

    International Conference on Signal Processing Applications & Technology

                            featuring DSP World Expo.

	 October 18-21, 1994, Grand Kempinski Hotel - Dallas, Texas


ITF Product Reviewer
~~~~~~~~~~~~~~~~~~~~    |      Application Areas:  Aerospace
Mr. Nicolas Mokhoff	|
Electronic Eng. Times	|			   Audio
USA                     |
                        |                          Automotive
Technical Review Comm.	|
~~~~~~~~~~~~~~~~~~~~~~  |		           Communications
Dr. David Almagor       |
National Semiconductor 	|	                   Consumer Products
Israel		        |
                	|	                   DSP Machines
Mr. Pradeep Bardia      |
Sonitech International  |                          DSP Software
USA                	|
                        |       		   DSP Technology
Dr. Aziz Chihoub        |    
Siemens Corporate Res.	|			   Geophysics
USA	                |
		        |                          Image Processing
Dr. Ron Crochiere       |
AT&T Bell Laboratories  |                          Industrial Control
USA		        |
			|			   Instrumentation & Testing
Dr. Mohamed El-Sharkawy |
Indiana U./Purdue U.    |                          Medical Electronics
USA			|			
			|			   Multimedia
Dr. Joseph B. Evans     |    	                   
University of Kansas    |			   Neural Networks
USA			|
			|             		   Parallel Processing
Dr. Hyeong-Kyo Kim	|
ETRI	                |			   Processor Architectures
Korea			|
			|			   Radar
Mr. Gerald McGuire	|
Analog Devices		|			   Radio SATCOM & NAV
USA			|
			|			   Robotics
Dr. Bruce Musicus	|
Bolt Beranek & Newman	|			   Speech Processing
USA			|
			|			   Telephony
Dr. Panos Papamichalis  |
Texas Instruments	|			   Underwater/Sonar
USA			|
                        |			   VLSI Architectures
Mr. Robert A. Peloso    |
Panasonic, ATVL.        |			   Virtual Reality
USA                     |
                        |			   & Other Applications
Dr. Matt Perry          |
Motorola                |
USA                     | 
			| Mail, Fax or E-Mail 400-Word Abstract by April 15, 1994
Dr. William Ralston	|
The Mitre Corporation	|	DSP ASSOCIATES		Tel: (617) 964-3817
USA			|	18 Peregrine Rd.	Fax: (617) 969-6689
			|	Newton, MA 02159
Dr. James B. Riley	|
MIT - Lincoln Lab.	|	e_mail: DSPWorld@world.std.com
USA			|
			|
Mr. Vojin Zivojnovic	|
RWTH			|
Germany			|

Sponsored by:	   DSP Associates -- Electronic Engineering Times

------------------------------

End of VISION-LIST digest 12.56
************************
