From vision@deimos.ads.com Fri Oct  7 19:09:51 1988
Received: from ads.com by deimos.ads.com (5.59/1.11)
	id AA06007; Fri, 7 Oct 88 19:06:21 PDT
Received: from deimos.ads.com by ads.com (5.59/1.17)
	id AA12113; Fri, 7 Oct 88 19:05:36 PDT
Received: by deimos.ads.com (5.59/1.11)
	id AA05949; Fri, 7 Oct 88 18:55:44 PDT
Message-Id: <8810080155.AA05949@deimos.ads.com>
Date: 07 Oct 1988 18:52:51-PST
From: Vision-List moderator Phil Kahn <Vision-List-Request@ads.com>
Errors-To: Vision-List-Request@ads.com
Reply-To: Vision-List@ads.com
Subject: Vision-List delayed redistribution
To: Vision-List@ads.com
Status: RO

Vision-List Digest	Fri Oct  7 18:52:52 PDT 1988

 - Send submissions to Vision-List@ADS.COM
 - Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

 range finders
 Stanford seminars
 Re: Circle Detection Literature
 report alert

----------------------------------------------------------------------

Date: Wed, 5 Oct 88 14:58:11 JST
From: Werman michael <werman%humus.Huji.AC.IL@CUNYVM.CUNY.EDU>
Subject: range finders

    I would appreciate receiving any information about purchasing
(<20k) or setting up a range finder that is suitable for research
in a laboratory setting.
    
    werman%humus.huji.ac.il@relay.cs.net
    werman@humus.bitnet


------------------------------

	Date: Wed, 5 Oct 88 13:20:19 PDT
	From: binford@Boa-Constrictor.Stanford.EDU.stanford.edu (Tom Binford)
	Subject: robotics seminar
	
	
	Oct 10, 1988
	
	
	      Ziv Gigus, UC Berkeley 
	      Robotics Seminar
	      October 10
	      Cedar Hall
	      4:15pm
	
	
	
	                        Abstract
	
	The aspect graph is one of the approaches to representing 3-D shape 
	for the purposes of object recognition.  In this approach, the viewing 
	space of an object is partitioned into regions, such that in each 
	region the topology of the line drawing of the object does not
	change.  The viewing data of an object is the partition of the viewing 
	space together with a representative view in each region.
	We present an efficient algorithm for computing the viewing data for
	line drawings of polyhedral objects under orthographic projection.
	For an object of size O(n) whose partition is of size O(m), the algorithm 
	runs in O(n^4\log n + m\log m) time.  Using a novel data structure, we 
	construct the set of all views in optimal O(m) time and space.
	
	
	
	
	
	
	      Robotics Seminar
	      October 17
	      Cedar Hall
	      4:15pm
	
	
	
	
	
			Bayesian Modeling of Uncertainty
			      in Low-Level Vision
	
				Richard Szeliski
			Schlumberger Palo Alto Research
	
	Over the last decade, many low-level vision algorithms have been 
	devised for extracting depth from intensity images.  The output of such
	algorithms usually contains no indication of the uncertainty associated
	with the scene reconstruction.  The need for such error modeling is
	becoming increasingly recognized, however, both because of the
	uncertainty inherent in sensing, and because of the desire to integrate
	information from different sensors or viewpoints.
	
	In this thesis, we develop a new Bayesian model which describes the
	uncertainty associated with dense fields such as depth maps.  The
	Bayesian model consists of three components:  a prior model, a sensor
	model, and a posterior model.  The prior model captures any a priori
	information about the structure of the dense field.  We construct this
	model by using the smoothness constraints from regularization to define
	a Markov Random Field.  The sensor model describes the behavior and
	noise characteristics of our measurement system.  We develop a number
	of sensor models for both sparse depth measurements and dense flow and
	intensity measurements.  The posterior model combines the information
	from the prior and sensor models using Bayes' Rule, and can be used as
	the input to later stages of processing.  We show how to compute 
	optimal estimates from the posterior model, and also how to compute the
	uncertainty (variance) in these estimates.
	
	This thesis applies Bayesian modeling to a number of low-level vision
	problems.  The main application is the on-line extraction of depth from
	motion.  For this application, we use a two-dimensional generalization
	of the Kalman filter to convert the current posterior model into a
	prior model for the next estimate.  The resulting incremental algorithm
	provides a dense on-line estimate of depth whose uncertainty and error
	are reduced over time.  In other applications of Bayesian modeling, we
	use the Bayesian interpretation of regularization to choose the optimal
	smoothing parameter for interpolation; we use a Bayesian model to
	determine observer motion from sparse depth measurements without
	correspondence; and we use the fractal nature of the prior model to
	construct multiresolution relative surface representations.  The
	uncertainty modeling techniques which we develop, and the utility of
	these techniques in various applications, support our thesis that
	Bayesian modeling is a useful and practical framework for low-level
	vision.
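
	A toy one-dimensional sketch of the incremental posterior-to-prior
	update described above may help fix ideas.  This is illustrative
	only -- a scalar Kalman measurement update, not Szeliski's actual
	two-dimensional dense-field formulation; all names and numbers are
	invented:

```python
# Toy 1-D illustration (NOT the thesis formulation): at each frame the
# posterior (depth estimate + variance) becomes the prior for the next
# measurement, so uncertainty shrinks as evidence accumulates.

def kalman_update(prior_mean, prior_var, z, sensor_var):
    """Combine a Gaussian prior with a Gaussian depth measurement z
    via Bayes' rule; returns the posterior mean and variance."""
    gain = prior_var / (prior_var + sensor_var)
    post_mean = prior_mean + gain * (z - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

mean, var = 0.0, 1e6                 # vague initial prior on depth
for z in [10.2, 9.8, 10.1, 9.9]:     # noisy depth measurements over time
    mean, var = kalman_update(mean, var, z, sensor_var=0.5)
```

	The variance after each update is strictly smaller than before it,
	which is the "uncertainty reduced over time" property claimed in
	the abstract.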
	
	

------------------------------

Date: Wed, 5 Oct 88 23:12:46 CDT
From: schultz@mmm.3m.com (John C Schultz)
Subject: Re: Circle Detection Literature
Organization: 3M Company - ES&T; St. Paul, MN


>From: "x.cao" <eecao%PYR.SWAN.AC.UK@cunyvm.cuny.edu>
>
>I am looking for information on image processing  algorithms
>and architectures especially suited to detection of circles
>in 2D digital images. In particular I am interested in parallel
>systems and real-time operation.
>

Just get or do a literature search on the Hough and/or Radon transforms
and their generalizations.  These techniques work, for circles, by
testing all possible combinations of radius and (x, y) location.  The
testing is performed by an array of "accumulators" which is
incremented at a particular (x, y, radius) location for each possible
edge point of a circle (binary image).  The obvious problem is that
arbitrary circle location and size require a 3D accumulator array
which must be searched for "peaks" to detect circles.  Generally
people restrict the radius to one or a small number of possible
values.
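
For concreteness, here is a rough sketch of that accumulator scheme
(Python/NumPy; the names, array layout, and the 90-sample angle
discretization are my own choices, not from any particular paper or
implementation):

```python
import numpy as np

def hough_circles(edge_points, radii, shape):
    """Vote for circle centers over a set of candidate radii.
    Returns a 3-D accumulator indexed as acc[radius_index, y, x]."""
    h, w = shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
    for x, y in edge_points:
        for ri, r in enumerate(radii):
            # Each edge point votes for every center that would put
            # a circle of radius r through it.
            cx = np.rint(x - r * np.cos(thetas)).astype(int)
            cy = np.rint(y - r * np.sin(thetas)).astype(int)
            ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
            np.add.at(acc, (ri, cy[ok], cx[ok]), 1)
    return acc
```

Peaks in the 3D accumulator give (radius, center) hypotheses; the
search for those peaks is exactly the cost noted above.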

Another technique is to pick three points on a binary curve and
calculate the center of the fitting circle.  Do this at incremental
distances around the curve and the peak "accumulation" will be the
circle center, and the peak "height" will be related to the circle
quality.  The algorithm complexity is determined by the scene
complexity and the degree of precision required.
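
The three-point step is just the standard circumcenter construction;
a sketch (names are mine):

```python
def circle_from_three_points(p1, p2, p3):
    """Center and radius of the circle through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear")
    # Circumcenter from the perpendicular-bisector equations.
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    r = ((x1 - ux) ** 2 + (y1 - uy) ** 2) ** 0.5
    return (ux, uy), r
```

Accumulating these centers as you walk triples around the curve gives
the peak described above.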

The last technique I know about is a bit different and could run
in hardware (I did it for 8 bits).  Quantize edge direction from, say,
a Sobel operator, into 8 directions.  Assign the normal direction to 1
bit of a byte, e.g. N = 1, NE = 2, E = 4, ...  Then dilate the edge
directions only in the appropriate direction.  e.g.
                                          1
                           1          2   1
       1                2  1           2  1
     2           ->      2 1   ->       2 1
                           2              3

As you can (maybe) see, the center of a perfect circle would have a
peak of 255 at the exact center and no other point would have that
value.  This approach can actually tell the difference between the
ellipse of an "O" and the circle of an "o".  The drawback is that
circles of radius R require R passes to detect and there is (always)
noise in the image.  The nice feature is that the peak height is
constrained and the processing is very amenable to hardware
implementation. 

Potentially higher precision angular discrimination could be achieved
if more than 8-bit operations were used.
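
A software sketch of the idea (the bit assignment and the step table
below are my own guesses at a reasonable encoding, not the hardware
version I actually built):

```python
import numpy as np

# Assumed encoding: bit k of each byte marks edge pixels whose quantized
# normal is DIRS[k], given as a (dy, dx) step (y increases downward) that
# is taken to point from the edge pixel toward the circle interior.
DIRS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
        (1, 0), (1, -1), (0, -1), (-1, -1)]  # N=1, NE=2, E=4, ... NW=128

def directional_dilate(dir_planes, passes):
    """Shift each direction's bit-plane one step along its own (dy, dx)
    per pass, OR-ing into the image.  After R passes, the center of a
    circle of radius R has collected all 8 bits (value 255)."""
    out = dir_planes.copy()
    for _ in range(passes):
        shifted = np.zeros_like(out)
        for k, (dy, dx) in enumerate(DIRS):
            plane = (out >> k) & 1
            moved = np.roll(np.roll(plane, dy, axis=0), dx, axis=1)
            shifted |= (moved << k).astype(out.dtype)
        out |= shifted
    return out
```

The byte-wide OR per direction is what makes this so amenable to
hardware: each pass is eight single-bit shifts and an OR.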

All these techniques are from the open literature but I do not
have my references available.

-- 
   john c. schultz         schultz@mmm.3m.UUCP          (612) 733-4047
           3M Center, Bldg 518-1-1, St. Paul, MN 55144-1000
  The opinions expressed herein are, as always, my own and not 3M's.

------------------------------

Date: Fri, 7 Oct 88 09:55:29 +0100
From: prlb2!ronse@uunet.UU.NET (Christian Ronse)
Subject: report alert


REPORT ALERT: Mathematical Morphology
=====================================

The Algebraic Basis of Mathematical Morphology; Part I: Dilations and Erosions

	H.J.A.M. Heijmans, CWI (henkh@mcvax.UUCP)
	C. Ronse, PRLB (ronse@prlb2.UUCP)

ABSTRACT: Mathematical morphology is a theory of image transformations and
functionals deriving its tools from set theory and integral geometry. This
paper deals with a general algebraic approach which both reveals the
mathematical structure of morphological operations and unifies several
examples into one framework. The main assumption is that the object space is a
complete lattice and that the transformations of interest are invariant under
a given abelian group of automorphisms on that lattice. It turns out that the
basic operations called dilation and erosion are adjoints of each other in a
very specific lattice sense and can be completely characterized if the
automorphism group is assumed to be transitive on a sup-generating subset of
the complete lattice. The abstract theory is illustrated by a large variety of
examples and applications.

AMS 1980 Mathematics Subject Classification: 68U10, 68T10, 06A23, 06A15.

KEYWORDS: mathematical morphology, image processing, Minkowski addition and
subtraction, complete lattice, automorphism, duality, dilation, erosion,
adjunction, Galois connection, upper- and lower adjoint,
translation-invariance, abelian automorphism group, sup-generating family,
increasing operator, Matheron's theorem, grey-level function, additive
(multiplicative) structuring function.


------------------------------

End of VISION-LIST
********************

