From newshub.ccs.yorku.ca!ists!torn!utgpu!utcsri!rutgers!uwm.edu!cs.utexas.edu!sun-barr!olivea!uunet!trwacs!erwin Tue Jun 23 13:20:55 EDT 1992
Article 6281 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!torn!utgpu!utcsri!rutgers!uwm.edu!cs.utexas.edu!sun-barr!olivea!uunet!trwacs!erwin
From: erwin@trwacs.fp.trw.com (Harry Erwin)
Newsgroups: comp.ai.philosophy
Subject: Re: Spectral Data Processing
Message-ID: <635@trwacs.fp.trw.com>
Date: 17 Jun 92 16:34:35 GMT
References: <634@trwacs.fp.trw.com> <1992Jun16.150119.18090@mp.cs.niu.edu>
Organization: TRW Systems Division, Fairfax VA
Lines: 33

Again, thanks to everyone who responded. A model of vision that involves
diffuse holography requires interference between a diffracted signal beam
and an undiffracted reference beam, i.e., phase data has to be acquired by
the eye and passed to the sensory cortex. I think it is clear that there
is no realistic mechanism for doing that. A possible alternative is that
the saccadic motion of the eyeball serves as the reference standard, but
then I don't see how the diffracted signal beam could be created--instead
you would get periodic motion of an image, and the only thing that could
cause the signal beam to differ from the reference beam would be motion of
the object(s) being viewed...
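For intuition, here is a quick NumPy sketch (toy numbers of my own, not a
model of the eye) of why the reference beam matters: a square-law detector
records |S + R|^2, and only the cross term in that intensity carries the
relative phase of the signal beam. Without the undiffracted reference, the
detector sees |S|^2 and the phase data is simply gone.

```python
import numpy as np

# Toy signal beam with varying phase, unit-amplitude reference beam.
phase = np.linspace(0, 2 * np.pi, 8, endpoint=False)
S = np.exp(1j * phase)        # diffracted signal beam
R = np.ones_like(S)           # undiffracted reference beam (phase 0)

# A square-law detector records intensity, not field:
intensity_with_ref = np.abs(S + R) ** 2   # |S|^2 + |R|^2 + 2*Re(S*conj(R))
intensity_alone = np.abs(S) ** 2          # constant 1.0 -- phase is lost

# Subtracting the two self terms leaves the interference cross term,
# which recovers the phase as 2*cos(phase):
cross = intensity_with_ref - intensity_alone - np.abs(R) ** 2
```

The point is that `intensity_alone` is flat no matter what the phase does,
while `cross` tracks it exactly; that is the mechanism I don't see the retina
having.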

This is leading to a model of sensory processing that allows phase,
amplitude, and texture (spatial frequency!) data to be associated with
specific objects but does not involve holography in the strict sense.
Instead it's more like the concept of adaptive beam forming that is used
in radar and sonar systems. Rather than objects being imaged automatically
via a Fourier transform on data written into the sensory cortex, the
object images are identified by a series of processes, some operating in
the frequency domain and some in the spatial domain. The brain doesn't do
holography, but it does do wavelet transforms for specific purposes.
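To make the wavelet idea concrete, here is a minimal one-level Haar
decomposition in NumPy (the simplest orthonormal wavelet; the function names
are mine). It splits a signal into a coarse approximation band and a detail
band, each half the original length, and reconstructs perfectly, which is the
kind of subband machinery I have in mind.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform:
    split a signal (even length) into approximation and detail subbands."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass: local averages
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass: local differences
    return approx, detail

def haar_inverse(approx, detail):
    """Perfect reconstruction of the signal from its two subbands."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x
```

Because the basis is orthonormal, energy is preserved across the two
subbands, and iterating `haar_step` on the approximation band gives the usual
multiresolution pyramid.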

There are some problems in defining the processing in the frequency
domain, since we would like to be able to decompose an image into
subbands, each of which is processed independently. Unfortunately, there
is no orthonormal basis that allows decomposition into translation,
rotation, and dilation components. Hence, you can have two pictures of the
same object and not recognize that they are the same. Has anyone been
looking at this issue?
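A small numerical illustration of the asymmetry I mean (again my own toy
example): the Fourier magnitude spectrum absorbs translation completely, since
a shift only rotates the phase, but dilation changes the magnitude spectrum
itself, so no such invariance comes for free.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)

# Translation: a circular shift leaves the magnitude spectrum untouched,
# because the shift appears only as a linear phase ramp.
mag = np.abs(np.fft.fft(x))
mag_shifted = np.abs(np.fft.fft(np.roll(x, 13)))

# Dilation: even a crude rescaling (decimation by 2) changes both the
# length and the content of the spectrum -- it is not absorbed the way
# translation is.
mag_dilated = np.abs(np.fft.fft(x[::2]))
```

Here `mag` and `mag_shifted` agree to machine precision, while `mag_dilated`
is a different object altogether; rotation in 2-D behaves like a further,
independent group action, which is why a single orthonormal basis can't
factor out all three at once.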

Cheers,
-- 
Harry Erwin
Internet: erwin@trwacs.fp.trw.com
