From newshub.ccs.yorku.ca!ists!torn!utcsri!rpi!usc!cs.utexas.edu!sun-barr!olivea!uunet!trwacs!erwin Tue Jun 23 13:21:00 EDT 1992
Article 6291 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:6291 comp.ai.neural-nets:3268
Path: newshub.ccs.yorku.ca!ists!torn!utcsri!rpi!usc!cs.utexas.edu!sun-barr!olivea!uunet!trwacs!erwin
From: erwin@trwacs.fp.trw.com (Harry Erwin)
Newsgroups: comp.ai.philosophy,comp.ai.neural-nets
Subject: Spectral Data Processing
Message-ID: <636@trwacs.fp.trw.com>
Date: 17 Jun 92 22:02:08 GMT
Followup-To: comp.ai.philosophy
Organization: TRW Systems Division, Fairfax VA
Lines: 59


Robert Morris comments:
>...
>I didn't see your original posting, so I am not sure of the context of
>your question 2. But human vision is widely regarded as having
>independent mechanisms for form and color. Thus for form vision, the
>relevant spectral issues concern spatial frequency, not
>electromagnetic frequency. There is a large literature on spatial phase
>processing. The single most interesting class of results surrounds
>"hyperacuities", i.e. acuities which seem to violate the Nyquist
>sampling theorem. For example, the optics of the eye and the spacing
>of retinal cones implies a maximum spatial frequency response of 60
>cycles per degree of visual angle, i.e. separation acuity of about 1
>minute of visual angle (and this agrees with psychophysical
>measurement). But vernier acuity --- the ability to determine the
>offset of one line from another, or to detect the phase difference in
>a pair of cosine gratings---is 5-10 times finer than the separation
>acuity (as measured, for example, by reliable discrimination of a
>cosine grating from mean grey). Post-retinal neural processing models
>can and do account for these kinds of phenomena.
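The arithmetic quoted above can be checked numerically. A minimal sketch
(the variable names are mine; the figures are the ones quoted):

```python
# A Nyquist-limited maximum of 60 cycles per degree implies a grating
# period, and hence a separation acuity, of about 1 arc minute;
# vernier (hyper)acuity is quoted as 5-10 times finer than that.
max_freq_cpd = 60.0                           # cycles per degree of visual angle
period_arcmin = 60.0 / max_freq_cpd           # one cycle = 1 arc minute
separation_acuity_arcmin = period_arcmin
vernier_acuity_arcsec = [separation_acuity_arcmin * 60.0 / k
                         for k in (5, 10)]    # 5x and 10x finer, in arc seconds
print(separation_acuity_arcmin)               # 1.0 arcmin
print(vernier_acuity_arcsec)                  # [12.0, 6.0] arcsec
```

So hyperacuity on the order of 6-12 arc seconds from a ~1 arc minute
sampling grid, which is why post-retinal processing is needed to explain it.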

>A good place to read about the current state of the subject is in the
>excellent survey "Visual Perception: The Neurophysiological
>Foundations", edited by Lothar Spillmann and John Werner, Academic
>Press, 1990.

>Bob Morris

I appreciate Bob Morris's useful comments. They caused me to do some
thinking about visual information processing, which I propose to post
here. The following is speculative, but suggestive:

1. Sensory data processing has two modes, scrutinization and scanning. You
scrutinize for detail and scan for texture (spatial frequency). Scanning
is actually more effective if you defocus. (In Zen, this is called
"no-mind" if taken to an extreme.) Driving home this afternoon, I watched
myself maintain formation with traffic using aural scanning, and I found I
could actually "see" the Gabor functions associated with each sound
object.
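For readers who haven't met one: a Gabor function is a sinusoidal carrier
under a Gaussian envelope, the standard model of early visual receptive
fields. A minimal 1-D sketch (parameter choices are mine, purely for
illustration):

```python
import numpy as np

def gabor(x, sigma=1.0, freq=1.0, phase=0.0):
    """1-D Gabor function: a cosine carrier under a Gaussian envelope."""
    envelope = np.exp(-x**2 / (2.0 * sigma**2))   # Gaussian localization
    carrier = np.cos(2.0 * np.pi * freq * x + phase)
    return envelope * carrier

x = np.linspace(-3.0, 3.0, 601)
g = gabor(x, sigma=0.5, freq=2.0)
# At x = 0 with phase 0 the function equals 1; away from 0 the
# oscillation decays under the Gaussian envelope.
```

Such a function is jointly localized in space and in spatial frequency,
which is what makes it a plausible primitive for both modes above.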

2. What this means is that Pribram's "holonomic" brain theory may actually
apply to the scanning mode of sensory data processing. I'm still quite
dubious about "holographic" processing, but it does appear that adaptive
beamforming has a role to play.
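To fix ideas on the beamforming side, here is a minimal *non-adaptive*
delay-and-sum beamformer (an adaptive one would additionally weight the
channels from the data; everything below, names and parameters included,
is my own illustrative sketch, not a claim about neural processing):

```python
import numpy as np

def delay_and_sum(signals, delays, fs):
    """Steer a sensor array by undoing each channel's arrival delay,
    then averaging. signals: (n_sensors, n_samples); delays: seconds
    per sensor; fs: sample rate in Hz (delays rounded to whole samples)."""
    out = np.zeros(signals.shape[1])
    for channel, d in zip(signals, delays):
        out += np.roll(channel, -int(round(d * fs)))   # re-align channel
    return out / len(signals)

# Toy usage: a 200 Hz tone reaching 4 sensors with known delays.
fs = 8000
t = np.arange(800) / fs                        # 0.1 s = exactly 20 periods
delays = [0.0, 0.001, 0.002, 0.003]
channels = np.array([np.sin(2 * np.pi * 200 * (t - d)) for d in delays])
steered = delay_and_sum(channels, delays, fs)  # coherent, re-aligned sum
```

Steered at the true delays, the channels add coherently and the tone is
recovered; steered elsewhere, they partially cancel. That gain pattern is
the "beam."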

3. I'm not sure where this leads. Any suggestions? The discussion of the
brain as a transducer may have some relation to this speculation. Is the
sense of self associated with scanning or scrutinization? 

4. When we analyze the information processing of human groups, we tend to
concern ourselves with scrutinization, yet most of the time, the group is
performing scanning. Ditto for person-to-person relationships. How do we
scan another person? KP likes the term "vibes" in this context, and so do
I for some reason.

Cheers,
-- 
Harry Erwin
Internet: erwin@trwacs.fp.trw.com
