From newshub.ccs.yorku.ca!torn!cs.utexas.edu!wupost!uunet!trwacs!erwin Wed Sep 23 16:54:34 EDT 1992
Article 6991 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:6991 comp.ai.neural-nets:4363 sci.cognitive:437
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!wupost!uunet!trwacs!erwin
From: erwin@trwacs.fp.trw.com (Harry Erwin)
Newsgroups: comp.ai.philosophy,comp.ai.neural-nets,sci.cognitive
Subject: Conference Report
Keywords: neurodynamics
Message-ID: <723@trwacs.fp.trw.com>
Date: 20 Sep 92 22:21:07 GMT
Followup-To: sci.cognitive
Organization: TRW Systems Division, Fairfax VA
Lines: 195


This is an initial report on the First Appalachian Conference on
Behavioral Neurodynamics: Processing in Biological Neural Networks, held
at Radford University, Virginia, September 17-20, 1992. This conference
was organized by Karl Pribram and his colleagues to celebrate the opening
of his Brains Center at Radford.

A colleague once said about Karl Pribram: "He has strange ideas, but his
labs always do excellent research." At this conference, I found out why.
150 of us met for a most stimulating series of papers on the current state
of neurodynamics.

Karl began the conference by speaking on the issues as he saw them: 1)
Where is the non-linearity in the brain? 2) How, in non-linear dynamics,
do we get stabilities far from equilibrium, especially in quantum models?
He then emphasized the need for data.

Harold Szu then spoke on "A paradigm shift for Neural Network Theory:
Collective Behavior of Thousands of Chaotic Elements." (Originally
"elements" was "neurons," but Karl insisted on the title change.) Harold
discussed simulation work with large networks, "sick" neurons, and
learning. 

Next, Paul Werbos presented "Chaotic Solitons, Computation, and Quantum
Field Theory." He started by identifying three approaches to using quantum
field theory on the brain:
1. metaphor (he claimed Pribram, Dawes, and Jahn fit here. KP differs.)
2. as a mathematical tool (Werbos, here),
3. to analyze the physical substrate (Deutsch, Penrose, Hameroff,
Yasue/Jibu)
He then analyzed Pribram's current holographic memory model in the
following terms:
1. complicated linear settling/equilibrium process (linear Hopfield net),
followed by a
2. simple non-linear readout.
He then pointed out that if you ignore learning, this is not an
important model. It still implies the standard M-P model after settling,
despite the presence of leaky integrators and time-lags. It is the
introduction of new learning mechanisms that is important. The power in
this concept is associated with the "dendritic field" as a computational
device. He then suggested that we should look at methods of using neural
networks to understand non-linear PDE systems, and that these concepts
would be most easily attacked using these alternate NN models.
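Werbos's two-stage reading of the Pribram model can be sketched in code. The
following is my own illustrative toy (sizes, weights, and the readout choice
are all assumptions, not anything presented at the conference): a linear
Hopfield-style settling process with leaky-integrator dynamics, followed by a
simple non-linear readout applied only after settling.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16

# Symmetric weight matrix, scaled so the linear dynamics settle
# (spectral radius below 1).
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
W /= np.abs(np.linalg.eigvalsh(W)).max() * 1.1

b = rng.standard_normal(n)  # constant input pattern

def settle_linear(x, steps=2000, leak=0.1):
    """Stage 1: leaky-integrator linear settling toward the
    equilibrium x* = (I - W)^-1 b of the linear dynamics."""
    for _ in range(steps):
        x = x + leak * (-x + W @ x + b)
    return x

def readout_nonlinear(x):
    """Stage 2: simple non-linear readout, applied only after settling."""
    return np.sign(x)

x_settled = settle_linear(rng.standard_normal(n))
pattern = readout_nonlinear(x_settled)
```

Note that, as Werbos observed, everything interesting here would have to come
from the learning rule that shapes W; the settle-then-threshold structure by
itself reduces to a standard M-P unit.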

Sir John Eccles spoke next, at the dedication ceremonies for the Center
for Brain Research. His title was "How Evolving Dendritic Complexity in
the Mammalian Brain Opened It to the World of Feeling and Eventually to
Self-Consciousness." (I should note that Sir John is an avowed dualist,
and is seeking a mechanism for coupling the mind with the neural
network.) He pointed out that the synaptic bouton supports a femto-second
process that controls the release of the transmitter vesicle upon arrival
of the Ca wave so that a maximum of one vesicle is released, and release
only occurs at one in six boutons. He speculated that this process evolved
to conserve neurotransmitter, and that the mind couples to the brain by
changing the probability of release.

(I did some math afterwards and concluded that the process does not
conserve neurotransmitter, but rather decouples the signal generated from
the amount of neurotransmitter available. The one-in-six probability of
vesicle release makes evolutionary sense if the probability of release can
be modulated. (Otherwise fewer synapses with a higher probability of
release would be less costly.) This connects the process with medium-term
memory, back-propagation, and the physical switching of synapses.)
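The argument above can be made concrete with some back-of-the-envelope
arithmetic. This is my own illustration (the bouton counts are invented for
the example, not taken from Eccles' talk): two configurations delivering the
same mean signal, treated as binomial release.

```python
from math import sqrt

def config(n_boutons, p_release):
    mean = n_boutons * p_release                        # binomial mean
    sd = sqrt(n_boutons * p_release * (1 - p_release))  # binomial s.d.
    headroom = n_boutons / mean     # growth factor available if p -> 1
    return mean, sd, headroom

low_p  = config(600, 1 / 6)   # many boutons, one-in-six release
high_p = config(100, 1.0)     # fewer boutons, certain release

# Mean transmitter cost is identical (100 vesicles per volley either
# way), but only the low-p configuration leaves a 6x range over which
# release probability -- and hence the signal -- can be modulated.
```

So a low release probability buys nothing in transmitter economy, but it does
buy a modulation range, which is the evolutionary argument for it.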

In the afternoon, Kunio Yasue and Mari Jibu spoke on "The Basics of
Quantum Brain Dynamics." This analyzed the processing in the brain as a
quantum system involving two interacting subsystems. Some probing
indicated that the work applied at the molecular level.

Robert Dawes showed how emulation of a quantum process produced a
well-behaved and fairly strong tracking algorithm that could be
implemented on a neural net. ("Introduction to Advances in Quantum
Neurodynamics.")

Walter Schempp discussed coherent wavelet neural nets. Quantum mechanics
applied here as a model, entering via Gabor's logon in communications
between neurons. Schempp cited Singer and Gray's data on synchronization
of neural activity and temporal coherence in cortical information
processing. ("New Directions")

Michael Stadler spoke on synergetics and its relationship to
neurodynamics. ("Neurodynamics and Synergetics")

The second day of the conference began with Stuart Hameroff speaking on
data processing in the neuronal cytoskeleton. ("Nanoneurology") He went
over the evidence that the cytoskeleton was involved in information
processing and identified a number of possible mechanisms. It seems likely
that the cytoskeleton is involved in synaptic switching and the back
propagation of learning.

Adi Bulsara then spoke on "Models for Neural/Dendritic Coupling." He
showed some evidence that noise serves to stabilize the neuronal system.

Bruce McLennan spoke on "Emergent Computation in Neural Networks." He has
been investigating the computational characteristics of dendritic networks
(delays, amplification, projection of incoming spike trains to subbands,
functional computation) to clarify Pribram's model in this area.

The afternoon of the second day saw the presentation of three major
papers. Walter Freeman spoke on "Dynamics of Processing in Sensory Driven
Systems." I had been aware that he had demonstrated chaotic processing
dynamics in the olfactory bulb of rabbits, and his presence at the
conference had been a major reason I decided to attend, but it turned out
that his result was much more important than that. He appears to have 1)
experimentally identified the nature of qualia in mammals, and 2)
demonstrated that the neocortex in even the most primitive placental
mammals operates with semantic concepts.

Freeman has been studying the olfactory system in rabbits, but his results
appear to apply to insectivores as well. The olfactory cells can identify
between 1000 and 10000 different odors, with most cells sensitive to
multiple odors. The olfactory system shows distributed activity. In large
numbers, these neurons are synchronized and produce the "gamma" wave,
which appears to reflect priming of the olfactory neurons to detect
specific odors. With each breath, the olfactory system goes unstable (a
Hopf bifurcation), and chaotically evolves to a conclusion as to the odors
present. The entorhinal complex can work with the olfactory bulb when
novelty is detected. The limbic system drives the priming of the system.
The olfactory data is distributed to all the cortical systems in parallel.
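For readers unfamiliar with the term: a Hopf bifurcation is the standard
route by which a quiescent state gives way to an oscillation. The textbook
normal form (my illustration, not Freeman's olfactory model) shows the
behavior in one line of dynamics:

```python
# Supercritical Hopf bifurcation, radial part of the normal form:
#   dr/dt = mu*r - r**3
# For mu < 0 the rest state r = 0 is stable; when mu crosses 0 the
# rest state goes unstable and a stable limit cycle of radius
# sqrt(mu) appears.

def settle_radius(mu, r0=0.1, dt=0.01, steps=20_000):
    """Euler-integrate the radial equation; return the final radius."""
    r = r0
    for _ in range(steps):
        r += dt * (mu * r - r ** 3)
    return r

quiescent = settle_radius(mu=-0.5)    # decays toward the rest state
oscillating = settle_radius(mu=0.25)  # settles on a cycle of radius 0.5
```

In Freeman's picture, each breath pushes the control parameter through such
an instability, and the system then evolves chaotically to a classification
of the odors present.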

Freeman has found that the processing patterns in the nucleus of the
olfactory bulb are not sensory driven. Instead, they reflect the _meaning_
of the stimuli. (semantic!) Intentionality in hedgehogs. There is a basic
lack of invariance in the storage of mental images of past experience. Any
change in stimulus or expectation (etc.) causes changes in spatial
patterns. Note that these patterns are dependent on sensory input (closing
the nostrils results in no activity). There is no evidence that control
axons are involved. There appears to be a down-loading of linkages from
the neocortex to the olfactory bulb so that the sensory neurons can report
on the elements of the current world model. This downloading is to the
nucleus, since invariance is found on the surface of the bulb. 

Conclusion: sensory organs are loaded (in real time) with a meaningful
(semantic) representation of the environment.

This suggests to me that qualia contain a pointer to the corresponding
semantic object in the neocortex. In fact, lack of that pointer causes an
automatic orienting action. Even hedgehogs use semantic concepts.

Barry Richmond then spoke on "Information Processing in Sensory Driven
Neural Ensembles." (For him, two neurons are an ensemble 8)) He was
studying the visual system, trying to identify the code used by a single
neuron. His result is that each neuron transmits independent information
on each individual stimulus. The information transmitted in a given pulse
train appears to be 3-5 independent values, multiplexed and then used to
FM modulate a carrier signal. Neighboring neurons appear to have
synchronized carriers, but independent information (thus contradicting
Singer and Gray). He also found that ITC neurons were _not_ looking for
anything; rather they were encoding information about the scene. The
information transmitted by a single neuron appears to be the same in the
retinal fibers, the LGN, the PVC, and the ITC, although the temporal part
of the coding (secondary data components) becomes more important in the
later processing. No single feature is carried. 

On ensembles of size 2: adjacent neurons carry independent information,
with the signal about 20% correlated and the noise about 5% correlated.
Hence they appear to be different filters applied to the same data. 
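What "signal about 20% correlated, noise about 5% correlated" can look like
is easy to sketch. The construction below is my own (not Richmond's
analysis): build two unit-variance series sharing a common term whose weight
sets the correlation, and verify the correlations empirically.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000  # samples

def correlated_pair(rho, n):
    """Two unit-variance series with correlation rho via a shared term."""
    shared = rng.standard_normal(n)
    a, b = np.sqrt(rho), np.sqrt(1 - rho)
    return (a * shared + b * rng.standard_normal(n),
            a * shared + b * rng.standard_normal(n))

sig1, sig2 = correlated_pair(0.20, N)    # signal components
noi1, noi2 = correlated_pair(0.05, N)    # noise components
resp1, resp2 = sig1 + noi1, sig2 + noi2  # each neuron's response
```

With correlations this weak, pooling the two responses genuinely adds
information -- consistent with reading the pair as two different filters
applied to the same data.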

Finally, Robert Desimone spoke on "Attention Driven Brain Systems." He was
studying how visual attention worked. He discovered that it seems to work by
inhibiting (gating) the processing outside the field of interest. He
then investigated whether the system was gating inputs or cells and found
the former. He came to the conclusion that attentional control involved
parallel processing in a competitive model where every point in the visual
field was in competition for attention. He then investigated the
difference between the automatic and voluntary attentional systems. The
automatic system appears to involve the colliculus and the pulvinar, while
the voluntary system seemed to involve the parietal and frontal cortices. The
automatic system appears to consist of two sets of cells, one holding the
current image, and the other holding up to hundreds of images. The cells
in the second set are stimulated if the element of the current image they
are checking does not match any of the corresponding elements of the last
few hundred images they have seen. This mechanism involves recency, not
working memory. Note also that a single cell can in parallel compare the
current image element with hundreds of previous image elements... (Stuart
Hameroff's paper on nanoneurology starts to look very interesting.) These
cells are _not_ novelty detectors. This is done in a portion of the
anterior ventral ITC.

Voluntary search is a different process, though handled by a similar
mechanism. There are two collections of cells, one holding the image, and
the other generating an inhibitory signal if the image element they are
checking does not match the desired target image. In other words, the
resultant of these two populations is a third array that is blank where
there is a mismatch and nonblank where they match.
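The two-population scheme reduces to a simple element-wise comparison. The
toy below is my own representation (arrays of integers standing in for image
elements; nothing here is Desimone's actual encoding):

```python
import numpy as np

current = np.array([3, 1, 4, 1, 5, 9])  # elements of the current image
target  = np.array([3, 0, 4, 1, 0, 9])  # desired target image

inhibit = current != target                # second population: fires on mismatch
resultant = np.where(inhibit, 0, current)  # third array: blank (0) at mismatches
```

The resultant array is blank exactly where the inhibitory population fired,
leaving only the matching elements to drive further processing.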

My assessment of these three papers is that NIPS should invite Freeman,
Richmond, and Desimone to speak. They have important results that the
research community should be aware of.

Cheers,
-- 
Harry Erwin
Internet: erwin@trwacs.fp.trw.com



