Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!darwin.sura.net!jvnc.net!netnews.upenn.edu!libra.wistar.upenn.edu
From: weemba@libra.wistar.upenn.edu (Matthew P Wiener)
Newsgroups: comp.ai.philosophy
Subject: Re: Neuron based neural nets
Message-ID: <58348@netnews.upenn.edu>
Date: 5 Dec 91 16:01:03 GMT
References: <3942@papaya.bbn.com> <58114@netnews.upenn.edu> <3949@papaya.bbn.com>
Sender: news@netnews.upenn.edu
Reply-To: weemba@libra.wistar.upenn.edu (Matthew P Wiener)
Organization: The Wistar Institute of Anatomy and Biology
Lines: 87
Nntp-Posting-Host: libra.wistar.upenn.edu
In-reply-to: cbarber@bbn.com (Chris Barber)

In article <3949@papaya.bbn.com>, cbarber@bbn (Chris Barber) writes:
>>Pellionisz and Llinas have dozens of papers on their "tensor network
>>metaorganization theories" of the central nervous system.  They
>>describe feedback mechanisms for neural networks that are probably
>>just backpropagation in some form or other--if not literally, then
>>via some mathematical transform.  And their work is pretty closely
>>tied to the real thing.

>I have to admit that I have not read these papers.  There is little 
>question that neural "learning" is due in part to some kind of feedback
>method.  But it cannot be backpropagation because it is not consistent 
>with the way neurons work.  Backpropagation would require that chemical
>signals travel backwards through more than one layer of neurons. There
>is just no way this can be done.  Do any of these papers claim that
>backpropagation is actually found in the CNS, or even that it is
>provably isomorphic to patterns of neural plasticity found in real
>organisms?

P&L do not mention "backpropagation" since their work precedes PDP.

I have no idea why you say that any neuronal backpropagation would
be chemical.
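
For concreteness, here is a toy sketch of what multilayer
backpropagation actually requires: an error signal crossing more than
one layer of weights on the way back, whatever the physical carrier.
(Python with numpy; the two-layer net and the data are invented for
illustration, and none of this is from P&L.)

    import numpy as np

    # Toy two-layer net with sigmoid units.  The point is the
    # backward pass: the output error delta2 is sent back through
    # W2 to form the hidden error delta1 -- a signal crossing more
    # than one layer of weights.
    rng = np.random.default_rng(0)
    x  = rng.normal(size=3)           # invented input
    t  = np.array([1.0])              # invented target
    W1 = rng.normal(size=(4, 3))      # input -> hidden weights
    W2 = rng.normal(size=(1, 4))      # hidden -> output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    h = sigmoid(W1 @ x)                      # forward: hidden layer
    y = sigmoid(W2 @ h)                      # forward: output layer
    delta2 = (y - t) * y * (1 - y)           # output-layer error
    delta1 = (W2.T @ delta2) * h * (1 - h)   # error sent BACK thru W2
    W2 -= 0.1 * np.outer(delta2, h)          # gradient-descent updates
    W1 -= 0.1 * np.outer(delta1, x)

Whether anything like the delta1 line can be realized by neurons is
exactly what's at issue.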

>			       Neural feedback loops, on the other hand,
>are very difficult to model mathematically and computationally
>(intractably so for any decently sized network), and are very hard to
>understand.  These circuits are not easy to trace in the brain, and the
>tracing cannot really be done in enough detail to reveal the actual
>connections at the micro level.

In section 2.3.2 of the paper cited below, they describe how an already
wired-in metric could be modified.  A difference between a desired goal
(what the senses were aimed at) and the physical reality (what the motor
response produced) drives a sensory check of the motor output, thus

	yielding the covariant components of the proprioception
	(performance) vector p. Note that [math stuff omitted]
	the difference ... can serve to correct the eigenvalues
	of the existing metric by adding the dyad formed by the
	climbing vector c.

And later, after working out an example:

	The dyadic product of the climbing fiber correction
	vector, c, can be impressed on the corticonuclear
	network as a whole, via climbing fibers that project
	both to the Purkinje cells and the cerebellar nuclear
	cells.

So they model feedback via climbing fibers, not chemistry.  Whether
this counts as backpropagation per se, I don't know.
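
To make the dyad correction concrete, here is a toy rendering of my
reading of section 2.3.2 (Python with numpy; the metric and the
intention/performance vectors are invented, and the variable names are
mine, not theirs):

    import numpy as np

    # Toy climbing fiber correction as I read it: c is the mismatch
    # between the intended movement and the proprioceptive report of
    # what was performed; the correction impressed on the network is
    # the rank-one dyad c c^T.
    G = np.eye(3)                          # current metric (invented)
    intended  = np.array([1.0, 0.5, 0.0])  # covariant goal vector
    performed = np.array([0.8, 0.6, 0.1])  # covariant performance p
    c = intended - performed               # climbing vector c
    G = G + np.outer(c, c)                 # impress the dyad c c^T

Adding a dyad shifts the eigenvalues of G, which matches their remark
about correcting the eigenvalues of the existing metric.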

>It has not been clearly shown that macro models of real neural circuits
>accurately reflect what is really going on in these circuits.

Well, sure, it's all open.  That's why I said "not so fast".

>							      So the simpler
>model prevails...

I'm unhappy with this recurrent application of Occam's razor.  You
can't say which model "prevails" before the results are in.  You can
point out which models are carefully grounded in biology and which
are not.

>I stand by my claim that MOST neural network paradigms have nothing to
>do with real neural networks and that backpropagation is among this
>crowd.  [I would, however, appreciate references for the articles you
>mentioned....]

They published their main papers in NEUROSCIENCE, culminating in
"Tensor Network Theory of the Metaorganization of the Functional
Geometries in the Central Nervous System", vol. 16, no. 2, pp. 245-273,
1985.

They describe how a system using covariant coordinate description of
the outside world for sensory input and contravariant coordinate
description for motor response could be, uh, coordinated by a neuronal
network that implements a metric tensor, encoded in the inferior olive.
They go into quite interesting detail about how such a network could be
formed, used, and modified.  They have a factual basis for parts of
their theory, for example, the neuronal effects of inverted vision.
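
In plain linear algebra, the job that network does is index raising:
covariant sensory components go in, contravariant motor components
come out, via the inverse metric.  A toy sketch (numpy again; the
metric itself is invented):

    import numpy as np

    # Toy index raising: sensory intention arrives as covariant
    # components p_j; the motor apparatus needs contravariant p^i.
    # The metric tensor G does the conversion: p^i = (G^-1)_ij p_j.
    G = np.array([[2.0, 0.3],            # invented symmetric,
                  [0.3, 1.0]])           # positive-definite metric
    p_cov    = np.array([1.0, -0.5])     # covariant sensory input
    p_contra = np.linalg.solve(G, p_cov) # contravariant motor output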

>	         BTW, I am not necessarily defending Edelman's claims -
>I just don't think citing his lack of mention of backpropagation is a
>very convincing argument against him.

For sure.
-- 
-Matthew P Wiener (weemba@libra.wistar.upenn.edu)