From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!micro-heart-of-gold.mit.edu!news.bbn.com!papaya.bbn.com!cbarber Mon Dec  9 10:47:56 EST 1991
Article 1852 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!micro-heart-of-gold.mit.edu!news.bbn.com!papaya.bbn.com!cbarber
From: cbarber@bbn.com (Chris Barber)
Newsgroups: comp.ai.philosophy
Subject: Re: Neuron based neural nets
Message-ID: <3949@papaya.bbn.com>
Date: 4 Dec 91 16:10:14 GMT
References: <3942@papaya.bbn.com> <58114@netnews.upenn.edu>
Organization: BBN Systems and Technology, Inc.
Lines: 65

In article <58114@netnews.upenn.edu> 
        weemba@libra.wistar.upenn.edu (Matthew P Wiener) writes:

>In article <3942@papaya.bbn.com>, cbarber@bbn (Chris Barber) writes:
>>In article <1991Nov29.050859.21552@bronze.ucs.indiana.edu> 
>>        chalmers@bronze.ucs.indiana.edu (David Chalmers) writes that
>>> [Reeke and Edelman's criticism of connectionism is off center,
>>>  and does not even mention backpropagation.]
>
>>Maybe this is because backpropagation cannot be implemented with
>>real neurons!  In fact, most neural network paradigms, have nothing
>>to do with the way real neurons and brains work.  [...]
>
>Woah there!  Not so fast.
>
>Pellionisz and Llinas have dozens of papers on their "tensor network
>metaorganization theories" of the central nervous system.  They de-
>scribe feedback mechanisms for neural networks that are probably just
>backpropagation in some form or other--if not literally, then via some
>mathematical transform.  And their work is pretty closely tied to the
>real thing.

I have to admit that I have not read these papers.  There is little 
question that neural "learning" is due in part to some kind of feedback
mechanism.  But it cannot be backpropagation, because backpropagation is
not consistent with the way neurons work: it would require chemical signals
to travel backwards through more than one layer of neurons, and there is
just no known mechanism by which this can be done.  Do any of these papers
claim that backpropagation is actually found in the CNS, or even that it is
provably isomorphic to patterns of neural plasticity found in real organisms?
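To make the "signals travelling backwards" point concrete, here is a
minimal two-layer backprop sketch (toy weights and names of my own
invention, not anything from the papers under discussion).  The key line is
the one computing the hidden layer's error: it can only be obtained from
the *output* layer's error, carried backwards through the transpose of the
outgoing weights -- the step with no obvious counterpart in real neurons.

```python
import numpy as np

# Toy 2-layer network: input -> hidden -> output.  All names and sizes
# here are illustrative assumptions, not a model of any real circuit.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))   # input -> hidden weights
W2 = rng.normal(size=(1, 3))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -0.2])
target = np.array([1.0])

# Forward pass: signals flow strictly forwards.
z1 = W1 @ x
h = sigmoid(z1)
z2 = W2 @ h
y = sigmoid(z2)

# Backward pass: the hidden layer's error term requires the output
# layer's error, sent *backwards* through W2's transpose.
delta2 = (y - target) * y * (1 - y)      # output-layer error
delta1 = (W2.T @ delta2) * h * (1 - h)   # needs delta2 and W2!

grad_W2 = np.outer(delta2, h)            # weight updates follow
grad_W1 = np.outer(delta1, x)
```

Each earlier layer's update depends on error signals relayed back through
every later layer, which is exactly the multi-layer backwards traffic at
issue.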

Real brains are made up of circuits, not simple unidirectional layers as in
many "neural" network paradigms.  Also, neurons in different parts of the
brain exhibit different degrees of plasticity (ability to change and "learn")
and vastly different circuitry.  Even if something directly isomorphic to
backpropagation were found in one place in the brain, it would not
automatically follow that it occurred anywhere else!  The reason that
backpropagation is so appealing is that it is relatively easy to model 
mathematically and computationally, it is not too hard to understand, and
it has worked successfully.  However, it is also often criticized for its
slowness (and remember that since real neurons fire relatively slowly and
real neural learning must occur through chemical means, it would be
painfully slow in the brain).  Neural feedback loops, on the other hand, are
very difficult to model mathematically and computationally (intractably so
for any decently sized network), and are very hard to understand.  These
circuits are not easy to trace in the brain, and tracing cannot really be
done in enough detail to reveal the actual connections at a micro level.
Nor has it been clearly shown that macro models of real neural circuits
accurately reflect what is really going on in those circuits.  So the
simpler model prevails...

I stand by my claim that MOST neural network paradigms have nothing to
do with real neural networks, and that backpropagation is among this
crowd.  [I would, however, appreciate references for the articles you
mentioned....]  BTW, I am not necessarily defending Edelman's claims -
I just don't think citing his lack of mention of backpropagation is a very
convincing argument against him.





-- 
Christopher Barber
(cbarber@bbn.com)


