From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!tdatirv!sarima Mon Dec  9 10:48:41 EST 1991
Article 1929 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Neuron based neural nets
Message-ID: <306@tdatirv.UUCP>
Date: 6 Dec 91 22:30:23 GMT
References: <3942@papaya.bbn.com> <58114@netnews.upenn.edu> <3949@papaya.bbn.com>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 17

In article <3949@papaya.bbn.com> cbarber@bbn.com (Chris Barber) writes:
|I have to admit that I have not read these papers.  There is little 
|question that neural "learning" is due in part to some kind of feedback
|method.  But it cannot be backpropagation because it is not consistent 
|with the way neurons work.  Backpropagation would require that chemical
|signals travel backwards through more than one layer of neurons. There
|is just no way this can be done.

I rather disagree here.  Something equivalent to backpropagation can be done
using recurrent axons, counter-current neurons, or both: the error signal can
travel forward along a separate feedback pathway instead of backwards through
the same synapses that carried the original signal.  Just because this is not
how manufactured NNs implement it does not mean it is not a viable
alternative.  [Note that this would imply a distinction between 'teaching'
and 'learning' neurons, but I do not insist that all neurons be equivalent.]
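
To make the point concrete, here is a minimal numerical sketch (Python with
the NumPy library; the toy XOR task and the feedback matrix B below are my
own illustrative assumptions, not a model of actual neural circuitry).  The
output error is delivered to the hidden units over a separate, fixed set of
backward connections B, playing the role of the 'teaching' pathway, so no
signal ever has to run backwards through a forward synapse:

import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, the classic test for a one-hidden-layer net.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward ('learning') pathway.
W1 = rng.normal(0.0, 1.0, (2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))
b2 = np.zeros((1, 1))

# Separate backward ('teaching') pathway: a fixed random matrix,
# deliberately NOT the transpose of W2.  Error reaches the hidden
# units through B alone.
B = rng.normal(0.0, 1.0, (1, 4))

lr = 0.5
for epoch in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Output error, carried to the hidden layer over B rather than
    # backwards through W2.
    delta_out = (y - T) * y * (1.0 - y)
    delta_hid = (delta_out @ B) * h * (1.0 - h)

    # Purely local weight updates.
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0, keepdims=True)

print(np.round(y, 2))   # should approach [[0], [1], [1], [0]]

The forward weights still end up trained much as backpropagation would train
them, which is all the argument needs: an equivalent effect, not the
identical mechanism.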
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)