From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!cs.utexas.edu!swrinde!zaphod.mps.ohio-state.edu!caen!garbo.ucc.umass.edu!dime!chelm.cs.umass.edu!yodaiken Mon Dec 16 11:01:58 EST 1991
Article 2128 of comp.ai.philosophy:
From: yodaiken@chelm.cs.umass.edu (victor yodaiken)
Newsgroups: comp.ai.philosophy
Subject: Re: From neurons to computation; an example worm
Message-ID: <40649@dime.cs.umass.edu>
Date: 15 Dec 91 00:00:10 GMT
References: <1991Dec14.110633.28844@oracorp.com> <eddy.692726825@beagle>
Sender: news@dime.cs.umass.edu
Organization: University of Massachusetts, Amherst
Lines: 56

In article <eddy.692726825@beagle> eddy@boulder.Colorado.EDU (Sean Eddy) writes:
>I've been reading this thread with a great deal of interest.  As a
>molecular biologist, it has been striking and eye-opening to me that
>there seems to be a great deal of resistance to the idea that we can
>learn a lot about the human brain (and, possibly, eventually, the
>human "mind") from simpler biological systems.
>

I have not seen anyone advance this argument. Instead, there has been an
argument over whether "all mental functions" are produced by
"characterizable processing elements" in the brain, and whether these
"elements" compute in a manner analogous to digital computation. One can
reject the validity of this model of mind as computation (calculation)
without either advancing arguments about spirituality or rejecting the
possibility of learning from simpler organisms and even machines. Simply
put: accepting that human beings are constructed as purely material
objects from molecules does not lead to the conclusion that human thought
is analogous to the operation of computers. Furthermore, the undoubted
fact that neurons carry electrical signals does not imply that brains
(and thinking) can be understood as the operation of a soggy collection
of transistors.

>I thought I might try to clarify some of Gordon Banks' points
>by better describing the model slug and worm nervous systems
>he's been bringing up.
>
>The slug in question is almost certainly _Aplysia californica_, a
>large and ugly marine snail. Aplysia has about 100,000 total neurons.
>Eric Kandel and coworkers at Columbia University have studied one
>particular neural circuit in fine detail, a circuit responsible for a
>gill withdrawal reflex. This circuit contains about 24 mechanosensory
>neurons, 6 motor neurons, and some other interneurons. The cells are
>large enough to impale and obtain intracellular electrical recordings.
>The circuit shows characteristics of habituation and sensitization,
>which Kandel has been able to study.
>
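[For illustration only: the habituation and sensitization described above
can be sketched as a toy model of a single sensory-to-motor synapse. All
parameters and function names here are my own illustrative assumptions,
not Kandel's measurements or model.]

```python
# Toy sketch of habituation and sensitization in one synapse,
# loosely inspired by the Aplysia gill-withdrawal reflex.
# Parameters (decay, recovery, boost) are invented for illustration.

def run_trials(n_touches, strength=1.0, decay=0.7, recovery=0.05):
    """Repeated light touches depress synaptic strength (habituation).

    Each trial's motor response is proportional to the current
    synaptic strength; the strength then decays with use and
    slowly recovers toward its resting baseline.
    """
    responses = []
    for _ in range(n_touches):
        responses.append(strength)                # withdrawal response
        strength *= decay                         # use-dependent depression
        strength += recovery * (1.0 - strength)   # slow recovery to baseline
    return responses


def sensitize(strength, boost=1.5, cap=2.0):
    """A noxious stimulus (e.g. a shock) facilitates the synapse."""
    return min(strength * boost, cap)


if __name__ == "__main__":
    responses = run_trials(5)
    # The withdrawal response shrinks with each repeated touch,
    # and a shock restores (indeed exaggerates) responsiveness.
    print(responses)
    print(sensitize(responses[-1]))
```

Nothing here depends on the biology being this simple, of course; the
point of such a sketch is only to show what "habituation" and
"sensitization" mean operationally at the single-circuit level.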

I believe that drawing firm conclusions about the workings of thought from
this level of knowledge is ridiculous, and on a par with the 17th-century
notion of humans and animals as clockwork devices.

>
>Now, I'm not going to argue that we will discover much about human
>consciousness from how a worm writhes. But I do think that it's fair
>to say that we biologists don't yet fully understand some basic first
>principles of how a neuron functions or a set of neurons interconnect
>(*particularly* how they interconnect), and that we may as well learn
>those basic principles in the simpler and more accessible systems
>first. It may, in time, be necessary to posit new emergent properties
>to explain "mind" --- but I, and probably most other molecular
>biologists, am not going to worry terribly about that 'til I get
>there. :) 

And this is my point. To suggest, as several posters have, that current
levels of knowledge are sufficient to support the AI model of thought
is nonsense. No one knows how the damn things work.
