Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!kovsky
From: kovsky@netcom.com (Bob Kovsky)
Subject: Re: Is CONSCIOUSNESS continuous? discrete? quantized?
Message-ID: <kovskyD4J82u.2vI@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <departedD3vKy5.M3B@netcom.com> <1995Feb23.041001.26227@galileo.cc.rochester.edu> <kovskyD4Gz0n.DGC@netcom.com> <1995Feb24.152139.3684@galileo.cc.rochester.edu>
Date: Sat, 25 Feb 1995 01:28:06 GMT
Lines: 94
Sender: kovsky@netcom16.netcom.com

In article <1995Feb24.152139.3684@galileo.cc.rochester.edu>,
Greg Stevens <stevens@prodigal.psych.rochester.edu> wrote:
>In <kovskyD4Gz0n.DGC@netcom.com> kovsky@netcom.com (Bob Kovsky) writes:
>
>[First, he summarizes my argument:]
>>Your argument:  if human experience is a Turing-equivalent machine with 
>>finite inputs, outputs and states, then it can be modelled on a 
>>Turing-equivalent machine.
>
>[Then he replies:]
>>What if your model doesn't apply?  Acknowledged leaders in neuroscience, 
>>such as G. M. Edelman and W. F. Freeman, conclude that your model doesn't 
>>apply.  Most scientific models have a finite lifetime, after all.
>
>If it doesn't apply it doesn't apply.  But if you are familiar with Freeman's
>and Edelman's work enough to know that they conclude it doesn't, I'd
>hope you would be familiar enough to explicate here, in brief, why it
>doesn't [i.e. which of my premises were wrong, or whatever].  I am
>curious to know.

Skarda & Freeman, "How brains make chaos in order to make sense of the 
world," 10 Behavioral and Brain Sciences, 161-195 (1987):  "We think that 
the notion of 'destabilization' provides a better description of the 
essentials of neural functioning than the concept of pattern completion.  
In an alert, motivated animal, input destabilizes the system, leading to 
further destabilization and a bifurcation to a new form of patterned 
activity."  (p. 172)

Edelman:  see <Neural Darwinism> and Reeke & Edelman, "Real Brains and 
Artificial Intelligence,"  Daedalus, Winter 1988.  <Neural Darwinism> at 
28:  "perception is adaptive rather than strictly veridical..."  At 210:  
"The alternative view taken here, that motor and sensory structures can 
be understood only as a coordinated selective system, leads to a sharply 
defined position concerning the relative roles of early signals in 
development and so-called higher events in the CNS:  selection by early 
signals in both motor and sensory systems acting <together> in a global 
mapping is considered to be crucial in solving the problem of adaptive 
perceptual categorization..."  Continuing at 211:  "In this view, 
selective matching between sensory and motor systems is not the result of 
independent categorization by the sensory areas, which <then> execute a 
program to activate motor activity, which is in turn controlled by 
feedback loops.  Instead, the results of motor activity are considered to 
be an integral part of the original perceptual categorization."  Reeke 
and Edelman, at 156:  "That no single neuron appears to be indispensable 
for any function suggests that only patterns of response over many 
neurons can have functional significance.  ...It can easily be calculated 
that there is not enough information in the DNA to specify uniquely the 
locations of all these neurons and their connections.  Thus, indeterminate, 
dynamic, epigenetic mechanisms [organizing neuronal tissue during 
gestation -- RK] ... must operate during development to determine the 
fine structure of the nervous system."

See also, Calvin, <Cerebral Symphony>.

All these people are operating within established scientific traditions,
employing the view that scientific models are "true" representations of
reality and that mechanism and probability suffice to account for any and
all phenomena.  My views are contrary, but I agree with these
neuroscientists that the finite-state representational models of
computerized artificial intelligence do not suffice to account for
neuronal activity in animals, much less consciousness. 

>
>Further, I was discussing the modelling of any given individual's behavior--
>that is, given that an individual has finite inputs, outputs, and states,
>that individual could have a Turing-machine with equivalent output given
>equivalent inputs.
>
>However, making exact models of individual people is not what is required
>for "behaving consciously."  In such a case, a probabilistic model which
>sampled the relevant probabilities arbitrarily close to some statistical
>norm of "what people do" would be arbitrarily close to "behaving
>consciously" and would still be a probabilistic model which, it could
>be argued, may not have subjectivity and would still retain the behavioral
>criteria.
>
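The probabilistic behavioral model described in the quoted passage can be sketched in a few lines.  This is a minimal illustration in Python (the post itself contains no code), and the action names and probabilities are made-up numbers, not anything from the discussion: the model simply samples actions from a statistical norm of "what people do," and with enough samples its output frequencies come arbitrarily close to that norm -- behavioral adequacy with no claim to subjectivity.

```python
import random
from collections import Counter

def behave(norm, rng=random):
    """Sample one action from the statistical norm of 'what people do'."""
    actions = list(norm)
    weights = [norm[a] for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]

# A toy behavioral norm (hypothetical numbers, purely illustrative):
norm = {"greet": 0.5, "nod": 0.3, "ignore": 0.2}

# Observed frequencies converge on the norm as the sample grows.
samples = Counter(behave(norm) for _ in range(100_000))
freqs = {a: samples[a] / 100_000 for a in norm}
print(freqs)
```

Nothing in this sketch models a mechanism of mind; it only reproduces the statistics of behavior, which is exactly the point of the quoted argument.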

Here's how you can compare your nervous system to a finite-state 
machine.  Go into a room, turn off the lights, sit down in a comfortable 
position and close your eyes.  Focus your attention on your breathing.  A 
finite-state machine, without input other than a regular cycle, would 
lock onto that cycle and engage in an endless loop.  [When I program I 
can produce endless loops without trying.]  See what happens to your 
nervous system.  People from yogic or buddhist disciplines call this 
practice "meditation" and the results are very different from what would 
result from a finite-state machine.  Think about what Freeman or Edelman 
would have to say about it, and you will see that their theories 
make much more sense.  

-- 

*   *    *    *    *    *    *    *    *    *    *    *    *    *    *    *   * 
    Bob Kovsky          |  A Natural Science of Freedom 
    kovsky@netcom.com   |  Materials available by anonymous ftp
                        |  At ftp.netcom.com/pub/fr/freedom
*   *    *    *    *    *    *    *    *    *    *    *    *    *    *    *   * 
