Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!galileo.cc.rochester.edu!prodigal.psych.rochester.edu!stevens
From: stevens@prodigal.psych.rochester.edu (Greg Stevens)
Subject: Re: Is CONSCIOUSNESS continuous? discrete? quantized?
Message-ID: <1995Feb25.180325.3029@galileo.cc.rochester.edu>
Sender: news@galileo.cc.rochester.edu
Nntp-Posting-Host: prodigal.psych.rochester.edu
Organization: University of Rochester - Rochester, New York
References: <departedD3vKy5.M3B@netcom.com> <1995Feb23.041001.26227@galileo.cc.rochester.edu> <kovskyD4Gz0n.DGC@netcom.com> <1995Feb24.152139.3684@galileo.cc.rochester.edu> <kovskyD4J82u.2vI@netcom.com>
Date: Sat, 25 Feb 95 18:03:25 GMT
Lines: 99

In <kovskyD4J82u.2vI@netcom.com> kovsky@netcom.com (Bob Kovsky) writes:
>In article <1995Feb24.152139.3684@galileo.cc.rochester.edu>,
>Greg Stevens <stevens@prodigal.psych.rochester.edu> wrote:
>>In <kovskyD4Gz0n.DGC@netcom.com> kovsky@netcom.com (Bob Kovsky) writes:

>>[First, he summarizes my argument:]
>>>Your argument:  if human experience is a Turing-equivalent machine with 
>>>finite inputs, outputs and states, then it can be modelled on a 
>>>Turing-equivalent machine.
>>
>>[Then he replies:]
>>>What if your model doesn't apply?  Acknowledged leaders in neuroscience, 
>>>such as G. M. Edelman and W. F. Freeman, conclude that your model doesn't 
>>>apply.  Most scientific models have a finite lifetime, after all.
>>
>>If it doesn't apply it doesn't apply.  But if you are familiar enough
>>with Freeman's and Edelman's work to know that they conclude it doesn't,
>>I'd hope you would be familiar enough to explicate here, in brief, why
>>it doesn't [i.e., which of my premises were wrong, or whatever].  I am
>>curious to know.

[Some very interesting material, all of it consistent with a body of
thinkers generally referred to as "situated agent" or "autonomous
systems" theorists, deleted for brevity]

>All these people are operating within established scientific traditions,
>employing the view that scientific models are "true" representations of
>reality and that mechanism and probability suffice to account for any and
>all phenomena.  My views are contrary, but I agree with these
>neuroscientists that the finite-state representational models of
>computerized artificial intelligence do not suffice to account for
>neuronal activity in animals, much less consciousness. 

Yet I can disagree with symbolic representationalism (as in fact I do)
and still believe that behaviors can be accounted for by a matrix of
observed responses-to-inputs.  That is, although I believe that we
consist of a mechanism that adapts rather than represents, any
adapting system can be INTERPRETED and talked about AS IF it were
representing.

[For example, consider an amoeba placed in a petri dish next to a glucose
source: the dissolved glucose molecules react with the membrane to make
it more elastic in the direction of increased glucose concentration, so
the cell pressure causes flow in that direction, all based on biochemical
equilibrium-seeking determined by the amoeba's internal structural
specifications; but according to us, it perceives the glucose, knows the
glucose is good for it, and so moves toward it and engulfs it.]
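
(To make the two descriptions concrete, here is a toy sketch in Python;
the numbers and names are my own inventions, not anyone's model of a
real amoeba.  Nothing in the code represents anything, yet the final
comment is a perfectly natural gloss on what it does.)

    # Pure equilibrium-seeking: the "membrane" just compares local
    # chemistry on either side and flows toward the softer side.
    def glucose(x, source=10.0):
        # concentration falls off with distance from the source
        return 1.0 / (1.0 + abs(x - source))

    def step(x, dx=0.1):
        # more elastic on the higher-concentration side, so internal
        # pressure pushes flow that way; no percept, no goal
        return x + dx if glucose(x + dx) > glucose(x - dx) else x - dx

    x = 0.0
    for _ in range(200):
        x = step(x)
    print(x)  # hovers at the source; WE say it "went after the glucose"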

If we built something which merely correlated inputs with outputs, I do
not think it would be mimicking the PROCESS of how people work; it would
be a different process giving rise to the same OBSERVABLES.
This was my original point -- you could theoretically have consciousness-
like activity arising from a process other than ours which is potentially
not conscious.
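
(Again a toy sketch, with all assumptions mine: a bare response table
and an agent with changing internal state realize the very same
input/output mapping, so an outside observer cannot tell the two
processes apart.)

    CANNED = {"light": "approach", "dark": "wait", "shock": "withdraw"}

    def table_agent(stimulus):
        # a literal matrix of observed responses-to-inputs
        return CANNED[stimulus]

    class AdaptiveAgent:
        def __init__(self):
            self.arousal = 0          # internal state the table lacks
        def respond(self, stimulus):
            self.arousal += 1 if stimulus == "shock" else -1
            return CANNED[stimulus]   # same mapping, different process

    agent = AdaptiveAgent()
    for s in ["light", "shock", "dark", "shock"]:
        assert table_agent(s) == agent.respond(s)  # same OBSERVABLES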

>>However, making exact models of individual people is not what is required
>>for "behaving consciously."  In such a case, a probabilistic model which
>>sampled the relevant probabilities arbitrarily close to some statistical
>>norm of "what people do" would be arbitrarily close to "behaving
>>consciously" and would still be a probabilistic model which, it could
>>be argued, may not have subjectivity and would still retain the
>>behavioral criteria.

>Here's how you can compare your nervous system to a finite-state 
>machine.  Go into a room, turn off the lights, sit down in a comfortable 
>position and close your eyes.  Focus your attention on your breathing.  A 
>finite-state machine, without input other than a regular cycle, would 
>lock onto that cycle and engage in an endless loop.....

But 1) you are not receiving "no input," you are merely receiving
darkness input, and 2) in a system as complex as us, you have no idea
how long it would take to lock into the looping cycle -- potentially,
your death is (from an information-processing and behavioral
perspective) that infinite loop, consisting of a loop of behaviors = {}
(the empty set).
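
(The pigeonhole argument behind point 2, as a sketch with my own toy
numbers: a deterministic finite-state machine driven by a strictly
periodic input MUST eventually loop, because there are only finitely
many (state, input-phase) pairs to be in; but nothing caps the lead-in
time below n_states * period steps, which for anything brain-sized is
astronomical.)

    def time_to_cycle(delta, period, s0=0):
        seen = {}                      # (state, phase) -> first step seen
        s, t = s0, 0
        while (s, t % period) not in seen:
            seen[(s, t % period)] = t
            s = delta(s, t % period)   # next state under periodic input
            t += 1
        start = seen[(s, t % period)]
        return start, t - start        # (lead-in length, cycle length)

    n, p = 10**5, 7                    # tiny next to ~10^11 neurons
    delta = lambda s, phase: (3 * s + phase + 1) % n
    print(time_to_cycle(delta, p))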

>...  Think about what Freeman or Edelman 
>would have to say about it, and you will see that their theories 
>make much more sense.  

Based on your representation of their theories, I agree with them
completely.  But now we are discussing the actual processes giving
rise to our behaviors, and the structures of those processes.  I
was saying originally that something using DIFFERENT processes could
come up with the same observable behaviors (such as an arbitrarily
large T-machine) but may not be conscious.  Therefore, although
consciousness implies conscious activity, conscious activity does
not imply consciousness, and so there seems to be very little causal
role for consciousness in function.  That is, it seems epiphenomenal
and without advantage, behaviorally or evolutionarily.

(Unless you think that any machine with sufficient complexity to
produce the behavior has subjectivity by logical or structural
necessity.)

Greg Stevens

stevens@prodigal.psych.rochester.edu

