Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!galileo.cc.rochester.edu!prodigal.psych.rochester.edu!stevens
From: stevens@prodigal.psych.rochester.edu (Greg Stevens)
Subject: Re: Is CONSCIOUSNESS continuous? discrete? quantized?
Message-ID: <1995Feb24.152139.3684@galileo.cc.rochester.edu>
Sender: news@galileo.cc.rochester.edu
Nntp-Posting-Host: prodigal.psych.rochester.edu
Organization: University of Rochester - Rochester, New York
References: <departedD3vKy5.M3B@netcom.com> <1995Feb21.222113.23970@galileo.cc.rochester.edu> <kovskyD4F01u.7qy@netcom.com> <1995Feb23.041001.26227@galileo.cc.rochester.edu> <kovskyD4Gz0n.DGC@netcom.com>
Date: Fri, 24 Feb 95 15:21:39 GMT
Lines: 35

In <kovskyD4Gz0n.DGC@netcom.com> kovsky@netcom.com (Bob Kovsky) writes:

[First, he summarizes my argument:]
>Your argument:  if human experience is a Turing-equivalent machine with 
>finite inputs, outputs and states, then it can be modelled on a 
>Turing-equivalent machine.

[Then he replies:]
>What if your model doesn't apply?  Acknowledged leaders in neuroscience, 
>such as G. M. Edelman and W. F. Freeman, conclude that your model doesn't 
>apply.  Most scientific models have a finite lifetime, after all.

If it doesn't apply it doesn't apply.  But if you are familiar enough with
Freeman's and Edelman's work to know that they conclude it doesn't, I'd
hope you would be familiar enough to explicate here, in brief, why it
doesn't [i.e., which of my premises were wrong, or whatever].  I am
curious to know.

Further, I was discussing the modelling of any given individual's behavior--
that is, given that an individual has finite inputs, outputs, and states,
there could be a Turing machine for that individual that produces equivalent
output given equivalent inputs.
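To make the claim concrete: an agent with finite inputs, outputs, and
states reduces to a finite transition/output table (a Mealy machine),
which any Turing-equivalent machine can simulate by table lookup.  The
states, inputs, and outputs below are invented purely for illustration:

```python
# Hypothetical sketch: a finite agent as a Mealy machine.  All of the
# state names, input symbols, and output symbols here are made up.
TRANSITIONS = {
    ("calm", "greeting"): "calm",
    ("calm", "insult"): "annoyed",
    ("annoyed", "greeting"): "calm",
    ("annoyed", "insult"): "annoyed",
}
OUTPUTS = {
    ("calm", "greeting"): "hello",
    ("calm", "insult"): "frown",
    ("annoyed", "greeting"): "grunt",
    ("annoyed", "insult"): "leave",
}

def run(state, inputs):
    """Replay the agent: equivalent inputs yield equivalent outputs."""
    outputs = []
    for symbol in inputs:
        outputs.append(OUTPUTS[(state, symbol)])
        state = TRANSITIONS[(state, symbol)]
    return outputs

print(run("calm", ["greeting", "insult", "greeting"]))
# -> ['hello', 'frown', 'grunt']
```

The point is only that finiteness makes the behavior enumerable, not
that real nervous systems are anywhere near this small.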

However, making exact models of individual people is not what is required
for "behaving consciously."  In such a case, a probabilistic model that
sampled the relevant probabilities arbitrarily close to some statistical
norm of "what people do" would come arbitrarily close to "behaving
consciously."  It would still be a probabilistic model which, it could
be argued, may not have subjectivity, and yet it would still satisfy the
behavioral criteria.
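A minimal sketch of such a model, assuming we had measured response
frequencies for some situation (the frequencies below are fabricated for
illustration):

```python
import random
from collections import Counter

# Hypothetical "statistical norm of what people do": observed response
# frequencies for a situation.  These numbers are invented.
NORM = {
    "greeting": {"hello": 0.7, "nod": 0.25, "ignore": 0.05},
}

def respond(situation, rng=random):
    """Sample a response from the empirical distribution for a situation."""
    dist = NORM[situation]
    return rng.choices(list(dist), weights=list(dist.values()))[0]

# In the long run the model's response frequencies approach the norm,
# so by a purely behavioral criterion it approaches "behaving
# consciously" -- while remaining nothing but a sampling procedure.
counts = Counter(respond("greeting") for _ in range(10_000))
print({k: v / 10_000 for k, v in counts.items()})
```

Whether such a sampler has subjectivity is exactly the question left
open above; the code only shows that the behavioral criterion can be
met arbitrarily well by matching the distribution.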

Greg Stevens

stevens@prodigal.psych.rochester.edu

