Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!galileo.cc.rochester.edu!prodigal.psych.rochester.edu!stevens
From: stevens@prodigal.psych.rochester.edu (Greg Stevens)
Subject: Re: Is CONSCIOUSNESS continuous? discrete? quantized?
Message-ID: <1995Feb26.004656.16852@galileo.cc.rochester.edu>
Sender: news@galileo.cc.rochester.edu
Nntp-Posting-Host: prodigal.psych.rochester.edu
Organization: University of Rochester - Rochester, New York
References: <departedD3vKy5.M3B@netcom.com> <D4CBxB.I5z@ucc.su.oz.au> <burt.793699717@aupair.cs.athabascau.ca>
Date: Sun, 26 Feb 95 00:46:56 GMT
Lines: 70

In <burt.793699717@aupair.cs.athabascau.ca> burt@aupair.cs.athabascau.ca (Burt Voorhees) writes:

>>But such machines could in principle exist.  Humans have a finite capacity
>>to distinguish sensory input along a finite number of sensory input channels
>>(sensory neurons) and thus have a finite number of possible inputs at any
>>given time; humans have finite precision with motor activity along a
>>finite number of motor output channels (motor neurons) and thus have a finite
>>number of possible outputs; humans live a finite amount of time and
>>therefore have a finite number of internal states.

>>Conclusion: with finite inputs, outputs and states, any human could be
>>THEORETICALLY modelled on a Turing-equivalent machine.

>Not necessarily so....

>....The problem for
>this machine is that whether it is deterministic
>or probabilistic, it is constrained to act in
>certain ways, whereas it is not clear that the
>human it is supposed to be modelling is equally
>constrained.  The deterministic case can be tossed
>out immediately, since it would say: in state A you
>must do action B, while humans are notoriously contrary.

One could easily argue that we simply haven't defined state A
rigorously enough: we may be failing to distinguish it from some
other state N which leads to the contrary behavior C (which is
not B).  When WE think we see a probabilistic A -> {B,C}, we may
simply be conflating two deterministic processes, A -> B and
N -> C.
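
To make the point concrete, here is a toy sketch (all names and
states are invented for illustration; Python) of a deterministic
machine that looks probabilistic to an observer who cannot tell
the two underlying states apart:

    import random

    TRANSITIONS = {"A": "B", "N": "C"}  # each state fully determines its action

    def observe(state):
        # The coarse observer cannot tell A from N; both report as "A".
        return "A" if state in ("A", "N") else state

    for _ in range(5):
        state = random.choice(["A", "N"])  # which fine state actually obtains
        print(observe(state), "->", TRANSITIONS[state])

Typical output mixes "A -> B" with "A -> C" -- an apparently
probabilistic A -> {B,C} -- though every individual transition was
fully determined.  The apparent randomness lives entirely in which
fine-grained state obtains, not in the transition rule.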
 
>On a different approach, one needs to consider the
>consequences if it were possible to model mind by
>a Turing machine.  Based on the Godel theorems this
>would imply that there was a definite limit to what
>could be known; i.e., there would be things that even
>in principle could never be known.  In fact, what
>could be known would be a set of measure 0 (speaking
>very loosely).  That kind of cuts the feet out from
>under the idea that we live in a rational universe
>which can be rationally comprehended by human reason,
>and that opens the door to all kinds of weird New Age
>irrationalism.  ("I'm psychic, I can get communications
>from beyond the Godel threshold!")  Etc.

Partially because I'm not following what is meant by the
set of what could be known being a set of measure 0,
I'm not following your argument here.  It seems very clear
that there are things which cannot be known (depending, I
guess, on what you mean by "known" -- if knowledge is
justified true belief, and we are justified in believing
mathematical theorems when there is a proof of the theorem,
then there are many statements, neither provable nor
disprovable, which cannot be "known").
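
For what it's worth, that last point can be put loosely in Godel's
terms (my gloss, not necessarily Burt's): if "knowable" means
provable in some fixed consistent, recursively axiomatized theory
T containing arithmetic, then the first incompleteness theorem
gives a sentence G_T with

    T \nvdash G_T  \quad\text{and}\quad  T \nvdash \neg G_T

so G_T -- true on the standard reading -- is unknowable in that
sense.  (The continuum hypothesis over ZFC is a concrete
non-arithmetical example of such independence.)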

Further, I am once again not saying that I think the symbolic
representationalist approach inherent in the T-machine representation
is actually what we DO; I'm simply saying it could model conscious
behavior to an arbitrary degree of precision.  It is arguable
that such a machine would have no "knowledge" at all because it is
not conscious, but only acting in complete accordance with what we
know of conscious behavior.  That supports my argument that consciousness
can be seen as having no adaptive or functional advantage, because
although consciousness -> conscious behavior, the converse is not
true.

Greg Stevens

stevens@prodigal.psych.rochester.edu

