From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!ccu.umanitoba.ca!access.usask.ca!alberta!ubc-cs!uw-beaver!micro-heart-of-gold.mit.edu!mintaka.lcs.mit.edu!yale!yale.edu!jvnc.net!nuscc!maclane!smoliar Wed Feb  5 11:57:09 EST 1992
Article 3494 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!ccu.umanitoba.ca!access.usask.ca!alberta!ubc-cs!uw-beaver!micro-heart-of-gold.mit.edu!mintaka.lcs.mit.edu!yale!yale.edu!jvnc.net!nuscc!maclane!smoliar
From: smoliar@maclane.iss.nus.sg (stephen smoliar)
Newsgroups: comp.ai.philosophy
Subject: Re: Strong AI and panpsychism
Message-ID: <1992Feb5.005813.6383@nuscc.nus.sg>
Date: 5 Feb 92 00:58:13 GMT
References: <1992Jan31.193524.28969@psych.toronto.edu> <1992Jan31.233453.7625@news.media.mit.edu> <1992Feb3.113723.2519@arizona.edu>
Sender: usenet@nuscc.nus.sg
Reply-To: smoliar@iss.nus.sg (stephen smoliar)
Distribution: world,local
Organization: Institute of Systems Science, NUS, Singapore
Lines: 35

In article <1992Feb3.113723.2519@arizona.edu> bill@NSMA.AriZonA.EdU (Bill
Skaggs) writes:
>
>  Consider an arbitrary rock, and an arbitrary finite state
>automaton.  There exists a mapping from vibrational states
>of the rock to states of the FSA which preserves the state
>transition function of the FSA.  (The mapping is probably
>time-dependent, but so what?)  Under this mapping, the rock
>is performing the same computation as the FSA.
>
>  Therefore, if an FSA can be conscious, and consciousness is
>merely a matter of performing the right sort of computation,
>then a rock can be conscious.
>
>  What's wrong with this reasoning?
>
I would like to try to answer this a bit more simply than David did.  Let us
start by choosing an arbitrary rock;  call it R.  Then there is some set of
finite state automata which model the vibrational states of R.  Call that
set A(R).  The "arbitrary" finite state automaton you choose has to be a
member of this set.

Now consider some arbitrary entity which we are willing to agree is conscious.
Let us call that entity John_Searle (just so we have symbols for everything).
Then we may assume, for the sake of argument, that there is some set of finite
state automata which model the behavior of John_Searle;  and we can call that
set A(John_Searle).  The problem is that there is no reason to assume that the
sets A(R) and A(John_Searle) have a non-empty intersection, which means that
you cannot assume that anything about the behavior of ANY of the members of
A(R) will have anything to do with consciousness.
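
The construction Skaggs alludes to can be made concrete with a small sketch (all names and data here are hypothetical, chosen only for illustration).  Given the run of an arbitrary FSA and any sequence of distinct "vibrational states" of a rock, one can always read off a mapping that sends the rock's state at time t to the FSA's state at time t.  The sketch below shows why such a mapping is trivially time-dependent: it is built after the fact from the two trajectories, rather than from anything the rock is doing.

```python
def fsa_run(transition, start, inputs):
    """Trace the sequence of states an FSA passes through on an input string."""
    states = [start]
    for symbol in inputs:
        states.append(transition[(states[-1], symbol)])
    return states

# An arbitrary two-state FSA over the input alphabet {0, 1}.
transition = {
    ("A", 0): "A", ("A", 1): "B",
    ("B", 0): "B", ("B", 1): "A",
}
fsa_states = fsa_run(transition, "A", [1, 0, 1, 1])

# "Vibrational states" of the rock: any sequence of distinct labels will do.
rock_states = ["r0", "r1", "r2", "r3", "r4"]

# The time-dependent mapping: rock state at time t -> FSA state at time t.
mapping = {r: s for r, s in zip(rock_states, fsa_states)}

# Under this mapping the rock's trajectory reproduces the FSA's run, but
# only because the mapping was constructed from the two runs themselves.
assert [mapping[r] for r in rock_states] == fsa_states
```

The point of the sketch is that the mapping carries all the computational structure; the rock contributes nothing but a supply of distinguishable states.  Which is another way of saying that membership in A(R) is far too cheap a property to underwrite consciousness.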
-- 
Stephen W. Smoliar; Institute of Systems Science
National University of Singapore; Heng Mui Keng Terrace
Kent Ridge, SINGAPORE 0511
Internet:  smoliar@iss.nus.sg