From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!uunet!tarpit!cs.ucf.edu!news Tue Apr  7 23:23:36 EDT 1992
Article 4859 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!uunet!tarpit!cs.ucf.edu!news
From: clarke@acme.ucf.edu (Thomas Clarke)
Newsgroups: comp.ai.philosophy
Subject: Re: The Chinese Room (or Number Five's Alive)
Message-ID: <1992Mar31.182905.25732@cs.ucf.edu>
Date: 31 Mar 92 18:29:05 GMT
Article-I.D.: cs.1992Mar31.182905.25732
References: <7341@uqcspe.cs.uq.oz.au>
Sender: news@cs.ucf.edu (News system)
Organization: University of Central Florida
Lines: 35

In article <7341@uqcspe.cs.uq.oz.au> matthew@cs.uq.oz.au (Matthew McDonald)  
writes:
|I'd like to ask Searle's co-religionists a different 
| (although fairly old) question.
| ...
| If you honestly believe Searle's story about the chinese room,
| how would you know that the artificial people didn't have feelings too?
| 
| 	To say that things that act like people aren't necessarily
| people is (essentially) solipsism. Can anyone who has philosophical
| objections to strong AI point out to me why their position is different
| to solipsism?
| 
I believe Searle would not deny the possibility of artificial people with  
feelings.  Rather, if it pleased you to dissect your personal slave, you would  
find not an nth-generation digital computer but some sort of analog goop.   
Searle would have it that the analog goop would have to be organic, just like  
the organic sponge in your head.

I am inclined to agree with Searle, but think that he goes too far in limiting  
things to organic goop.  The slave would contain some sort of structure whose  
detailed method of functioning would not be knowable.  That is, it would not  
be like a machine or a simple software system, where the result of changing a  
particular part is predictable.  There would be no way to improve performance  
by upping the clock rate or adding more memory, because the clock and the  
memory could not be explicitly identified.  Were the AI implemented entirely  
in software, then the heap or the device drivers would be unidentifiable.

If such parts are identifiable, then, I suspect, the system could not be  
visibly intelligent.  That is, of course, the problem with the Chinese room  
argument.  By Searle's construction the parts are visible, so the intelligence  
occurs on a time scale (geological?) that matters only to FSA (finite-state  
automaton) rocks.

Maybe something like Isaac Asimov's platinum-iridium sponge positronic brains.   
See his stories for a literary consideration of the issues you raise.


