From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!udel!cis.udel.edu Sun Dec  1 13:05:26 EST 1991
Article 1628 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!udel!cis.udel.edu
From: lintz@cis.udel.edu (Brian Lintz)
Newsgroups: comp.ai.philosophy
Subject: Re: Chinese Room, from a different perspective
Keywords: ai philosophy searle expert system
Message-ID: <71692@nigel.ee.udel.edu>
Date: 26 Nov 91 17:53:02 GMT
References: <1991Nov11.011527.28514@midway.uchicago.edu> <70105@nigel.ee.udel.edu> <5698@skye.ed.ac.uk>
Sender: usenet@ee.udel.edu
Organization: University of Delaware
Lines: 56
Nntp-Posting-Host: buster.cis.udel.edu

In article <5698@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>
>Searle starts with the premise that the man doesn't know Chinese.
>Then the man gets a bunch of rules that let him answer queries
>in Chinese.  And the man still doesn't know Chinese.  Conclusion:
>the rules didn't give the man the ability to understand Chinese.
>
>The circularity you identify above is simply not there, though
>perhaps some other circularity is.

OK, let's look at some other problems, then.

1. The rules. Is it possible to come up with a set of rules
   that will allow the man to converse effectively in Chinese?
   I really don't think it is.

2. Searle's Chinese room, as stated above, has nothing to do
   with AI. The man is the CPU, and the rules are the program.
   No one in AI would say that if a program were written that
   allowed a computer to understand English while running it,
   then the CPU of that computer, taken in isolation, understands
   English. But the computer plus the program can understand
   English. Searle never addresses this "systems" argument.

Let's look at a different thought experiment. (This isn't mine BTW,
but I'm not sure whom to credit with it.) Say we have a chip that
is the size of, and duplicates the function of, a single neuron.
(If Searle can have Chinese rules, we can have neuron chips.)
We can also initialize the chip to copy an existing neuron in the brain.
Now, say we open up someone's brain and start replacing his neurons
with the chips one by one. The person is conscious and holding a
conversation while this is going on. Since the neurons are
duplicated exactly, there will be no change in the person. Eventually,
every neuron will be replaced. How can you tell when the person ceased
to be intelligent? You can't, so how can you tell he is not intelligent?
If you say that it is because he has artificial neurons, then your
argument is simply that only biological neurons can produce intelligence,
which basically appeals to something mystical about biological neurons.
If you say that at some point during the operation the person will stop
talking or something, your argument is the same: even though the neurons
are exactly duplicated, they don't work right, because there is something
mystical about biological neurons that allows them to produce intelligence.
So can you prove that the man is no longer intelligent without resorting
to mysticism?

The point of this is to show that when you assume something that is
very, very unlikely, like having a book that can give appropriate
answers to Chinese questions, or having an artificial neuron exactly
like a real neuron, you can make a convincing argument for just
about anything, provided no one realizes your assumption is very
unlikely. This is what Searle did, and this is what I just did above.
The difference is that I don't claim the argument above is worth anything,
while Searle and his supporters claim his argument is.

Brian Lintz
lintz@udel.edu


