From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Sun Dec  1 13:06:05 EST 1991
Article 1694 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Chinese Room, from a different perspective
Keywords: ai philosophy searle expert system
Message-ID: <5732@skye.ed.ac.uk>
Date: 27 Nov 91 21:26:28 GMT
References: <1991Nov11.011527.28514@midway.uchicago.edu> <70105@nigel.ee.udel.edu> <5698@skye.ed.ac.uk> <71692@nigel.ee.udel.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 103

In article <71692@nigel.ee.udel.edu> lintz@cis.udel.edu (Brian Lintz) writes:
>In article <5698@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>>
>>The circularity you identify above is simply not there, though
>>perhaps some other circularity is.
>
>Ok let's look at some other problems then.
>
>1. The rules. Is it possible to come up with a set of rules
>   that will allow the man to converse in Chinese effectively?
>   I really don't think this is possible.

So much for strong AI of the sort Searle is arguing against.
If it's impossible, then strong AI loses, and Searle's argument
is, I suppose, unnecessary.

>2. Searle's Chinese room, the way it is stated above has nothing
>   to do with AI. The man is the CPU, and the rules are the program.
>   No one in AI would say that if a program was made that allowed a
>   computer to understand English while running the program that the
>   CPU of that computer when isolated understands English. But the
>   computer plus the program can understand English. Searle never
>   addresses this "system" argument.

But he _does_ address it, explicitly, several times.  I suspect
you are looking only at one of his writings in which he does not.

>Let's look at a different thought experiment. (This isn't mine BTW,
>but I'm not sure who to credit with this). Say we have a chip that 
>is the size of, and duplicates the function of a single neuron.
>(If Searle can have Chinese rules, we can have neuron chips).

Searle concludes, in effect, that there's more to the brain, and to
understanding, than can be captured in a computer program.  Hence
the famed "causal powers of the brain".
If the chips duplicate enough about the neurons, then the
required causal powers would remain.

>We can also initialize the chip to copy a current neuron in the brain. 
>Now, say we open up someone's brain and start replacing his neurons
>with the chips one by one. The person is conscious and having a
>conversation at the time this is going on. Since the neurons are
>duplicated exactly there will be no change in the person. Eventually,
>every neuron will be replaced. How can you tell when the person ceased
>to be intelligent? You can't, so how can you tell he is not intelligent?

Who says he isn't?  Searle doesn't have to conclude that the person
would cease to be intelligent.  He could conclude instead that the
artificial neurons are just as good as real ones.  More on this below.

>If you say that it is because he has artificial neurons, then your
>argument is simply that only biological neurons can produce intelligence,
>which basically appeals to something mystical about the biological neurons.

That isn't my argument, but your conclusion would be false if it
were.  Biological neurons are made of different stuff and so have
different physical properties.  There's nothing mystical about that.

>If you say that at some point during the operation the person will stop
>talking or something, your argument is the same: even though the neurons
>are exactly duplicated, they don't work right because there is something
>mystical about the biological neurons that allow them to produce
>intelligence. 

It might well be the case that the person does stop talking.  Perhaps
your artificial neurons aren't good enough.  But it's your example,
and so you get to say what happens.  I take it that you'd say the
person does not stop talking.  Then Searle could argue that either
(1) real understanding has vanished, and if we looked into how these
artificial neurons differed from real ones (and knew a lot more about
brains and humans than we do now) we might be able to figure out why;
or (2) your neurons are so good that the causal powers of the brain
that result in true understanding are retained.  In case (2), Searle's
argument would (if correct) show that the artificial neurons must
have relevant properties that cannot be captured by a program.

If you want to claim there is real understanding with artificial
neurons and that the relevant properties of the artificial neuron
brain can all be captured by a computer program, then you're
begging the question (ie, assuming what was to be proved).

>So can you prove that the man is no longer intelligent without resorting
>to mysticism? 

I'm not in the business of proving that.  I'm just trying to
correct some misunderstandings of Searle's arguments.

>The point of this is to show that when you assume something that is
>very, very unlikely, like having a book that can give appropriate
>answers to Chinese questions, or having an artificial neuron exactly
>like a real neuron, you can make a convincing argument for just
>about anything provided no one realizes your assumption is very
>unlikely. This is what Searle did, and this is what I just did above.
>The difference is I don't claim that the argument above is worth anything
>while Searle and his supporters claim his argument is.

I'd say you have it backwards.  The unlikely assumption is simply the
strong AI that Searle is trying to refute.  The book is the program
that's supposed to be sufficient for understanding.  If you want to
attack Searle's argument at that point, you have to argue that the
book of rules is not a fair representative for a program.

-- jd
