From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!uakari.primate.wisc.edu!hemuli.tik.vtt.fi!news.funet.fi!sunic!seunet!mcsun!uknet!edcastle!aiai!jeff Tue Jan 21 09:27:37 EST 1992
Article 2940 of comp.ai.philosophy:
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Message-ID: <6025@skye.ed.ac.uk>
Date: 20 Jan 92 23:49:41 GMT
References: <1992Jan16.194359.1160@cs.yale.edu> <1992Jan16.204346.903@bronze.ucs.indiana.edu> <1992Jan19.022136.29207@cs.yale.edu> <1992Jan19.211715.9777@bronze.ucs.indiana.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 59

In article <1992Jan19.211715.9777@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <1992Jan19.022136.29207@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
>
>>But: (1) S wouldn't *report* acquiring an additional mind.  In
>>particular, the predicted mind might understand Chinese, while the
>>human might not.
>
>I don't think reporting plays any role in Searle's argument at all
>(Searle makes a big deal about always adopting the first-person, not
>the third-person perspective).  The first part of the argument is
>simply that the person wouldn't *understand* Chinese, i.e. have a
>Chinese-understanding mind, and I think that's non-controversial.

Hear, hear.

I sometimes think of Searle's argument like this:

1. If strong AI is right, then the Chinese Room understands
   Chinese.

2. If the Room understands Chinese, it must be because the
   person in the room understands Chinese.

3. But the person doesn't.

4. So the room doesn't.

5. So Strong AI is wrong.

Note that most of the argument does not involve an assumption
that Strong AI is right, only that Strong AI implies that the
CR would understand Chinese (because it's running the right
program).
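Spelled out, steps (1)-(5) are just two chained conditionals discharged
by modus tollens.  A minimal formal sketch (Lean, with the three claims
left as unanalysed propositions -- the names are mine, not Searle's):

```lean
-- Premises (1)-(3) of the argument above; the proof term derives (5),
-- passing through (4) (the room doesn't understand) along the way.
theorem chinese_room
    (StrongAI RoomUnderstands PersonUnderstands : Prop)
    (p1 : StrongAI → RoomUnderstands)           -- (1)
    (p2 : RoomUnderstands → PersonUnderstands)  -- (2)
    (p3 : ¬PersonUnderstands)                   -- (3)
    : ¬StrongAI :=                              -- (5)
  fun h => p3 (p2 (p1 h))   -- (4): (1) and (2) together contradict (3)
```

The formal version makes the point below vivid: the only contested
premise is (2), which is exactly where the systems reply pushes.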

The systems reply attacks (2).  Searle tries to strengthen (2)
by saying he could memorize the program.  But Searle also says
some other things, such as: if the person doesn't understand,
how can the conjunction of the person and some pieces of paper
understand? 

>>Are we ready to send this to the National Bureau of Standards?
>
>Actually, I like my original version better.

So do I.

>In particular your insistence on phrasing it as a reductio makes it
>more awkward than necessary.

I agree.

> Instead of assuming strong AI as a
>premise, I think it's nicer (though equivalent, of course), to have
>"if strong AI, then P" as a definitional premise, show not-P, and
>conclude that strong AI is false.

Just so.

-- jd


