From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!gatech!mcnc!ecsgate!lrc.edu!lehman_ds Tue Jan 28 12:15:23 EST 1992
Article 2984 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!gatech!mcnc!ecsgate!lrc.edu!lehman_ds
From: lehman_ds@lrc.edu
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Message-ID: <1992Jan21.162606.131@lrc.edu>
Date: 21 Jan 92 21:26:05 GMT
References: <1992Jan16.194359.1160@cs.yale.edu>  <6025@skye.ed.ac.uk>
Organization: Lenoir-Rhyne College, Hickory, NC
Lines: 66

In article <6025@skye.ed.ac.uk>, jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
> In article <1992Jan19.211715.9777@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>>In article <1992Jan19.022136.29207@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
>>
>>>But: (1) S wouldn't *report* acquiring an additional mind.  In
>>>particular, the predicted mind might understand Chinese, while the
>>>human might not.
>>
>>I don't think reporting plays any role in Searle's argument at all
>>(Searle makes a big deal about always adopting the first-person, not
>>the third-person perspective).  The first part of the argument is
>>simply that the person wouldn't *understand* Chinese, i.e. have a
>>Chinese-understanding mind, and I think that's non-controversial.
> 
> Hear, hear.
> 
> I sometimes think of Searle's argument like this:
> 
> 1. If strong AI is right, then the  Chinese Room understands
>    Chinese.
> 
> 2. If the Room understands Chinese, it must be because the
>    person in the room understands Chinese.
> 
> 3. But the person doesn't.
> 
> 4. So the room doesn't.
> 
> 5. So Strong AI is wrong.
> 
> Note that most of the argument does not involve an assumption
> that Strong AI is right, only that Strong AI implies that the
> CR would understand Chinese (because it's running the right
> program).
> 
> The systems reply attacks (2).  Searle tries to strengthen (2)
> by saying he could memorize the program.  But Searle also says
> some other things, such as: if the person doesn't understand,
> how can the conjunction of the person and some pieces of paper
> understand? 
> 
>>>Are we ready to send this to the National Bureau of Standards?
>>
>>Actually, I like my original version better.
> 
> So do I.
> 
>>In particular your insistence on phrasing it as a reductio makes it
>>more awkward than necessary.
> 
> I agree.
> 
>> Instead of assuming strong AI as a
>>premise, I think it's nicer (though equivalent, of course), to have
>>"if strong AI, then P" as a definitional premise, show not-P, and
>>conclude that strong AI is false.
> 
> Just so.
> 
> -- jd
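   Jeff's five-step version is just a modus tollens chain, which can be
checked mechanically.  Here is a minimal sketch in Lean 4 (the proposition
names are my own choosing, not anything from the original argument):

```lean
-- Hypothetical propositions standing in for the claims in steps 1-5.
variable (StrongAI RoomUnderstands PersonUnderstands : Prop)

example
    (h1 : StrongAI → RoomUnderstands)          -- step 1: Strong AI implies the Room understands
    (h2 : RoomUnderstands → PersonUnderstands)  -- step 2: the Room understands only if the person does
    (h3 : ¬PersonUnderstands)                   -- step 3: the person doesn't understand
    : ¬StrongAI :=                              -- step 5: so Strong AI is wrong
  fun hAI => h3 (h2 (h1 hAI))                   -- step 4 is the inner composition h2 ∘ h1
```

Note that the derivation only uses (2) in the direction the systems reply
attacks; grant everything else and reject (2), and the conclusion no longer
follows.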
   To say that if the room understands Chinese then the man inside the room
understands Chinese is an outright corruption of logic.  To say that if
I understand English then my liver understands English is completely
ridiculous.
   Drew Lehman
   Lehman_ds@mike.lrc.edu


