From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!psuvax1!rutgers!mcnc!aurs01!throop Tue Jan 21 09:26:42 EST 1992
Article 2837 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!psuvax1!rutgers!mcnc!aurs01!throop
From: throop@aurs01.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Message-ID: <60287@aurs01.UUCP>
Date: 17 Jan 92 17:28:46 GMT
References: <1992Jan16.040733.23764@cs.yale.edu> <1992Jan16.054723.16068@bronze.ucs.indiana.edu> <1992Jan16.194359.1160@cs.yale.edu> <1992Jan16.204346.903@bronze.ucs.indiana.edu>
Sender: news@aurs01.UUCP
Lines: 57

> chalmers@bronze.ucs.indiana.edu (David Chalmers)
>> mcdermott-drew@CS.YALE.EDU (Drew McDermott)

>>  Assume: There exists a program P such that a computational process
>>  carrying out P would constitute a mind.  ("Strong AI")
>>  Assume: A human being could play the role of computer, and carry out
>>  N such computational processes (albeit slowly).  [...]
>>  Then: N new minds would come into existence (by assumption 1)
>>  But: The human wouldn't report acquiring N additional minds.  In
>>  particular, one of the predicted minds might understand Chinese, while
>>  the human might not.  Which is a contradiction.

> I'd replace your "but" with:
> But: 1) The human wouldn't acquire additional minds.
>     2) The "system" *certainly* wouldn't acquire additional minds (ha ha,
>        what a ridiculous thought).

I think Drew was summarizing Searle's argument while taking into
account the systems reply and Searle's rebuttal to it.

I'll give these names to the points in the sequence of arguments
(my apologies if more standard names already exist):

   - The Chinese Room (the basic Searle argument involving a human 
                       and a book of rules)
   - The Systems Reply (saying that while the human doesn't understand,
                        the system of human-plus-rulebook does)
   - The Memorization Ploy (Searle's counter to the systems reply,
                            involving the memorization of the rulebook)
   - The Virtual Person Gambit (upon which I'll comment...)

Thus, I think David's "improvement" isn't really one: Drew was
already encompassing "the system wouldn't acquire additional minds"
because he's up to The Virtual Person Gambit, while David's improvement
only applies to The Systems Reply.

Searle (presumably) thinks that a single human body can't support
multiple minds, a point which is needed to really make a contradiction
and complete Drew's summary of his argument.

But there are counterexamples.  Consider Siamese twins.  Or consider
epilepsy patients "cured" by separating brain hemispheres.  In these
cases there is concrete and persuasive evidence of multiple minds.

The objection that these cases involve physically segregated brains
is relevant, but consider also multiple personality disorder.  The
claim can be made that MPD sufferers "really" have one mind, but this
is certainly not obvious, and there is (as I understand it) even
physical evidence (involving PET scans of brain function) that
supports the genuine multiple-mind position.

In any event, I myself am not convinced by Searle's Memorization Ploy,
mainly because of the Virtual Person Gambit, by which I mean that the
human may well acquire the additional minds, each mind's internal state
essentially inaccessible to the others.

Wayne Throop       ...!mcnc!aurgate!throop


