From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!ogicse!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny Mon Dec 16 11:02:01 EST 1991
Article 2133 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2133 sci.philosophy.tech:1417 sci.philosophy.meta:847
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!ogicse!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech,sci.philosophy.meta
Subject: Virtual Person? (was re: Searle and the Chinese Room)
Summary: no such thing
Keywords: personal identity
Message-ID: <1991Dec15.023122.6582@husc3.harvard.edu>
Date: 15 Dec 91 07:31:21 GMT
References: <1991Dec11.170157.27053@cs.yale.edu> <1991Dec11.203452.9419@psych.toronto.edu> <1991Dec13.204324.27948@cs.yale.edu>
Organization: Dept. of Math, Harvard Univ.
Lines: 94
Nntp-Posting-Host: zariski.harvard.edu

In article <1991Dec13.204324.27948@cs.yale.edu> 
mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:

>In article <1991Dec11.203452.9419@psych.toronto.edu> 
>michael@psych.toronto.edu (Michael Gemar) writes:

MG:
>>>>Searle's point of the Chinese Room example is to show that
>>>>while you may know what an English symbol refers to, you do *not* know
>>>>what a Chinese symbol refers to, despite the fact that the behaviour is
>>>>the same, that the symbols are used appropriately in both cases.

DMD:
>>>At the risk of repeating myself (and McCarthy), it matters not whether
>>>*I* know, so long as the virtual person instantiated by the program knows.

MG:
>>But how do *they* know?  And what the heck do you *mean* by "virtual person"?
>>And why can't *I*, by performing the appropriate operations, instantiate
>>one myself?

>>- michael

DMD:
>Zeleny made a similar objection to McCarthy's original article, and
>there was not enough clarification, so let me try my hand.  I can see
>cases where individuating virtual persons would be difficult, but in
>the straightforward cases it's not hard at all.  Suppose that Searle's
>hypothetical Chinese understander is written.  We run it on a
>computer, and have a conversation with what appears to be a
>Chinese-speaking person.  Now suppose we run the same program twice,
>simultaneously, using the same I/O stream.  We'll initialize the
>databases of the two copies differently, so they will seem slightly
>different.  We can make sure they answer to different names.
>So we could have amusing hypothetical dialogues like:
>
>  Human: Hi, guys.
>  Yin: Hi, Drew
>  Yang: How have you been?
>  Human: Fine.  Hey, Yin, I have a joke you'll like, and that prude Yang
>     probably won't even get it.
>  Yang: Watch it
>  Yin: Don't listen to her -- go on.
>  Human: A traveling salesman went into a restaurant and ordered a ....
>
>And so forth.
>
>What I mean by virtual persons in such a straightforward case is simply
>the processes implementing Yin and Yang (and not any other process on
>the same machine, such as the X-window server, which doesn't have all
>the neat person-implementing properties that Searle is hypothesizing).
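[The two-instances setup McDermott describes — one program, two simultaneously running copies seeded with different databases and answering to different names, sharing a single I/O stream — could be sketched as follows. This is only an illustrative toy; the class name, the canned replies, and the dispatch loop are all hypothetical, and nothing here bears on whether such processes understand anything.]

```python
# A minimal sketch of McDermott's setup: the same program instantiated
# twice with differently initialized "databases", sharing one I/O stream.
# All names and replies below are hypothetical illustrations.

class Understander:
    """One running instance of the hypothetical Chinese-understander program."""

    def __init__(self, name, database):
        self.name = name                 # each instance answers to its own name
        self.database = dict(database)   # per-instance state, seeded differently

    def respond(self, utterance):
        # Look the utterance up in this instance's own state; the two
        # copies diverge only because their databases were seeded apart.
        return self.database.get(utterance, "...")

# Two copies of the same program, initialized differently:
yin = Understander("Yin", {"Hi, guys.": "Hi, Drew"})
yang = Understander("Yang", {"Hi, guys.": "How have you been?"})

# A single shared input stream, dispatched to both instances in turn:
for line in ["Hi, guys."]:
    for speaker in (yin, yang):
        print(f"{speaker.name}: {speaker.respond(line)}")
```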

Although this is not the first time you take my name in vain, I have so far
abstained from addressing your arguments, for the simple reason that I
found your mistakes far less amusing or instructive than those made by
others.  In particular, the common misconception that extensional semantic
functions (never mind intensional things like belief functioning) can be
explained purely by reference to the syntax of the elements manipulated by
the program could be remedied by an elementary course in model theory, and
won't concern me here.  This mistake, as you note yourself, has been made
earlier by John McCarthy, who in contradistinction to you seems to be able
to appreciate issues of formal logic at least to the extent of avoiding
facile blunders of the above sort.  Philosophy of mind is a difficult
matter; unlike semantic theory, it doesn't lend itself to conclusive
resolution; however, some of its arguments are relevant to your claims.

Since I have other responsibilities in addition to doing philosophical
propaganda, I'll limit myself to some brief remarks, referring interested
parties to the articles on personal identity and memory in the "Encyclopedia
of Philosophy" and the bibliography contained therein.  In short, personal
identity presupposes continuity of memory (this is neither a necessary nor
a sufficient condition thereof, but rather the best we can do after two and
a half millennia of philosophical inquiry), as well as first-person access
thereinto.  Another criterion of personal identity consists in the felt
continuity of volition, equally dependent on a first-person view.  In other
words, there exists no known way to individuate persons without first
granting their personhood, an assumption that would beg the question of
artificial intelligence.  Moreover, the conative criterion relies on the
assumption of free agency, which is often considered untenable by AI
theorists.  In other words, you are in no position to posit virtual
personhood.

MG:
>>And why can't *I*, by performing the appropriate operations, instantiate
>>one myself?

DMD:
>You can!  You can even instantiate two or more at the same time.

Michael could instantiate a playacting persona with relative ease; yet
instantiating a genuine person could only be achieved at the cost of
fragmenting the host personality.  Nice try, but no cigar.

>                                             -- Drew McDermott


