From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Tue Jan 21 09:27:13 EST 1992
Article 2895 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Virtual Person?
Message-ID: <1992Jan19.211715.9777@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Jan16.194359.1160@cs.yale.edu> <1992Jan16.204346.903@bronze.ucs.indiana.edu> <1992Jan19.022136.29207@cs.yale.edu>
Date: Sun, 19 Jan 92 21:17:15 GMT
Lines: 41

In article <1992Jan19.022136.29207@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:

>But: (1) S wouldn't *report* acquiring an additional mind.  In
>particular, the predicted mind might understand Chinese, while the
>human might not.

I don't think reporting plays any role in Searle's argument at all
(Searle makes a big deal about always adopting the first-person, not
the third-person perspective).  The first part of the argument is
simply that the person wouldn't *understand* Chinese, i.e. have a
Chinese-understanding mind, and I think that's non-controversial.

>     (2) S *certainly* wouldn't *actually* acquire additional minds
>(ha ha, what a ridiculous thought).

If S is meant to be the person, this seems to be true.  Not even the
"virtual person" advocate suggests that the original person suddenly has
two minds; rather, that another mind comes into existence inside their
skull (maybe a subtle distinction, but it's clear enough).

So I'd replace 1 and 2 by (1) The person wouldn't understand Chinese
(acquire a mind); (2) Certainly nothing else would understand Chinese
(acquire a mind).

I put the "understand Chinese" in as those are the terms in which Searle
talks; he doesn't argue much about "acquiring minds", though presumably
it comes to much the same thing.

>Are we ready to send this to the National Bureau of Standards?

Actually, I like my original version better.  This one seems inelegant.
In particular your insistence on phrasing it as a reductio makes it
more awkward than necessary.  Instead of assuming strong AI as a
premise, I think it's nicer (though equivalent, of course) to have
"if strong AI, then P" as a definitional premise, show not-P, and
conclude that strong AI is false.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
