From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!cs.yale.edu!mcdermott-drew Tue Jan 21 09:27:09 EST 1992
Article 2888 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Subject: Re: Virtual Person?
Message-ID: <1992Jan19.022136.29207@cs.yale.edu>
Summary: Yet another attempt at clarifying Searle
Sender: news@cs.yale.edu (Usenet News)
Nntp-Posting-Host: aden.ai.cs.yale.edu
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
References: <1992Jan16.054723.16068@bronze.ucs.indiana.edu> <1992Jan16.194359.1160@cs.yale.edu> <1992Jan16.204346.903@bronze.ucs.indiana.edu>
Date: Sun, 19 Jan 1992 02:21:36 GMT
Lines: 59

  In article <1992Jan16.204346.903@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
  >In article <1992Jan16.194359.1160@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
  >
  >>  Assume: There exists a program P such that a computational process
  >>carrying out P would constitute a mind.  ("Strong AI")
  >>
  >>  Assume: A human being could play the role of computer, and carry out
  >>N such computational processes (albeit slowly).  N might be 1, but it
  >>needn't be.
  >>
  >>  Then: N new minds would come into existence (by assumption 1)
  >>
  >>  But: The human wouldn't report acquiring N additional minds.  In
  >>particular, one of the predicted minds might understand Chinese, while
  >>the human might not.
  >>
  >>  Which is a contradiction.
  >
  >I don't think this is a very clear way to put the argument.  I'd replace
  >your "but" with:
  >
  >But: 1) The human wouldn't acquire additional minds.
  >     2) The "system" *certainly* wouldn't acquire additional minds (ha ha,
  >        what a ridiculous thought).
  >Therefore new minds do not come into existence.
  >Therefore contradiction.
  >

Can I ask for one more revision?  As it stands, the argument contains
the term "the system" at the end, without any prior mention.  To fix
it, I propose that the argument be codified thus:

Assume: There exists a program P such that a computational process
carrying out P on computer system S would constitute a mind.  ("Strong
AI")

Assume: A human being could play the role of computer system S, and carry out
such a computational process (albeit slowly).

Then: A new mind would come into existence (by assumption 1).  (N new
minds if S carried out N processes, but let's not muddy the waters.)

But: (1) S wouldn't *report* acquiring an additional mind.  In
particular, the predicted mind might understand Chinese, while the
human might not.
     (2) S *certainly* wouldn't *actually* acquire an additional mind
(ha ha, what a ridiculous thought).

Contradiction.  And, as before, the problem lies with assumption 1.
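
To make the logical shape explicit: the whole thing is a modus
tollens against assumption 1.  Here is a minimal propositional sketch
in Lean (the names StrongAI and NewMind are placeholders of my own
choosing, standing for "assumptions 1 and 2 hold" and "a new mind
comes into existence"; they are not anyone's committed terminology):

    -- Illustrative sketch only; the propositions are placeholders.
    example (StrongAI NewMind : Prop)
        (h1 : StrongAI → NewMind)   -- Assumptions 1 and 2: running P yields a new mind
        (h2 : ¬ NewMind)            -- But: no additional mind is in fact acquired
        : ¬ StrongAI :=             -- So: reject assumption 1
      fun hs => h2 (h1 hs)

Nothing deep is going on in the sketch; it just records that the
contradiction lands squarely on assumption 1.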

Since the human and S are the same entity, there is no point in
saying "S doesn't have a mind" twice.  If we're going to bifurcate
into point (1) and point (2) at all, the only distinction is between
S claiming to understand Chinese and S being the substrate
for a virtual person that does understand Chinese.

Are we ready to send this to the National Bureau of Standards?

                                             -- Drew McDermott


