From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!mcdermott-drew Thu Jan 16 17:22:30 EST 1992
Article 2795 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Subject: Re: Virtual Person?
Message-ID: <1992Jan16.194359.1160@cs.yale.edu>
Summary: Another try at clarifying Searle
Sender: news@cs.yale.edu (Usenet News)
Nntp-Posting-Host: aden.ai.cs.yale.edu
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
References: <5965@skye.ed.ac.uk> <1992Jan16.040733.23764@cs.yale.edu> <1992Jan16.054723.16068@bronze.ucs.indiana.edu>
Date: Thu, 16 Jan 1992 19:43:59 GMT
Lines: 44

  In article <1992Jan16.054723.16068@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

  >To be fair, I think that no matter what confusing statements Searle
  >makes, there is at least the skeleton of a noncircular argument
  >there, i.e.
  >
  >Premise 1: If strong AI is true, then there exists a program P such that
  >implementing P is sufficient for mentality.
  >
  >Premise 2: Any program P can be implemented Chinese-room style, without
  >being accompanied by mentality.
  >
  >Conclusion: Strong AI is false.
  >
  >Of course, the crucial point is premise 2.  But to be fair again, I don't
  >think that Searle accepts premise 2 only because he thinks strong AI is
  >false.  I think that he thinks the idea that the Chinese room has
  >mentality is independently ridiculous.

Let me try putting this argument in a clearer form, while preserving
your suggested clarification:

  Assume: There exists a program P such that a computational process
carrying out P would constitute a mind.  ("Strong AI")

  Assume: A human being could play the role of computer, and carry out
N such computational processes (albeit slowly).  N might be 1, but it
needn't be.

  Then: N new minds would come into existence (by assumption 1).

  But: The human wouldn't report acquiring N additional minds.  In
particular, one of the predicted minds might understand Chinese, while
the human might not.

  Which is a contradiction.

  Therefore one of our assumptions is wrong.  The second assumption is
uncontroversial, so the first must be at fault.  QED
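The shape of the argument is a plain modus tollens, which can be made
explicit.  Here is a minimal propositional sketch (the proposition names
are mine, chosen for this illustration, not McDermott's or Searle's):

```lean
-- `strongAI` abbreviates assumption 1 (some program P suffices for a mind);
-- `mindsAppear` abbreviates the prediction that N new minds come into
-- existence when the human carries out the N processes.
variable (strongAI mindsAppear : Prop)

-- h1: assumption 1 plus assumption 2 predicts the new minds.
-- h2: the human's report (no Chinese understanding) denies the prediction.
-- Modus tollens then refutes strongAI.
theorem searle_nutshell
    (h1 : strongAI → mindsAppear)
    (h2 : ¬mindsAppear) : ¬strongAI :=
  fun h => h2 (h1 h)
```

Of course, everything contested lives in the premises h1 and h2, not in
the inference step; the sketch only shows that the skeleton is valid.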

Does everyone agree that this is Searle's argument in a nutshell?
Getting this kind of agreement would be an achievement. 

                                             -- Drew McDermott  


