From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!cs.yale.edu!mcdermott-drew Thu Jan 16 17:22:13 EST 1992
Article 2764 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Subject: Re: Virtual Person?
Message-ID: <1992Jan16.040733.23764@cs.yale.edu>
Summary: Searle did so assume Strong AI was true
Keywords: personal identity, searle
Sender: news@cs.yale.edu (Usenet News)
Nntp-Posting-Host: aden.ai.cs.yale.edu
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
References: <5894@skye.ed.ac.uk> <1992Jan10.213635.1884@cs.yale.edu> <5965@skye.ed.ac.uk>
Date: Thu, 16 Jan 1992 04:07:33 GMT
Lines: 76

  In article <5965@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
  >[Daryl McCullough had written:]
  >>  >>  There is some confusion as to what exactly Searle thinks he is doing.
  >>  ...
  >>  If Searle claims that he can show that Strong AI is
  >>  nonsense, he has to show that assuming Strong AI leads to a
  >>  contradiction (or at least to an absurdity), and he hasn't done so. By
  >>  claiming that the systems reply is begging the question, Searle is
  >>  essentially saying "Only someone who already believes in Strong AI
  >>  would believe in the systems reply". So what? If Searle believes he
  >>  can show that Strong AI is nonsense, then he can certainly show that
  >>  Strong AI plus the Systems Reply is nonsense. However, it is circular
  >>  reasoning on the part of Searle if his argument is:
  >>
  >>       1. Strong AI is nonsense, because the Systems Reply is nonsense.
  >>       2. The Systems Reply is nonsense because it depends on Strong AI,
  >>          which is nonsense.
  >
  >Yes, if that _were_ Searle's argument, then he would be employing
  >circular reasoning.  So what?
  >
  [And I (DVM) had put it this way:]
  >>Searle wishes to show that it is absurd to
  >>believe in Strong AI.  He does so by assuming Strong AI is true and
  >>then drawing all sorts of conclusions that don't actually follow.
  >>Having set up this smokescreen, he then turns around and says, What's
  >>your evidence for Strong AI?  By this time the gullible have forgotten
  >>that *he assumed it was true* to begin with.
  >
  [And finally back to Dalton:]
  >I find it difficult to recognize Searle's Chinese Room argument
  >in your description.  He starts by assuming that it's possible to
  >get the right behavior by running the right program.  But this is
  >just to make room for testing the claims of Strong AI.

It is devilishly hard to extract Searle's argument in a clear form.
But if we take a look at the Boden reprint of Searle's BBS article, we
find this:

  "...According to strong AI, ... the appropriately programmed
computer really *is* a mind, in the sense that computers given the
right programs can be literally said to *understand* and have other
cognitive states. [para. 1]
   ...
   When I hereafter refer to AI, I have in mind the strong
version....[para. 2]
   ...
   One way to test any theory of the mind is to ask oneself what it
would be like if my mind actually worked on the principles that the
theory says all minds work on. [para. 6]"

After which he attempts to draw conclusions that he thinks we will
agree (a) follow from his assumption and (b) are false.  *Surely*
this is an attempt at a reductio ad absurdum from the assumption that
Strong AI is true.
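
[To make the form explicit: on this reading Searle's move is plain
modus tollens.  Here is a minimal sketch in Lean, where StrongAI and
C are placeholder propositions of my own choosing, not anything
lifted from Searle's text:

  -- Sketch of the reductio schema: if Strong AI entails some
  -- conclusion C, and C is false, then Strong AI is false.
  theorem reductio (StrongAI C : Prop)
      (entails : StrongAI → C) (notC : ¬C) : ¬StrongAI :=
    fun h => notC (entails h)

The burden of the Room, then, is to supply a C -- something like
"the man in the room understands Chinese" -- that we will agree
(a) follows and (b) is false.]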

[The water is muddied by the fact that paragraph 4 of the paper seems
to impute to the AI community a belief that Roger Schank et al.'s SAM
program from the mid-seventies is a complete theory of the mind.  But
this notion is surely not what he is trying to refute, or what anyone
is trying to defend.]

Dalton claims the structure of Searle's argument is 

  (a) Assume that it's possible to
  get the right behavior by running the right program.
  (b) ???
  (c) ...

I don't see how to continue, unless we shift the meaning of "strong
AI" from his original statement to some version of Turing Syndrome, a
hypothetical malady in which the sufferer swears undying faithfulness
to the idea that passing Turing's Test is criterial evidence for
intelligence.  

                                             -- Drew McDermott


