From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!yale!cs.yale.edu!mcdermott-drew Thu Jan 16 17:19:48 EST 1992
Article 2655 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!yale!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Subject: Re: Virtual Person? (was re: Searle and the Chinese Room)
Message-ID: <1992Jan10.213635.1884@cs.yale.edu>
Summary: Searle's primary fallacy 
Keywords: personal identity, searle
Sender: news@cs.yale.edu (Usenet News)
Nntp-Posting-Host: atlantis.ai.cs.yale.edu
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
References: <!!5q-0+@rpi.edu> <335@tdatirv.UUCP> <5894@skye.ed.ac.uk>
Date: Fri, 10 Jan 1992 21:36:35 GMT
Lines: 48

  In article <5894@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
  >In article <335@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
  >
  >>Really, I see no reason to deny the Chinese Room any of the attributes
  >>of personhood.
  >
  >But what is your reason for thinking it does have the attributes
  >of personhood?  Its behavior, just as Searle pointed out when
  >accusing the system-reply of question begging.
  >
  >>I consider semantics to be a necessary precondition to producing a convincing,
  >>unrestricted dialog with human beings.
  >
  >But again, this is assuming what's to be proved.

Mr. Dalton has got it exactly turned inside out, as Daryl
McCullough explains very clearly in his reply:

  There is some confusion as to what exactly Searle thinks he is doing.
  ...
  If Searle claims that he can show that Strong AI is
  nonsense, he has to show that assuming Strong AI leads to a
  contradiction (or at least to an absurdity), and he hasn't done so. By
  claiming that the systems reply is begging the question, Searle is
  essentially saying "Only someone who already believes in Strong AI
  would believe in the systems reply". So what? If Searle believes he
  can show that Strong AI is nonsense, then he can certainly show that
  Strong AI plus the Systems Reply is nonsense. However, it is circular
  reasoning on the part of Searle if his argument is:

       1. Strong AI is nonsense, because the Systems Reply is nonsense.
       2. The Systems Reply is nonsense because it depends on Strong AI,
          which is nonsense.
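
One way to make the circularity explicit (the schematic below is an
editorial sketch, not McCullough's own notation): write SAI for
"Strong AI is correct" and SR for "the Systems Reply is sound".
Searle's two steps then have the form

     (1)  not-SR   implies   not-SAI
     (2)  not-SAI  implies   not-SR

Both conditionals are satisfied whether SAI and SR both hold or both
fail, so together they establish neither not-SAI nor not-SR; an
independent argument against at least one of the two is still needed.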

To put it another way: Searle wishes to show that it is absurd to
believe in Strong AI.  He does so by assuming Strong AI is true and
then drawing all sorts of conclusions that don't actually follow.
Having set up this smokescreen, he then turns around and says,
"What's your evidence for Strong AI?"  By this time the gullible have
forgotten that *he assumed it was true* to begin with.

                                             -- Drew McDermott
