Article 3024 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!elroy.jpl.nasa.gov!swrinde!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Summary: The fallacy stands revealed
Message-ID: <1992Jan22.213820.20784@cs.yale.edu>
Date: 22 Jan 92 21:38:20 GMT
References: <1992Jan19.022136.29207@cs.yale.edu> <1992Jan19.211715.9777@bronze.ucs.indiana.edu> <6025@skye.ed.ac.uk>
Sender: news@cs.yale.edu (Usenet News)
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
Lines: 73
Nntp-Posting-Host: atlantis.ai.cs.yale.edu


  In article <6025@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:

  >I sometimes think of Searle's argument like this:
  >
  >1. If strong AI is right, then the Chinese Room understands
  >   Chinese.
  >
  >2. If the Room understands Chinese, it must be because the
  >   person in the room understands Chinese.
  >
  >3. But the person doesn't.
  >
  >4. So the room doesn't.
  >
  >5. So Strong AI is wrong.
  >
  >Note that most of the argument does not involve an assumption
  >that Strong AI is right, only that Strong AI implies that the
  >CR would understand Chinese (because it's running the right
  >program).

  >The systems reply attacks (2).  Searle tries to strengthen (2)
  >by saying he could memorize the program.  But Searle also says
  >some other things, such as: if the person doesn't understand,
  >how can the conjunction of the person and some pieces of paper
  >understand? 

The problem is that there is no content to (2) except the intuition
that Strong AI is wrong.  It's equivalent (is it not?) to saying
"Rooms aren't the sort of thing that can understand; people are."  But
that's precisely the point at issue.  If Strong AI is right, then
rooms, abacuses, computers, pencils pushed by clerks, and the economy
of Bolivia are all capable of sustaining computational processes that
constitute minds.  And, if we dispense with the external apparatus of
the room and the pieces of paper, we find Strong AI predicting that a
human being, by simulating multiple processes, can sustain multiple
minds.

In other words, the argument has the form

   1. If Strong AI (= there exists a program the execution of which
constitutes a mind) is right, then the Chinese Room understands
Chinese.

   2-4. But Strong AI is wrong.  (Because if the program is run on a
processor that already has a mind, the preexisting mind takes
precedence???  Generating a mind is like getting married, perhaps --
no bigamy allowed??)

   5. So Strong AI is wrong.

It just seems to be blindingly obvious to some people that Strong AI
is wrong.  Okay.  But embedding this intuition in an argument that
winds up "proving" the intuition is truly pointless.
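
To see exactly where the burden falls, the schema can be checked
mechanically.  What follows is a minimal sketch in the Lean proof
assistant (my own rendering, not anything of Searle's or Dalton's;
StrongAI, RoomUnderstands, and PersonUnderstands are opaque
placeholder propositions).  A checker will happily certify the
deduction; what it cannot tell you is whether p2 reports a fact about
rooms or merely restates the conclusion:

   -- Searle's argument in Dalton's five-step form, as a Lean 4
   -- theorem.  All three propositions are opaque placeholders.
   theorem searle_schema
       (StrongAI RoomUnderstands PersonUnderstands : Prop)
       (p1 : StrongAI → RoomUnderstands)          -- (1)
       (p2 : RoomUnderstands → PersonUnderstands) -- (2)
       (p3 : ¬PersonUnderstands)                  -- (3)
       : ¬StrongAI :=                             -- (4), (5)
     fun h => p3 (p2 (p1 h))

   -- What premises (2)-(4) amount to: the bare intuition that the
   -- Room doesn't understand.  That intuition plus (1) is already
   -- the conclusion; the person and the paper drop out entirely.
   theorem intuition_suffices
       (StrongAI RoomUnderstands : Prop)
       (p1 : StrongAI → RoomUnderstands)
       (intuition : ¬RoomUnderstands)
       : ¬StrongAI :=
     fun h => intuition (p1 h)

The second theorem is the circularity made plain: once (2)-(4) are
seen as packaging for the intuition, the argument reduces to deriving
"Strong AI is wrong" from "Strong AI is wrong."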

  [Quoting Chalmers:]
  >> Instead of assuming strong AI as a
  >>premise, I think it's nicer (though equivalent, of course), to have
  >>"if strong AI, then P" as a definitional premise, show not-P, and
  >>conclude that strong AI is false.
  >
  >Just so.
  >
  >-- jd

It is indeed equivalent; the fallacy may be perceived in either version.

[The reference to the economy of Bolivia is a semihumorous allusion to
Ned Block's paper on functionalism and qualia.  Please do not spend a
lot of time attacking it.]

                                             -- Drew McDermott


