Article 3076 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Message-ID: <1992Jan23.214321.23872@aisb.ed.ac.uk>
Date: 23 Jan 92 21:43:21 GMT
References: <1992Jan19.211715.9777@bronze.ucs.indiana.edu> <6025@skye.ed.ac.uk> <1992Jan22.213820.20784@cs.yale.edu>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 82

In article <1992Jan22.213820.20784@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
>
>  In article <6025@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>
>  >I sometimes think of Searle's argument like this:
>  >
>  >1. If strong AI is right, then the  Chinese Room understands
>  >   Chinese.
>  >
>  >2. If the Room understands Chinese, it must be because the
>  >   person in the room understands Chinese.
>  >
>  >3. But the person doesn't.
>  >
>  >4. So the room doesn't.
>  >
>  >5. So Strong AI is wrong.

>  >The systems reply attacks (2).  Searle tries to strengthen (2)
>  >by saying he could memorize the program.  But Searle also says
>  >some other things, such as: if the person doesn't understand,
>  >how can the conjunction of the person and some pieces of paper
>  >understand? 
>
>The problem is that there is no content to (2) except the intuition
>that Strong AI is wrong.  It's equivalent (is it not?) to saying
>"Rooms aren't the sort of thing that can understand; people are." 

I don't agree that there's no content to (2) except the intuition
that Strong AI is wrong.  The intuition is that in the Chinese Room
it's going to be the person who understands, if anything does, and
not, say, the pieces of paper.  The intuition that computers
can't understand is, to my mind, a different one.  Someone might
not have an opinion one way or the other about computers, or might
even think computers could understand, until they encountered the
Chinese Room argument.  It is a different way of thinking about
the problem, and not, for instance, one that everyone would have
thought of on their own.

We have to bear in mind that some people do find the Chinese Room
argument convincing, and that others who don't nonetheless feel it
requires some thought.  And at least some of the time that's because
they think, as Searle suggests: yes, I could be in the room,
running the program, and I wouldn't understand.

Your view seems to be that Searle has simply tricked people.  He's
disguised his real argument, or at least his assumption that strong
AI is false, and people haven't noticed.  (Or something like that.)
But I don't think that accounts for the way the Chinese Room has
been received.

>It just seems to be blindingly obvious to some people that Strong AI
>is wrong.  Okay.  But embedding this intuition in an argument that
>winds up "proving" the intuition is truly pointless.

I think you're right that people (at least many people) find it
blindingly obvious that, say, the Commonwealth of Massachusetts isn't
going to have a mind just because it might be executing an AI program.
On the other hand, they may not have realized that this was an
implication of Strong AI, or they may be willing to accept that
Massachusetts _could_ have a mind, if the right SF story were told
about it.

But if you try the Chinese Room argument with Massachusetts as the
person in the room, then the intuition might be "well, ok, the
individual citizens of Massachusetts don't know what's going on,
because they each see only part of the picture."  The systems reply
works a lot better, on the intuitive level, in this case.  Indeed, it
might take a Searle to come along and say "but look, one person could
do the same as all of Massachusetts, though it would take longer -- and
that one person could see the whole picture".

So I don't think all these "equivalent" things are really equivalent
in the way they work as arguments (or intuition pumps).

>[The reference to the economy of Bolivia is a semihumorous allusion to
>Ned Block's paper on functionalism and qualia.  Please do not spend a
>lot of time attacking it.]

I didn't even notice it until you mentioned it again.

-- jd


