From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!elroy.jpl.nasa.gov!usc!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!Sirius.dfn.de!math.fu-berlin.de!uniol!unid Thu Jan 16 17:22:09 EST 1992
Article 2758 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!elroy.jpl.nasa.gov!usc!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!Sirius.dfn.de!math.fu-berlin.de!uniol!unido!mcsun!uknet!edcastle!aifh!bhw
From: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <1992Jan15.185342.11589@aifh.ed.ac.uk>
Date: 15 Jan 92 18:53:42 GMT
References: <1992Jan14.015806.23985@oracorp.com> <5982@skye.ed.ac.uk>
Reply-To: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Organization: Dept AI, Edinburgh University, Scotland
Lines: 34

>What I am saying in this thread is that
>Searle thinks the behavior is not possible without understanding.
>Maybe I'm wrong, of course, and a relevant quote from Searle
>would show that I am.  I will also look for such direct evidence
>on this point and let you know if I find it.

From the reprint of Searle's "Minds, Brains and Programs" in
"The Mind's I", Hofstadter & Dennett, 1981:

p360 "But precisely one of the points at issue is the adequacy of the
Turing test. The example [the Chinese Room] shows that there could be
two "systems", both of which pass the Turing test, but only one of which
understands..."

p371 "we are tempted to postulate mental states in the computer similar
to human mental states. But once we see that it is both conceptually and
empirically possible for a system to have human capacities in some realm
without having any intentionality at all, we should be able to overcome
this impulse ... in this paper, I have tried to show that a system could
have input and output capabilities that duplicated those of a native
Chinese speaker, and still not understand Chinese, regardless of how it
was programmed"

I think these quotes are fair representations of Searle's position in
this paper: Searle is committed to the belief that the Chinese room
could exist, not merely proposing it for the sake of argument.
Otherwise he has no basis for rejecting the Turing test, which he
wants to do.

Oddly enough, the second quote comes from the paragraph where he accuses
the Turing test of being "unashamedly behaviouristic and
operationalistic". "Postulating mental states" in computers or in humans
is hardly a typical activity for behaviourists.

BW