From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!jvnc.net!darwin.sura.net!haven.umd.edu!uunet!mcsun!uknet!edcastle!aifh!bhw Mon Mar  9 18:35:23 EST 1992
Article 4279 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!jvnc.net!darwin.sura.net!haven.umd.edu!uunet!mcsun!uknet!edcastle!aifh!bhw
From: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Newsgroups: comp.ai.philosophy
Subject: Re: Monkey Room
Message-ID: <1992Mar5.165203.383@aifh.ed.ac.uk>
Date: 5 Mar 92 16:52:03 GMT
References: <9203031955.AA11770@ucbvax.Berkeley.EDU> <68421@netnews.upenn.edu> <1992Mar4.210902.28435@psych.toronto.edu>
Reply-To: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Organization: Dept AI, Edinburgh University, Scotland
Lines: 47

I hope I have the attributions right:

Mike Gunther writes:
>>>Suppose we are given a sealed room + teletype setup which has passed
>>>a Turing Test.  We open the room and find only a monkey, hitting
>>>teletype keys at random.  It just so happens that the monkey's
>>>keystrokes produced "intelligent" conversation up until the time the
>>>room was opened.
>>>
>>>This thought-experiment seems to contradict several ideas-- the Turing
>>>Test, behaviorism, functionalism, and the Systems Reply for starters.
>>>Any comments?

Matthew P Wiener writes:
>>Yes.  It also contradicts reality.
>>
>>In other words, so what?

Michael Gemar writes:
>
>Well, it shows that the Turing Test is not infallible.  This in itself is
>a useful reminder for folks here.  It also shows that arguments against
>the "Turing Test" results that were posted here a while ago, in which
>some laypeople thought programs were actually computers, are at best
>ad hoc.  There is no *clear* way to conduct a Turing Test, and no way
>that will yield perfect results.

Can you find a single example, from Turing in 1950 till today, of an AI
researcher who has claimed that the Turing Test is infallible? Everybody
knows that it is _in principle_ possible that something could pass the
Turing test without having intelligence; what people who support the
test assume is that the _practical_ possibility that this could happen
is so improbable it can be conveniently disregarded.

If you can come up with a reasonably probable situation in which
something clearly unintelligent (by any currently accepted definition)
can pass the test, then you will have found an important flaw in it. But
if you can only come up with
unbelievably improbable situations (such as a monkey typing randomly and
through pure coincidence giving consistent answers; or
a person carrying out by hand (or in their head) the billions of
instructions of a program that converses in Chinese) then the AI
researchers don't need to give up their support of the test. How many
scientific tests can you think of that are completely and utterly
infallible? 
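Just to put rough numbers on the improbability (a back-of-the-envelope
sketch; the 50-key teletype and 1000-keystroke conversation are my own
illustrative assumptions, not figures from anyone's argument):

```python
import math

# Hypothetical figures: a teletype with 50 keys, each equally likely,
# and a Turing-test conversation of 1000 keystrokes.
keys = 50
strokes = 1000

# Probability that random typing reproduces one particular
# 1000-keystroke transcript is (1/keys) ** strokes.
# Work in log10 to avoid floating-point underflow.
log10_p = -strokes * math.log10(keys)
print(f"probability ~ 10^{log10_p:.0f}")  # prints: probability ~ 10^-1699
```

Even granting that millions of different transcripts would count as
"consistent answers", that allowance shaves only a handful of orders of
magnitude off an exponent of about -1699, which is rather the point.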

BW
