From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Mar  9 18:35:30 EST 1992
Article 4290 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Monkey Room
Organization: Department of Psychology, University of Toronto
References: <68421@netnews.upenn.edu> <1992Mar4.210902.28435@psych.toronto.edu> <1992Mar5.165203.383@aifh.ed.ac.uk>
Message-ID: <1992Mar5.233543.28060@psych.toronto.edu>
Date: Thu, 5 Mar 1992 23:35:43 GMT

In article <1992Mar5.165203.383@aifh.ed.ac.uk> bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:
>I hope I have the attributions right:
>
>Michael Gemar writes:

  [in response to the Monkey Room example]

>>Well, it shows that the Turing Test is not infallible.  This in itself is
>>a useful reminder for folks here.  It also shows that arguments against
>>the "Turing Test" results that were posted here a while ago, in which
>>some laypeople thought programs were actually human, are at best
>>ad hoc.  There is no *clear* way to conduct a Turing Test, and no way
>>that will yield perfect results.
>
>Can you find a single example, from Turing in 1950 till today, of an AI
>researcher who has claimed that the Turing Test is infallible? Everybody
>knows that it is _in principle_ possible that something could pass the
>Turing test without having intelligence; what people who support the
>test assume is that the _practical_ possibility that this could happen
>is so improbable it can be conveniently disregarded.

But, as far as I can see, there is still no widely accepted criterion as
to *what* the Turing Test even is.  For example, how long should it last?
Are there any restrictions on the topics discussed?  Are there any restrictions
on the way information is exchanged?  This seems to me to be all the more
relevant given the report on the net a few weeks back of the Turing "contest"
in which some people identified a program as human.  The complaints then
were that the topics were restricted, but I can recall no justification,
beyond ad hoc ones, for why restricted topics should be disallowed.  The
same goes for the amount of time spent interacting.

>If you can come up with a reasonably
>probable situation in which something clearly unintelligent (by any
>currently accepted definition) can pass the test, then you will have
>found an important flaw in it. But if you can only come up with
>unbelievably improbable situations (such as a monkey typing randomly and
>through pure coincidence giving consistent answers; or
>a person carrying out by hand (or in their head) the billions of
>instructions of a program that converses in Chinese) then the AI
>researchers don't need to give up their support of the test. How many
>scientific tests can you think of that are completely and utterly
>infallible? 

If you are comparing the Chinese Room to the Incredible Typing Monkeys,
then you have given up on AI, since the Chinese Room actually instantiates
a program, whereas the monkeys' responses are due to pure chance.

I agree that one shouldn't expect a scientific test to be completely
infallible, and I agree that the Monkey Room is highly improbable.  However,
I believe that at least *some* of the people in this forum have made the
assumption that, if the appropriate responses are given, the *only* way
in which this could happen is by intelligence.  My point (and presumably
the point of the Monkey Room poster) is that this is not *necessarily*
the case.

- michael
