From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!uakari.primate.wisc.edu!ames!olivea!uunet!mcsun!uknet!edcastle!aifh!bhw Mon Mar  9 18:35:48 EST 1992
Article 4315 of comp.ai.philosophy:
From: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Newsgroups: comp.ai.philosophy
Subject: Re: Monkey Room
Message-ID: <1992Mar6.172309.24077@aifh.ed.ac.uk>
Date: 6 Mar 92 17:23:09 GMT
References: <68421@netnews.upenn.edu> <1992Mar4.210902.28435@psych.toronto.edu> <1992Mar5.165203.383@aifh.ed.ac.uk> <1992Mar5.233543.28060@psych.toronto.edu>
Reply-To: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Organization: Dept AI, Edinburgh University, Scotland
Lines: 51

In article <1992Mar5.233543.28060@psych.toronto.edu> 
michael@psych.toronto.edu (Michael Gemar) writes:

>But, as far as I can see, there is still no widely accepted criterion as
>to *what* the Turing Test even is. 

Has your short-sightedness prevented you from reading Turing's article?
One could say that by definition the Turing Test is the Test that Turing
described, quite clearly, in the article "Computing Machinery and
Intelligence", Mind, 1950.

> For example, how long should it last?

Turing doesn't give a simple time limit after which you _must_ conclude
that the machine is intelligent. He predicts that it will be acceptable
to talk about "thinking machines" when we have machines whose responses
can't be distinguished from human responses more than 70% of the time
after five minutes' conversation.
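Turing's prediction can be read as a simple numeric criterion. A minimal sketch (the function name and the reading of the 70% figure as a pass threshold are my interpretation, not Turing's wording):

```python
def meets_turing_prediction(correct_identifications, trials):
    """Interpretation of Turing's 1950 prediction as a criterion:
    after five minutes of questioning, the interrogator makes the
    right machine/human identification no more than 70% of the time.
    (Name and threshold-reading are illustrative assumptions.)"""
    if trials <= 0:
        raise ValueError("need at least one trial")
    return correct_identifications / trials <= 0.70

# e.g. 13 correct identifications in 20 five-minute sessions is a
# 65% identification rate, within Turing's predicted bound;
# 16 in 20 (80%) is not.
```

This is only a gloss on the prediction, of course; Turing offers it as a forecast of when the phrase "thinking machines" will be acceptable, not as a formal pass/fail rule.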

>Are there any restrictions on the topics discussed?  

Turing makes it clear that there should be no restrictions on the topics
discussed, describing the test as "suitable for introducing almost any
one of the fields of human endeavour that we wish to include [i.e. those
involving intellectual rather than physical capacities]".

> Are there any restrictions on the way information is exchanged? 

Turing suggests teletypes as ideal - the point is to avoid any physical
cues such as appearance, handwriting, tone of voice, etc.

> This seems to me to be all the more
>relevant given the report on the net a few weeks back of the Turing "contest"
>in which some people identified a program as human.  The complaints then
>were that the topics were restricted, but there was no justification that
>*I* can remember as to why this is not allowed, except ad hoc justifications.

Well, perhaps you can now see that the objections were not ad hoc.

>I agree that one shouldn't expect a scientific test to be completely
>infallible, and I agree that the Monkey Room is highly improbable.  However,
>I believe that at least *some* of the people in this forum have made the
>assumption that, if the appropriate responses are given, the *only* way
>in which this could happen is by intelligence.

Is this belief based on the same sort of evidence as your beliefs about
the Turing test (i.e. none)? _Who_ has made this assumption? Or have they 
assumed that the only _reasonable_ way this could happen is by
intelligence? In which case, the monkey room example is pointless.

BW


