From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Mar  9 18:35:11 EST 1992
Article 4261 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Monkey Room
Organization: Department of Psychology, University of Toronto
References: <9203031955.AA11770@ucbvax.Berkeley.EDU> <68421@netnews.upenn.edu>
Message-ID: <1992Mar4.210902.28435@psych.toronto.edu>
Date: Wed, 4 Mar 1992 21:09:02 GMT

In article <68421@netnews.upenn.edu> weemba@libra.wistar.upenn.edu (Matthew P Wiener) writes:
>In article <9203031955.AA11770@ucbvax.Berkeley.EDU>, GUNTHER@WMAVM7 ("Mike Gunther") writes:
>>Suppose we are given a sealed room + teletype setup which has passed
>>a Turing Test.  We open the room and find only a monkey, hitting
>>teletype keys at random.  It just so happens that the monkey's
>>keystrokes produced "intelligent" conversation up until the time the
>>room was opened.
>
>>This thought-experiment seems to contradict several ideas-- the Turing
>>Test, behaviorism, functionalism, and the Systems Reply for starters.
>>Any comments?
>
>Yes.  It also contradicts reality.
>
>In other words, so what?

Well, it shows that the Turing Test is not infallible.  This in itself is
a useful reminder for folks here.  It also shows that the arguments posted
here a while ago against the "Turing Test" results, in which some
laypeople mistook programs for humans, are at best ad hoc.  There is no
*clear* way to conduct a Turing Test, and no way that will yield
perfect results.

- michael
