From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!ira.uka.de!gmd.de!fischer Mon Mar  9 18:34:56 EST 1992
Article 4235 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!ira.uka.de!gmd.de!fischer
From: fischer@gmd.de (Mark Sebastian Fischer)
Subject: Re: Monkey Room
Message-ID: <fischer.699671283@gmd.de>
Sender: news@gmdzi.gmd.de (USENET News)
Nntp-Posting-Host: gmdzi
Organization: GMD, Sankt Augustin, Germany
References: <9203031955.AA11770@ucbvax.Berkeley.EDU>
Date: Wed, 4 Mar 1992 01:08:03 GMT
Lines: 38

In <9203031955.AA11770@ucbvax.Berkeley.EDU> GUNTHER@WMAVM7.VNET.IBM.COM ("Mike Gunther") writes:

>Suppose we are given a sealed room + teletype setup which has passed
>a Turing Test.  We open the room and find only a monkey, hitting
>teletype keys at random.  It just so happens that the monkey's
>keystrokes produced "intelligent" conversation up until the time the
>room was opened.

>This thought-experiment seems to contradict several ideas-- the Turing
>Test, behaviorism, functionalism, and the Systems Reply for starters.
>Any comments?

NO MACHINE (or arbitrary setup) CAN actually PASS THE TURING TEST:
How many lines must be processed before the test terminates successfully?
How many (and which) areas of questioning should be covered?

Nevertheless, the set of "intelligent entities" can be defined by the
functionalism of the Turing Test:
those which would produce "intelligent" conversation for ANY "intelligent"
input. So, for MACHINES, the Turing Test is to be performed by analysing
(and, if necessary, predicting) their functionality.
For biological systems (monkeys as well as humans) the Turing Test is
not applicable, if only because of their limited lifetime. And what
about gerontopathic phenomena?

And last: what about the real-world, finite-time version of the Turing Test?
How do the people you interact with know you are intelligent?
Since all of us change through time, whenever someone says "XY is
intelligent!" he means: ... at this very moment!
Therefore, if the monkey (as seems most probable) starts typing garbage,
people will stop believing in its intelligence. On the other hand, if
a monkey behaved like a human for its whole lifetime, who would not
admire its intelligence?
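A back-of-envelope calculation shows why the monkey "most probably" reverts to garbage. (This is my illustration, not part of Gunther's scenario; the keyboard size and transcript length are made-up assumptions.)

```python
import math

# Assumptions (illustrative only): a teletype with 50 distinct keys,
# and a Turing-Test transcript 1000 characters long.
keys = 50
transcript_length = 1000

# Probability that uniformly random keystrokes reproduce one specific
# 1000-character conversation: (1/keys)^transcript_length.
log10_p = -transcript_length * math.log10(keys)
print(f"probability ~ 10^{log10_p:.0f}")  # about 10^-1699
```

Even granting that many different transcripts would count as "intelligent", the odds that the NEXT keystrokes stay intelligent are of the same vanishing order, which is the point about belief at "this very moment".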

Conclusion: for the monkey (as for humans) the only way to pass the
Turing Test is to die after a good performance. :-(

Mark Sebastian Fischer.


