From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!uwm.edu!linac!att!ucbvax!WMAVM7.VNET.IBM.COM!GUNTHER Mon Mar  9 18:34:51 EST 1992
Article 4227 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!uwm.edu!linac!att!ucbvax!WMAVM7.VNET.IBM.COM!GUNTHER
From: GUNTHER@WMAVM7.VNET.IBM.COM ("Mike Gunther")
Newsgroups: comp.ai.philosophy
Subject: Monkey Room
Message-ID: <9203031955.AA11770@ucbvax.Berkeley.EDU>
Date: 3 Mar 92 19:56:45 GMT
Sender: daemon@ucbvax.BERKELEY.EDU
Lines: 15

(I just made this up, although it seems unlikely that I am the first
one to think of it.)

Suppose we are given a sealed room + teletype setup which has passed
a Turing Test.  We open the room and find only a monkey, hitting
teletype keys at random.  It just so happens that the monkey's
keystrokes produced "intelligent" conversation up until the time the
room was opened.
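(The "just so happens" is carrying a lot of weight. As a rough, purely illustrative sketch -- assuming a hypothetical 50-key teletype and a transcript of, say, 1000 characters, numbers not given in the post -- the odds of random typing reproducing even one particular short transcript are astronomically small:)

```python
import math

# Hypothetical numbers, for scale only: a teletype with 50 keys,
# and a Turing Test transcript 1000 characters long.
KEYS = 50
TRANSCRIPT_LEN = 1000

# Probability that uniformly random keystrokes reproduce one
# particular 1000-character transcript, expressed as log10.
log10_prob = -TRANSCRIPT_LEN * math.log10(KEYS)

print(f"P(exact transcript) ~ 10^{log10_prob:.0f}")
# On the order of 10^-1699.  Any "acceptable" conversation, not just
# one fixed transcript, would be likelier, but still vanishingly so.
```

(So the scenario is nomologically possible but wildly improbable -- which is exactly why it works as a thought experiment rather than a practical objection.)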

This thought experiment seems to contradict several ideas: the Turing
Test, behaviorism, functionalism, and the Systems Reply, for starters.
Any comments?


--Mike


