From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!lsuc!uunet.ca!uunet!zephyr.ens.tek.com!uw-beaver!cornell!rochester!yamauchi Mon Dec  9 10:48:08 EST 1991
Article 1874 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!lsuc!uunet.ca!uunet!zephyr.ens.tek.com!uw-beaver!cornell!rochester!yamauchi
From: yamauchi@cs.rochester.edu (Brian Yamauchi)
Newsgroups: comp.ai.philosophy
Subject: Re: A Behaviorist Approach to AI Philosophy
Message-ID: <YAMAUCHI.91Dec5041045@heron.cs.rochester.edu>
Date: 5 Dec 91 12:10:45 GMT
References: <AdBfkmC00WBME1JqUw@andrew.cmu.edu>
	<YAMAUCHI.91Nov30002306@magenta.cs.rochester.edu> <5768@skye.ed.ac.uk>
Sender: yamauchi@cs.rochester.edu (Brian Yamauchi)
Organization: University of Rochester
Lines: 16
In-Reply-To: jeff@aiai.ed.ac.uk's message of 2 Dec 91 22:02:14 GMT
Nntp-Posting-Host: heron.cs.rochester.edu

In article <5768@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>Ask yourself this: how does the Chinese Room know it's not the
>Nonsense Room?
>
>(Hint: not because some people outside the room can translate
>the responses, because the CR doesn't know they're doing that.)

See my reply to Franklin Boyle.  In my opinion, any system that could
pass the Turing Test would need to experience the types of mental
imagery that Franklin mentions.  In order to generate the necessary
responses, the CR will need to experience these images -- regardless
of whether its responses are translated or not.

So, how do you know the CR experiences these images?  Good question.
How do you know that anyone other than yourself experiences these
images?