From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Mon Dec  9 10:48:15 EST 1991
Article 1886 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: A Behaviorist Approach to AI Philosophy
Message-ID: <5799@skye.ed.ac.uk>
Date: 5 Dec 91 18:34:48 GMT
References: <AdBfkmC00WBME1JqUw@andrew.cmu.edu> <YAMAUCHI.91Nov30002306@magenta.cs.rochester.edu> <5768@skye.ed.ac.uk> <YAMAUCHI.91Dec5041045@heron.cs.rochester.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 41

In article <YAMAUCHI.91Dec5041045@heron.cs.rochester.edu> yamauchi@cs.rochester.edu (Brian Yamauchi) writes:
>In article <5768@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>Ask yourself this: how does the Chinese Room know it's not the
>>Nonsense Room?
>>
>>(Hint: not because some people outside the room can translate
>>the responses, because the CR doesn't know they're doing that.)
>
>See my reply to Franklin Boyle.  In my opinion, any system that could
>pass the Turing Test would need to experience the types of mental
>imagery that Franklin mentions.  In order to generate the necessary
>responses, the CR will need to experience these images -- regardless
>of whether its responses are translated or not.

How does the Room know that a given symbol refers to, say, cats and
not to cherries or phonograph records?  How does it attach any meaning
to inputs from sensors (which are just more symbols)?  (See, e.g.,
Searle's answer to the "robot reply".)
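To make the point concrete, here is a toy program (a sketch of my
own, not anything from the thread; the token names are arbitrary and
nothing hangs on them).  The Room's rulebook maps input strings to
output strings, and a consistent relabelling of the tokens yields a
second rulebook that, from inside, is indistinguishable from the
first:

    # Toy illustration (not from the original posts): a Room that
    # answers purely by rule-governed symbol manipulation.  The tokens
    # are opaque strings; any gloss ("cat", "cherry") exists only in
    # the heads of the translators outside.

    RULEBOOK = {
        "MAO SHI SHENME": "MAO SHI DONGWU",
        "YINGTAO SHI SHENME": "YINGTAO SHI SHUIGUO",
    }

    def room(question):
        # Pure lookup: no perception, no reference, just
        # shape-matching on uninterpreted strings.
        return RULEBOOK.get(question, "WO BU DONG")

    def relabel(table, swap):
        # Consistently rename tokens throughout a rulebook.
        def ren(sentence):
            return " ".join(swap.get(tok, tok) for tok in sentence.split())
        return {ren(q): ren(a) for q, a in table.items()}

    # Swap the "cat" token and the "cherry" token everywhere.
    NONSENSE_BOOK = relabel(RULEBOOK, {"MAO": "YINGTAO", "YINGTAO": "MAO"})

The two rulebooks are structurally identical, so nothing inside the
Room settles which one it is running.  And hooking up sensors changes
nothing: a camera just adds entries whose keys are pixel-derived
tokens, which the rulebook shuffles in exactly the same way.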

>So, how do you know the CR experiences these images?  Good question.
>How do you know that anyone other than yourself experiences these
>images?

I'm sorry, but I don't have time to answer this for the third
(fourth?, fifth?) time in a single discussion.  Briefly, I
don't know _for sure_.  But then I don't know _for sure_ that
my coffee cup is not the most intelligent being in the universe.
We have to set aside these skeptical possibilities (maybe we're
all brains in vats, maybe no one else exists, maybe no one else
is conscious, etc, etc).  

When we adopt a more reasonable standard of proof, I have good
(though not absolutely conclusive) reasons to conclude that
other humans are physically similar to me and that similar
behavior has similar causes and corresponds to similar internal
experience.  The reasons for concluding the same for machines
are less good, especially since machines with the required
behavior have not yet appeared, so that we have little idea
how they'll work.

-- jd


