From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!uwm.edu!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!sei.cmu.edu!fs7.ece.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+ Mon Dec  9 10:47:27 EST 1991
Article 1802 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!uwm.edu!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!sei.cmu.edu!fs7.ece.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+
From: fb0m+@andrew.cmu.edu (Franklin Boyle)
Newsgroups: comp.ai.philosophy
Subject: Re: A Behaviorist Approach to AI Philosophy
Message-ID: <gdCb=YW00UhWQ2lpNp@andrew.cmu.edu>
Date: 2 Dec 91 17:52:36 GMT
Organization: Cntr for Design of Educational Computing, Carnegie Mellon, Pittsburgh, PA
Lines: 56

With respect to the Nonsense Room, Brian Yamauchi writes:

> The important difference is that the Chinese Room System is capable of
> performing the reasoning, remembering, and learning necessary to
> simulate human intelligence and the Nonsense Room System is not.  The
> foreign language aspect of the Chinese Room scenario is really a red
> herring -- the important thing is that the room's replies simulate
> those of an intelligent being.
 
> For example, compare the following two dialogues:

> <two example dialogs deleted>

Two things.  First, your analysis of the differences in the responses
of the two rooms is perfectly correct.  You're also correct to
point out that "the [Chinese] room's replies simulate those of an 
intelligent being". This latter point is true by definition. That is,
Searle has set up the Chinese Room to pass the Turing Test by allowing
that the rule book actually exists. However, the key word here is "simulate".
Searle's point is that there is no "understanding" by the person in the room, 
the room itself, etc. When someone says something to you in a language 
you "understand", you are doing more than formally processing symbol
strings between the time you hear what was said and your response.
I, at least, am conscious of mental images (granted, many of our
responses seem "automatic", so we aren't always aware of such
sensations).  These images come from internal representations of the
referents of the words we hear -- information we acquire independently
of language (i.e., we have to see, feel, hear, etc. the referents; in
short, experience them).  It is this latter information that enables
us to "understand" in Searle's sense.  The point of the Nonsense Room
is that there is *definitely* no understanding to be had in that situation.
The symbols fed into the room as well as those in the book have no referents.  
So if you say there is understanding in the Chinese Room, then you have
to show how it differs from the Nonsense Room *with respect to the
system* -- not with respect to what the people outside the room see,
but with respect to what the system "sees" or "understands".  The
reason the Chinese Room looks intelligent to
those feeding it input is that they have the internalized referential 
information (like you do) integrated with the Chinese language which they
understand.

Caveat: As noted above, a tacit assumption in Searle's scenario is that
        the system is capable of passing the Turing Test, for Searle
        doesn't want to confound his argument about the system not
        "understanding" with other possible limitations.  But this is
        actually at the crux of the matter, because it may be that the
        Turing Test (if it is possible to operationalize in the first
        place) cannot be passed *without* such "understanding".

Second, Searle's use of Chinese was, I believe, his way of trying to
dispel any confounding intuitions the majority of the readership 
(those who don't know Chinese) might have had about the situation as a result 
of their knowledge of the chosen language -- that is, dispelling
question-begging reasoning driven by intuitive notions such as: 'it
seems like that's the way I do it, so...'.

-Frank
