From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!wupost!micro-heart-of-gold.mit.edu!uw-beaver!pauld Wed Feb 26 12:54:14 EST 1992
Article 3983 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!wupost!micro-heart-of-gold.mit.edu!uw-beaver!pauld
From: pauld@cs.washington.edu (Paul Barton-Davis)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <1992Feb24.215328.18502@beaver.cs.washington.edu>
Date: 24 Feb 92 21:53:28 GMT
References: <6199@skye.ed.ac.uk> <6594@pkmab.se> <1992Feb24.181821.19983@psych.toronto.edu>
Sender: news@beaver.cs.washington.edu (USENET News System)
Organization: Computer Science & Engineering, U. of Washington, Seattle
Lines: 72

In article <1992Feb24.181821.19983@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <6594@pkmab.se> ske@pkmab.se (Kristoffer Eriksson) writes:
>>In article <6199@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>>In article <1992Feb18.153833.10164@oracorp.com> daryl@oracorp.com writes:
>>>>In my opinion, Harnad was being silly. There is a common core of
>>>>meaning to the word "understand", which is that lack of competence in
>>>>a language implies lack of understanding. This is the case with my
>>>>lack of understanding of Hungarian (or Chinese). This common core of
>>>>meaning does not suffice to answer questions such as "does the Chinese
>>>>Room understand?"
>>>
>>>In my opinion, Harnad has it exactly right.  Searle's point is that
>>>the Chinese Room "doesn't understand" in that same sense of understand.
>>
>>I fail to see how Harnad contributes anything to the question of whether
>>the Chinese Room understands anything, at least not in a way that supports
>>Searle. If you apply Harnad's test to the room, would the room not act like
>>someone who does understand Chinese (if the room functions)? And the
>>introspective aspect of the test (that you don't understand a word of his
>>Hungarian utterances), which is probably the most impressive aspect of it,
>>doesn't contribute anything, since you can't introspect on someone else
>>or on the room. All that is left is mostly just the usual Turing Test.
>
>You miss the point of the Chinese Room.  The question is *not* "How
>would an outside observer *tell* if the room understands?".  It is
>instead "Would a person carrying out the operations which give the
>*appearance* of understanding actually *have* it?"  The introspective
>aspect is *crucial* to the Chinese Room. It is *exactly* the issue under
>discussion, namely, whether doing the manipulations is sufficient to
>generate a *subjective* sense of understanding.  

Assuming that such manipulations are even possible in theory, there is
no logical reason why a subjective sense of understanding should arise
from the same manipulations that produce an objective appearance of
understanding. Thus, there is no reason to suppose that manipulating
Chinese symbols (as perhaps even native speakers do, in some fashion
so interesting that it may not really be symbol manipulation at all)
should give rise to "understanding".

To press this point further, I suggest the following modification to
the CR: add a second person to the room. Let this person watch the
first in enough detail to be able to draw the same conclusion as those
outside the room - that, based on the way the room manipulates Chinese
symbols, Chinese is understood. Additionally, give the second person
the ability to 1) recognise the *form* of questions such as "Do you
understand Chinese?" and 2) fiddle with the first person's response.
Finally, for the duration of the experiment, have both persons
identify with the room (that is, both can be addressed as "you").

I submit that the observed response, "Yes", is entirely akin to the
introspective report received from a native speaker. Yet the machinery
behind the room's ability to answer "yes" and mean it is quite
different from the machinery that enables it to speak Chinese, and
hence this is a subtle variation on the System Reply of H&D.

>The whole point of the
>Chinese Room is to show that the Turing Test is insufficient to
>determine if something truly has understanding.

This depends on what one thinks saying that something "understands"
actually means. If you take the view that all reports on brain/mental
activity are external (including introspective ones), then any such
terms are used in an "as-if" sense. On this point of view there is no
difference between saying "it's as if it understands" and saying "it
does understand", because the property of "understanding" is part of
our *description* (be it introspective or otherwise) of the system,
not something that has any objective existence in the system at all.

-- 
Computer Science Laboratory	  "truth is out of style" - MC 900ft Jesus
University of Washington 		<pauld@cs.washington.edu>
