From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!batcomputer!cornell!uw-beaver!pauld Mon Mar  9 18:33:50 EST 1992
Article 4135 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!batcomputer!cornell!uw-beaver!pauld
From: pauld@cs.washington.edu (Paul Barton-Davis)
Subject: Re: Definition of understanding
Message-ID: <1992Feb28.180829.6392@beaver.cs.washington.edu>
Sender: news@beaver.cs.washington.edu (USENET News System)
Organization: Computer Science & Engineering, U. of Washington, Seattle
References: <1992Feb24.181821.19983@psych.toronto.edu> <1992Feb24.215328.18502@beaver.cs.washington.edu> <1992Feb25.224730.7021@psych.toronto.edu>
Date: Fri, 28 Feb 92 18:08:29 GMT

In article <1992Feb25.224730.7021@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <1992Feb24.215328.18502@beaver.cs.washington.edu> pauld@cs.washington.edu (Paul Barton-Davis) writes:
>>In article <1992Feb24.181821.19983@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>
>[in a never-ending quest to explain the Chinese Room]
>
>>>You miss the point of the Chinese Room.  The question is *not* "How
>>>would an outside observer *tell* if the room understands?".  It is
>>>instead "Would a person carrying out the operations which give the
>>>*appearance* of understanding actually *have* it?"  The introspective
>>>aspect is *crucial* to the Chinese Room. It is *exactly* the issue under
>>>discussion, namely, whether doing the manipulations is sufficient to
>>>generate a *subjective* sense of understanding.  
>>
>>Assuming that such manipulations are even possible in theory, there is
>>no logical reason why a subjective sense of understanding should arise
>>from the same manipulations that produce an objective appearance of
>>understanding. Thus, there is no reason to suppose that manipulating
>>Chinese symbols (as perhaps even native speakers do in some
>>interesting fashion, perhaps so interesting it isn't really symbol
>>manipulation at all) should give rise to "understanding".
>
>*BINGO!!!!!*  Give the man a cigar!  This is *exactly* Searle's point.
>
>However, for those who take passing the Turing test as a sufficient 
>demonstration of understanding, this is heresy.

OK, let's credit Searle with at least distinguishing between "strong"
and "weak" AI (I myself prefer "artificial" and "real" AI :-). I think
it foolish to argue that competent symbol manipulation alone can
instantiate a subjective sense of understanding of the manipulated
symbols. However, there is no reason why exactly the same physical
mechanisms that make manipulation of Chinese symbols possible cannot
support a model of understanding as well, with the proviso that
"understanding" is a *description* of operation, not a property.
Searle, with his "causal powers" stuff, clearly doesn't think so, and
seeks to extend his argument to the point where it would deny this
possibility.

>>						  The whole point of the
>>>Chinese Room is to show that the Turing Test is insufficient to 
>>>determine if something truly has understanding.
>>
>>This depends on what one thinks saying that something "understands"
>>actually means.
>
>Once again, "No, it doesn't.  It depends on using 'understanding'
>in exactly the same way you and I do daily."  

But what special status does the way I use it daily have, and why ?
There are a great many things that I say daily that have no basis in
reality. Why should I pay any attention to this one ?

>> If you take the view that all reports on brain/mental
>>activity are external (including introspective ones), then any such
>>terms are used in an "as-if" sense. This point of view says that there
>>isn't any difference between saying "it's as if it understands" and
>>saying "it does understand", because in both cases the property of
>>"understanding" is a part of our *description* (be it introspective or
>>otherwise) of the system, not something that has any objective
>>existence at all.
>
>Then you still have to explain why in some instances we *do* have a 
>subjective feeling of "understanding."  And why the person who has
>memorized the Chinese Room rules and carries them out doesn't (as
>everyone seems to agree).

No, I don't. I don't disagree (how could I ?) that this subjective
feeling of understanding exists. However, recognizing its existence
within the phenomenology of my own brain states doesn't imply much.
The man who has memorized the rules (!) and carries them out doesn't
have a subjective sense of understanding because his brain is being
used *solely* for the purpose of symbol manipulation. This is why I
suggest adding a second man to the CR, and asking him if he thinks
that the system of which he is a part understands Chinese. He will
answer yes. I have a subjective sense of understanding English (und
etwas Deutsch) because I devote part of my brain's operations to
watching my own manipulation of English symbols. The part that does
this does not, IMHO, have any functional differences from the part that
does symbol manipulation; in fact, one might even say that *it too*
does symbol manipulation, but in a different "language".

-- paul
-- 
Computer Science Laboratory	  "truth is out of style" - MC 900ft Jesus
University of Washington 		<pauld@cs.washington.edu>


