From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Mar  9 18:33:20 EST 1992
Article 4086 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Definition of understanding
Message-ID: <1992Feb27.211632.21398@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Feb24.231735.4404@gpu.utcs.utoronto.ca> <1992Feb25.013333.25452@psych.toronto.edu> <1992Feb25.183002.17341@gpu.utcs.utoronto.ca>
Date: Thu, 27 Feb 1992 21:16:32 GMT

In article <1992Feb25.183002.17341@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>In article <1992Feb25.013333.25452@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:

>>Look, gang, this *isn't* hard!  All that is required for the Chinese
>>Room demonstration to work is that you agree that in that situation
>>you wouldn't understand Chinese in *exactly* the same way you understand
>>English, *whatever way that might be*.  The question is not whether
>>the CR "understands" in some obscure way that we have previously
>>unidentified - it is instead whether it *understands*, without quotes,
>>in the good-old-fashioned sense of the word we use when we say, for
>>example, "I don't understand Hungarian."  This is *all* that is required
>>for the CR example to work.  We can discuss what the nature of understanding
>>is, and if it is multifaceted or not, but that does not add *at all* to
>>the CR debate.  There is no linguistic trick being played here, and
>>those who suggest otherwise are either confused or being disingenuous.
>>
>The fact that you dismiss the example I gave as a 'cute story' does not indicate
>an attempt to understand what it was meant to illustrate. You keep avoiding
>the issue of different levels of understanding and talk instead of the good-old-
>fashioned sense of the word 'understand'. However, when applied to the CR it is
>nonsense to use the word indiscriminately.
>Please tell me clearly if you agree with the following:
>The word 'understanding' arose in application to human understanding. Human
>understanding of language is intrinsically connected with sensory inputs and
>how they correlate with words, situations and ideas. Agree so far?

OK so far....

>If we had a human brain which from its infancy had only a TTY interface to
>the outside world (like the CR), and if it learned to communicate with us in
>English (or Chinese, Hungarian, etc.), would it understand the story about a man
>and a hamburger in the same way as we do?
>Do you agree that its understanding of English would be _substantially_
>different from ours? Please say so clearly!
>If you insist that it would not be very different from ours, then we have
>nothing more to discuss, and you can press 'n'.
>However, if you agree that it would be very different, then how can you possibly
>insist on unqualified application of the word in conclusions about the CR?

Because the computer *doesn't* learn only through a TTY interface.  It can
have *all* of *your* world knowledge put into it.  It can have *all* of the
experiences that you can *computationally* describe incorporated as
part of its program.  Just because the communication method is solely
by teletype doesn't mean that the Chinese Room's understanding of the
world is limited to what comes in over the teletype.

>    My attempts to point out different levels of understanding are directed
>at separating what we can reasonably expect the CR to understand from aspects of
>understanding which we can't possibly expect it to have.
>    If one is trying to argue that a machine running a syntactical analysis
>(as illustrated by the CR) does not understand (say) English EXACTLY the same way
>an English-speaking person does, then the whole CR construct is totally
>unnecessary.
>Of course it doesn't! It has never been to a restaurant, never eaten a
>hamburger, etc. If the story (about a man, a restaurant and a hamburger, in case
>you ask 'what story?') were presented to an Indian from the Amazon jungle, would
>she/he understand it _exactly_ the same way as we do? Even if what words like
>'hamburger' mean were explained in her/his own terms, so that she/he would
>be able to answer the questions?

This changes nothing.  See Searle's original paper, where he discusses this
notion in his response to the "robot reply".

>Do you now understand (:-)) what some people (me included) mean when they say
>that Harnad's trick was either silly or dishonest?

No, I don't.  The issues are still the same as they ever were.

>Insisting on 'all or nothing' understanding by the CR is dishonest.
>What Searle (and his fans) are trying to say is: 'Since it does not have
>_exactly_ the same understanding capabilities as a human, it's rubbish.' Hardly
>a constructive approach. It certainly has _some_ understanding capabilities,
>and this indicates the progress AI is making. Why not admit it? Some people are
>clearly very upset by this progress and are trying to dismiss AI completely
>by showing that it has not yet achieved its most ambitious aims. It has not,
>and maybe it never will, but we do not know, and Searle's argument is a shot
>in the wrong direction.

It seems that you are trying to tar the anti-AI crowd as Luddites who are
railing against the possibility that people are merely computers.  This is
rather disingenuous, and misses the more profound points that Searle makes.

Since many of the points you raise are dealt with in Searle's original
article, or in the replies to it in Behavioral and Brain Sciences, I would
suggest you go and read them before dismissing his position as rubbish.

- michael
