From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!pindor Wed Feb 26 12:54:31 EST 1992
Article 4009 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Definition of understanding
Message-ID: <1992Feb25.183002.17341@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <1992Feb22.181122.12088@oracorp.com> <6254@skye.ed.ac.uk> <1992Feb24.231735.4404@gpu.utcs.utoronto.ca> <1992Feb25.013333.25452@psych.toronto.edu>
Date: Tue, 25 Feb 1992 18:30:02 GMT

In article <1992Feb25.013333.25452@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <1992Feb24.231735.4404@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>
>>a certain meaning of the stories the precise meaning of the words is irrelevant.
>>Hence one can understand certain aspects of the stories without knowing what
>>a 'hamburger' or a 'restaurant' is. Maybe I have not made my point clear
>>enough; I'll try again.
>>
>
>[cute story deleted]
>
>>    In a similar way, the story which is presented to the CR can be considered
>>a puzzle (together with the question) to be solved. Since there are more words
>>in the story than in the number example above (only three), and the
>>relationships between these words are more complicated than the rules of the
>>number system above, you need a computer to solve the puzzle, but it can be
>>done without knowing the precise meaning of the words. To solve the arithmetic
>>problems in the example above one did not need to know that pennies were
>>round, made of copper, and that a pint of milk in those days used to cost
>>sixpence. And still, solving the arithmetic problems correctly indicated that
>>one understood the problems. Anyone against? Why is it then incorrect to say
>>that answering correctly the questions put to the CR indicates that it
>>understands a certain meaning of the story, even if the person inside does not
>>understand the exact meaning of the words? Harnad's trick totally obscures the
>>fact that the story can be understood at different levels. A single word
>>'understand' does not distinguish between these different levels.
>>   There is a level at which the story can be understood without understanding
>>the words. Is this so difficult to understand (:-))?
>
>Look, gang, this *isn't* hard!  All that is required for the Chinese
>Room demonstration to work is that you agree that in that situation
>you wouldn't understand Chinese in *exactly* the same way you understand
>English, *whatever way that might be*.  The question is not whether
>the CR "understands" in some obscure way that we have not previously
>identified - it is instead whether it *understands*, without quotes,
>in the good-old-fashioned sense of the word we use when we say, for
>example, "I don't understand Hungarian."  This is *all* that is required
>for the CR example to work.  We can discuss what the nature of understanding
>is, and if it is multifaceted or not, but that does not add *at all* to
>the CR debate.  There is no linguistic trick being played here, and
>those who suggest otherwise are either confused or being disingenuous.
>
The fact that you dismiss the example I have given as a 'cute story' does not
indicate any attempt to understand what it was meant to illustrate. You keep
avoiding the issue of different levels of understanding and talk instead of
the good-old-fashioned sense of the word 'understand'. However, when applied
to the CR it is nonsense to use the word indiscriminately.
Please tell me clearly if you agree with the following:
The word 'understanding' arose from its application to human understanding.
Human understanding of language is intrinsically connected with sensory inputs
and how they correlate with words, situations and ideas. Agree so far?
If we had a human brain which from its infancy had only a TTY interface to
the outside world (like the CR), and if it learned to communicate with us in
English (or Chinese, Hungarian, etc.), would it understand the story about a
man and a hamburger in the same way as we do?
Do you agree that its understanding of English would be _substantially_
different from ours? Please say so clearly!
If you insist that it would not be very different from ours, then we have
nothing more to discuss; you can press 'n'.
However, if you agree that it would be very different, then how can you
possibly insist on the unqualified application of the word in conclusions
about the CR?
    My attempts to point out different levels of understanding are directed
at separating what we can reasonably expect CR to understand from aspects of
understanding which we can't possibly expect it to have. 
If one is only trying to argue that a machine running a syntactical analysis
(as illustrated by the CR) does not understand (say) English EXACTLY the same
way an English-speaking person does, then the whole CR construct is totally
unnecessary.
Of course it doesn't! It has never been to a restaurant, never eaten a
hamburger, etc. If the story (about a man, a restaurant and a hamburger, in
case you ask 'what story?') were presented to an Indian from the Amazon
jungle, would she/he understand it _exactly_ the same way as we do? Even if
what words like 'hamburger' mean had been explained in her/his own terms, so
that she/he would be able to answer the question?
Do you now understand (:-)) what some people (me included) mean when they say
that Harnad's trick was either silly or dishonest?
Insisting on 'all or nothing' understanding by the CR is dishonest.
What Searle (and his fans) are trying to say is: 'Since it does not have
_exactly_ the same understanding capabilities as a human, it's rubbish.'
Hardly a constructive approach. The CR certainly has _some_ understanding
capabilities, and this indicates the progress AI is making. Why not admit it?
Some people are clearly very upset by this progress and are trying to dismiss
AI completely by showing that it has not yet achieved its most ambitious aims.
It has not, and maybe it never will, but we do not know, and Searle's argument
is a shot in the wrong direction.

>- michael


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca


