From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!mips!swrinde!cs.utexas.edu!utgpu!pindor Mon Mar  9 18:34:28 EST 1992
Article 4190 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!mips!swrinde!cs.utexas.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <1992Mar2.151229.13822@gpu.utcs.utoronto.ca>
Date: 2 Mar 92 15:12:29 GMT
References: <1992Feb24.231735.4404@gpu.utcs.utoronto.ca> <1992Feb25.013333.25452@psych.toronto.edu> <1992Feb25.183002.17341@gpu.utcs.utoronto.ca> <1992Feb27.211632.21398@psych.toronto.edu>
Organization: UTCS Public Access
Lines: 146


In article <1992Feb27.211632.21398@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <1992Feb25.183002.17341@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>
>>If we had a human brain which from its infancy had only a TTY interface to
>>the outside world (like CR) and if it learned to communicate with us in
>>English (or Chinese, Hungarian etc), would it understand the story about a man
>>and a hamburger in the same way as we do?
>>Do you agree that its understanding of English would be _substantialy_ 
>>different from ours? Please say clearly!
>>If you insist that it would not be very different from ours, then we have 
>>nothing more to discuss, you can pres 'n'.
>>However if you agree that it would be very different, then how can you possibly
>>insist on unqualified application of the word to conclusions about CR?
>
>Because the computer *doesn't* learn only through a TTY interface.  It can
>have *all* of *your* world knowledge put into it.  It can have *all* of the

Well, how do you put _your_ world knowledge into a computer? If you insist on
the computer receiving sensory info about the outside world the same way humans
receive it from their senses, then you have to admit that the brain processes
_representations_ of outside phenomena just as a computer does.

>experiences that you can *computationally* describe incorporated as
>part of its program.  Just because the communication method is solely
>by teletype doesn't mean the Chinese Room is limited to understanding
>the world in this way.
>
But wouldn't its understanding of the world necessarily be limited compared to
a human's, due to the lack of human sensory input?
In any case you did not answer the question I posed: would a brain whose
knowledge of the world could only be acquired the same way it can be
acquired by a computer (I specified a TTY interface, but meant it generically;
feel free to assume any other method by which knowledge can be introduced into
a computer) have the same understanding of the world as you do? Would it or
wouldn't it? I can see three possible reactions from you:
1. Yes, this brain's understanding would be substantially the same. If so,
   see below - I'll try to argue that that's nonsense.
2. No, it would be quite different. But then you will have to agree that the
   meaning of the word 'understanding' is not self-evident when applied to
   situations other than normal human ones. Harnad's premise that understanding
   is understanding is understanding is obviously false.
3. You can avoid answering the question by various means available (pretend you
   didn't see the question, change the subject, pick on an irrelevant detail,
   etc.).

Take your pick or show me that there is still another way out.
Now, in case you hold view 1), consider someone blind from birth. The person
could learn about electromagentic radiation, could learn that other people
can sense this radiation better than he/she (even a blind person can sense
some EM radiation as heat) and discriminate different wavelengths and they call
it colours. The person could learn all sort of stuff about colours and forma 
certain understanding of the concept but you will surely agree that his/hers
understanding of the colours will be different than ours (an easier example
might be people who are just colour-blind).

>>    My attempts to point out different levels of understanding are directed
>>at separating what we can reasonably expect CR to understand from aspects of
>>understanding which we can't possibly expect it to have. 

>>    If one is trying to argue that a machine running a syntactical analysis 
>>(like illustrated by CR) does not understand (say) English EXACTLY the same way 
>>an English speaking person does, then the whole CR construct is totally 
>>unnecessary.
>>Of course, it doesn't! It has never been to a restaurant, never eaten 
>>hamburger etc. If the story (about a man, restaurant and a hamburger, in case
>>you ask 'what story?') was presented to an Indian from Amazon jungle, would she/
>>he understand it _exactly_ the same way as we do? Even if she/he was explained
>>in her/his terms what words like 'hamburger', etc. mean so that she/he would
>>be able to answer the question?
>
>This changes nothing.  See Searle's original paper where he discusses this
>notion in his "robot reply".
>
The example above (with a blind person, or just a colour-blind person) should
convince you that Searle's critique of the 'robot reply' has no substance:
you will hopefully agree that receiving information about colours in the way
he expects the person in the CR to receive information will not make that
person understand colours the _same_ way 'normal' people understand them.
Hence we should concentrate on the understanding the CR is realistically
capable of, and not demand 'good old-fashioned' understanding.

>>Do you now understand (:-)) what some people (me including) mean when they say
>>that Harnad's trick was either silly or dishonest?
>
>No, I don't.  The issues are still the same as they ever were.

So you still think that we can insist on the CR having the same understanding
as a human, and that if it is not the same then it has no understanding at all?
>
>>Insisting on 'all or nothing' understanding by CR is dishonest.
>>What Searle (and his fans) are trying to say is: 'Since it does not have 
>>_exactly_ the same understanding capabilities as a human, its rubbish'. Hardly
>>a constructive approach. It certainly has _some_ understanding capabilities
>>and this indicates progress AI makes. Why not to admit it? Some people are
>>clearly very upset by this progress and are trying to dismiss AI completely
>>by showing that it has not achieved its most ambitious aims yet. It did not,
>>maybe it never will, but we do not know, and Searle's argument is a shot
>>in a wrong direction.
>
>It seems that you are trying to tar the anti-AI crowd as Luddites who are
>railing against the possibility that people are merely computers.  This is

As far as I remember, the Luddites were railing against machines for economic
reasons. I suspect that many in the anti-AI crowd are motivated psychologically -
they can't swallow the idea that computers might be able to duplicate human
brains. They find the idea denigrating.
I am not trying to claim that computers _will_ be able to duplicate all
functions of human brains; we do not know enough about how the brain works to
make such a statement. Nevertheless, many functions of the brain are being
duplicated by suitably programmed computers, including some aspects of
understanding. Vehement denials of this fact indicate the emotional attitude
alluded to above.
I find Searle's argument methodologically flawed, fuzzy and often
mistaken. Take for instance the following statement from his paper (Minds,
Brains, and Programs):
"Third, as I mentioned before, mental states and events are literally a product
of the operation of the brain, but the program is not in that way a product
of the computer."
He does not seem to know much about computers, how they work, and what
happens when a computer runs a program.
Note that his stance also denies any possibility of understanding by anything
except humans. Even if we were able to objectively identify the mental states
accompanying understanding in humans, and hence to say objectively whether
someone understands or not, we would be completely helpless when faced with
an alien life-form based on different physical principles.

>rather disingenuous, and misses the more profound points that Searle makes.
>
>Since many of the points you raise are dealt with in Searle's original
>article, or in the replies to it in Behavioural and Brain Sciences, I would
>suggest you go and read it before dismissing his position as rubbish.
>
Sorry, but I did not wait for your suggestion; I read Searle's paper and his
critique of many of the replies to it (in 'The Mind's I' - is this good
enough?) before entering this discussion.

>- michael
>
>


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca


