From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Mar  9 18:34:33 EST 1992
Article 4198 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Definition of understanding
Message-ID: <1992Mar2.174626.18508@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Feb25.183002.17341@gpu.utcs.utoronto.ca> <1992Feb27.211632.21398@psych.toronto.edu> <1992Mar2.151229.13822@gpu.utcs.utoronto.ca>
Date: Mon, 2 Mar 1992 17:46:26 GMT


In article <1992Mar2.151229.13822@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>
>In article <1992Feb27.211632.21398@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>>In article <1992Feb25.183002.17341@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>>
>>>If we had a human brain which from its infancy had only a TTY interface to
>>>the outside world (like CR) and if it learned to communicate with us in
>>>English (or Chinese, Hungarian etc), would it understand the story about a man
>>>and a hamburger in the same way as we do?
>>>Do you agree that its understanding of English would be _substantially_ 
>>>different from ours? Please say clearly!
>>>If you insist that it would not be very different from ours, then we have 
>>>nothing more to discuss, you can press 'n'.
>>>However if you agree that it would be very different, then how can you possibly
>>>insist on unqualified application of the word to conclusions about CR?
>>
>>Because the computer *doesn't* learn only through a TTY interface.  It can
>>have *all* of *your* world knowledge put into it.  It can have *all* of the
>
>Well, how do you put _your_ world knowledge into a computer?

In the same way that SHRDLU had *its* "world knowledge" put into it, namely,
via the program, or a database that it accesses.  It seems to me that if
you deny this possibility (of acquiring world knowledge without direct
sensory input) then you have denied the computability (or at least computational
nature) of our experiences.
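
To make this concrete, here is a toy sketch (Python, purely illustrative --
the facts and function names are my own invention, not SHRDLU's actual code)
of what "putting world knowledge into a program" amounts to: a database of
assertions handed over wholesale, with no sensory channel involved.

# A toy "world knowledge" base: assertions supplied directly to the program,
# not acquired through any sensory channel.
FACTS = {
    ("hamburger", "is_a", "food"),
    ("restaurant", "serves", "food"),
    ("customer", "orders", "food"),
    ("customer", "pays_for", "food"),
}

def holds(subject, relation, obj):
    """True if the assertion is present in the knowledge base."""
    return (subject, relation, obj) in FACTS

def subjects_of(relation, obj):
    """All subjects standing in 'relation' to 'obj'."""
    return {s for (s, r, o) in FACTS if r == relation and o == obj}

# The program can now answer questions about hamburgers and restaurants
# without ever having seen or eaten anything.
print(holds("hamburger", "is_a", "food"))   # True
print(subjects_of("serves", "food"))        # {'restaurant'}

Whether manipulating such assertions amounts to *understanding* is, of course,
exactly what is at issue; the point here is only that the knowledge gets in
without a sensory channel.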

> If you insist on
>computer receiving sensory info about outside world the same way as humans
>receive it from their senses, then you have to admit that brain processes
>_representations_ of  outside phenomena just as computer does. 

1) I *don't* insist on computers receiving sensory info in the same way
humans do.  See above.

2) The conclusion only follows when you have demonstrated that
   a) the brain processes involved are strictly equivalent to computer
      processes, and
   b) that computers actually have representations, rather than simply
      "patterns that an outside observer can *interpret* as referring to
       something in the world."  The two are not identical. 
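
To illustrate the distinction in (b): the same bit pattern "refers" to nothing
by itself; it is only an outside observer's choice of interpretation that makes
it about one thing rather than another. A trivial sketch (Python, illustrative
only):

import struct

# Four bytes sitting in memory, with no intrinsic "aboutness".
pattern = b"\x48\x41\x4d\x21"

# One observer interprets them as English text...
print(pattern.decode("ascii"))           # HAM!
# ...another as a 32-bit unsigned integer.
print(struct.unpack(">I", pattern)[0])   # 1212239137

Nothing in the bytes themselves decides between the two readings; the
"reference" lives in the interpreter.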

>>experiences that you can *computationally* describe incorporated as
>>part of its program.  Just because the communication method is solely
>>by teletype doesn't mean the Chinese Room is limited to understanding
>>the world in this way.
>>
>But would its understanding of the world be necessarily limited as compared to 
>humans, due to a lack of human sensory input?  

Lack of sensory input is *not* the problem, as was noted in the
original Chinese Room article.  These inputs, when converted into
computational form, simply *do not* possess semantics.  

>In any case you did not answer the question I've posed: would a brain whose
>knowledge of the world could only be acquired the same way as it can be 
>acquired by a computer (I specified a TTY interface, but meant it generically;
>feel free to assume any other method by which knowledge can be introduced into a
>computer) have the same understanding of the world as you do? Would it or 
>wouldn't it? I can see three possible reactions from you:
>1. Yes, this brain's understanding would be substantially the same. If so,
>   see below - I'll try to argue that that's nonsense.
>2. No, it would be quite different. But then you will have to agree that the
>   meaning of the word 'understanding' is not self-evident when applied to
>   other than normal human situations. Harnad's premise that understanding is
>   understanding is understanding is obviously false.
>3. You can avoid answering the question by various means available (pretend you
>   didn't see the question, change the subject, pick on an irrelevant detail,
>   etc).
>
>Take your pick or show me that there is still another way out.

It's simple.  I would argue 2).  And I would argue that this is not at all
relevant to the Chinese Room problem, because of the arguments against the
Robot Reply.

[A blind person could learn how to talk about colors, and yet not have the
same subjective experiences as a seeing person.]


>>>    If one is trying to argue that a machine running a syntactical analysis
>>>(as illustrated by the CR) does not understand (say) English EXACTLY the same
>>>way an English-speaking person does, then the whole CR construct is totally
>>>unnecessary.
>>>Of course, it doesn't! It has never been to a restaurant, never eaten a
>>>hamburger, etc. If the story (about a man, a restaurant and a hamburger, in
>>>case you ask 'what story?') was presented to an Indian from the Amazon jungle,
>>>would she/he understand it _exactly_ the same way as we do? Even if it were
>>>explained to her/him, in her/his own terms, what words like 'hamburger' mean,
>>>so that she/he would be able to answer the questions?
>>
>>This changes nothing.  See Searle's original paper where he discusses this
>>notion in his "robot reply".
>>
>The example above (with a blind person or just a color-blind person) should
>convince you that Searle's critique of the 'robot reply' has no substance:
>you will hopefully agree that receiving information about colours in the way
>he expects the person in the CR to receive it will not help the person
>understand colours the _same_ way as 'normal' people understand them.
>Hence we should concentrate on the understanding the CR is realistically
>capable of and not demand 'good old-fashioned' understanding.

I think you misunderstand the Robot Reply.  I do indeed agree that the way
in which the person in the CR receives color information will not provide
the same understanding of color that a seeing person has, but I take this to
be further proof of the truth of Searle's claim.  This is because, as far as
*computation* is concerned, the way in which the information is received
*makes no difference*.  If it does, then color can't be described
computationally, and functionalism is therefore wrong (this is a slight
overgeneralization, but will serve for present purposes).
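
A trivial way to see why the input channel is computationally irrelevant
(again a toy Python sketch of my own, not anyone's actual model): the
function below cannot tell, and does not care, where its input came from.

def classify_hue(rgb):
    """Purely computational: indifferent to the source of its input."""
    r, g, b = rgb
    return "reddish" if r > max(g, b) else "not reddish"

# The same triple, whether it arrives from a camera driver, a file, or a
# teletype on which someone types "255 30 30", yields the same result.
from_camera   = (255, 30, 30)
from_teletype = tuple(int(x) for x in "255 30 30".split())
print(classify_hue(from_camera), classify_hue(from_teletype))   # reddish reddish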

>>>Do you now understand (:-)) what some people (myself included) mean when they
>>>say that Harnad's trick was either silly or dishonest?
>>
>>No, I don't.  The issues are still the same as they ever were.
>
>So you still think that we can insist on the same understanding by the CR as by
>a human, and if it is not the same then it has no understanding at all?

If we *can't* insist on human understanding in the Chinese Room, then
the Strong AI program is impossible a priori.

>>>Insisting on 'all or nothing' understanding by the CR is dishonest.
>>>What Searle (and his fans) are trying to say is: 'Since it does not have
>>>_exactly_ the same understanding capabilities as a human, it's rubbish'. Hardly
>>>a constructive approach. It certainly has _some_ understanding capabilities,
>>>and this indicates the progress AI is making. Why not admit it? Some people are
>>>clearly very upset by this progress and are trying to dismiss AI completely
>>>by showing that it has not achieved its most ambitious aims yet. It has not,
>>>maybe it never will, but we do not know, and Searle's argument is a shot
>>>in the wrong direction.
>>
>>It seems that you are trying to tar the anti-AI crowd as Luddites who are
>>railing against the possibility that people are merely computers.  This is
>
>As far as I remember, the Luddites were railing against machines for economic
>reasons. I suspect that many in the anti-AI crowd are motivated psychologically -
>they can't swallow the idea that computers might be able to duplicate human
>brains. They find the idea denigrating.

Not at all, in my case.  I just find it, after a lot of careful thought and
reasoned disputation, wrong (or, at least, poorly argued for).  Heck, I
used to think Searle was wrong as well, until I wrote a term paper on
the Chinese Room in a graduate Cognitive Science course.

>I am not trying to claim that computers _will_ be able to duplicate all
>functions of human brains; we do not know enough about how the brain works to
>make such a statement.

This statement suggests that you don't understand the strong assumptions that
Strong AI makes.  The claim is that we *will*, in principle, be able to 
computationally reproduce *all* the functions of the human brain.  If we can't,
then Strong AI is wrong.  Period.

> Nevertheless, many functions of the brain are being duplicated
>by suitably programmed computers, including some aspects of understanding.
>Vehement denials of this fact indicate the emotional attitude alluded to above.

They may simply indicate an emotional state of frustration at trying
to explain why the first sentence is wrong (or at least incomplete).

>I find Searle's argument methodologically flawed, fuzzy and often
>mistaken. Take, for instance, the following statement from his paper (Minds,
>Brains and Programs):
>"Third, as I mentioned before, mental states and events are literally a product
>of the operation of the brain, but the program is not in that way a product
>of the computer"
>He obviously does not seem to know much about computers, how they work and 
>what happens when a computer runs a program.

Searle doesn't have to know the details of computer science to critique
its philosophical foundations.  What is necessary for the defenders of AI
to do is to show how he misunderstands these foundations.  The above
quote doesn't do that.

>Note that his stance also denies any possibility of understanding by anything
>except humans. Even if we were able to objectively identify the mental states
>accompanying understanding in humans, and hence be able to say objectively
>whether someone understands or not, we would be completely helpless when faced
>with an alien life-form based on different physical principles.

The second sentence does not follow from the first, and he *explicitly*
denies the first.  All that is necessary is that the appropriate
"causal powers" be instantiated in whatever physical form.  I, too,
think that this retreat to "causal powers" is questionable.  But this 
positive argument about the way intentionality *is* produced has no
bearing on his negative thesis about how it *isn't* produced, unless you
can demonstrate that the former is a logical consequence of the latter (I
don't believe that it is).

>>rather disingenuous, and misses the more profound points that Searle makes.
>>
>>Since many of the points you raise are dealt with in Searle's original
>>article, or in the replies to it in Behavioural and Brain Sciences, I would
>>suggest you go and read it before dismissing his position as rubbish.
>>
>Sorry, but I did not wait for your suggestion; I read Searle's paper and his
>critique of many replies to it (in 'The Mind's I', is this good enough?) before
>entering this discussion.

But it seems as though many of the points you raise (the Robot Reply, the
limitation of understanding to humans only) are dealt with in that article.
Perhaps a further perusal of the paper, maybe with the original critiques and
his replies (Behavioral and Brain Sciences, 1980), would be useful.  This suggestion
is not meant to be condescending, but is instead intended to be general
advice to everyone involved, so that a lot of wasted bandwidth can be avoided.

- michael




