From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor Mon Mar  9 18:34:53 EST 1992
Article 4230 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Definition of understanding
Message-ID: <1992Mar3.211437.12307@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <1992Feb25.183002.17341@gpu.utcs.utoronto.ca> <1992Feb27.211632.21398@psych.toronto.edu> <1992Mar2.151229.13822@gpu.utcs.utoronto.ca> <1992Mar2.174626.18508@psych.toronto.edu>
Date: Tue, 3 Mar 1992 21:14:37 GMT

In article <1992Mar2.174626.18508@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>
>In article <1992Mar2.151229.13822@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>>
>>Well, how do you put _your_ world knowledge into a computer?
>
>In the same way that SHRDLU had *its* "world knowledge" put into it, namely,
>via the program, or a database that it accesses.  It seems to me that if
>you deny this possibility (of acquiring world knowledge without direct
>sensory input) then you have denied the computability (or at least computational
>nature) of our experiences.
>
You will agree (hopefully) that our brain receives sensory input through
electrical signals from eyes, ears, etc., right? Of course a computer can
receive the same signals (digitized). However, our knowledge of what the brain
does with these signals is very poor; we just do not know. The form in which
SHRDLU receives 'world knowledge' is not very different from the form in which
a blind person may receive knowledge about colours, and you will agree that
this will not lead to the same understanding of colours as seeing people have.
That's why I think that we should (for the moment) limit ourselves to
discussing those aspects of understanding which can be acquired on the basis
of knowledge coded in a way computers can understand (:-)).
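
To make concrete what I mean by knowledge coded in a form computers can
process, here is a minimal sketch (my own illustration in Python; SHRDLU
itself was written in LISP/Micro-Planner, and every name below is invented):

    # Toy block-world knowledge stored as symbolic assertions, in the
    # spirit of SHRDLU's database (illustrative only, not SHRDLU's code).
    facts = {
        ("is-a", "b1"): "block",
        ("is-a", "p1"): "pyramid",
        ("colour", "b1"): "red",
        ("supports", "b1"): "p1",   # block b1 supports pyramid p1
    }

    def colour_of(obj):
        # The program 'knows' colours only as stored tokens - much the
        # way a blind person knows them through descriptions alone.
        return facts.get(("colour", obj), "unknown")

    print(colour_of("b1"))   # -> red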

>> If you insist on
>>computer receiving sensory info about outside world the same way as humans
>>receive it from their senses, then you have to admit that brain processes
>>_representations_ of  outside phenomena just as computer does. 
>
>1) I *don't* insist on computers receiving sensory info in the same way
>humans do.  See above.
>
Then you should not insist on computers having the same understanding as
humans have (see above).

>2) The conclusion only follows when you have demonstrated that
>   a) the brain processes involved are strictly equivalent to computer
>      processes, and

I think we both agree that present knowledge of the way the brain handles
signals from the senses is too poor to attempt modelling it with computers.
When such knowledge is available, we might start to discuss how 'strict' this
equivalence should be, agreed?

>   b) that computers actually have representations, rather than simply
>      "patterns that an outside observer can *interpret* as referring to
>       something in the world."  The two are not identical. 
>
I am not sure what you are trying to say here. If computers were receiving
electrical signals from the senses the same way humans do, the rest would
depend on the equivalence of processing. But that is covered in a). What is
b) about?

>>>experiences that you can *computationally* describe incorporated as
>>>part of its program.  Just because the communication method is solely
>>>by teletype doesn't mean the Chinese Room is limited to understanding
>>>the world in this way.
>>>
>>But would its understanding of the world be necessarily limited as compared to 
>>humans, due to a lack of human sensory input?  
>
>Lack of sensory input is *not* the problem, as was noted in the
>original Chinese Room article.  These inputs, when converted into
>computational form, simply *do not* possess semantics.  
>
Since, as it seems to me, neither you nor anybody else can say how electrical
signals flowing from the senses to the brain acquire 'semantics', or how this
manifests itself physically, it is impossible to judge the above statement.
The Robot Reply is based on the premise that there is nothing magical in the
way the brain processes those signals, so that computers can (eventually) do
it too, and that information processing is the only thing that counts. Searle
seems to imply that the hardware counts, i.e. that information residing in the
wetware of the brain can give rise to intentionality (does a definition of it
even exist?) whereas the same information residing in silicon chips cannot.
Perhaps he is right, but I do not see that he proves it, and I tend to think
that it cannot be proven at all.

>>In any case you did not answer the question I've posed: would a brain whose
>>knowledge of the world could only be acquired the same way as it can be 
>>acquired by a computer (I specified a TTY interface, but meant it generically;
>>feel free to assume any other method by which knowledge can be introduced into
>>a computer) have the same understanding of the world as you do? Would it or
>>wouldn't it? I can see three possible reactions from you:
>>1. Yes, this brain's understanding would be substantially the same. If so,
>>   see below - I'll try to argue that that's nonsense.
>>2. No, it would be quite different. But then you will have to agree that the
>>   meaning of the word understanding is not self-evident when applied to
>>   situations other than normal human ones. Harnad's premise that understanding
>>   is understanding is understanding is obviously false.
>>3. You can avoid answering the question by various means available (pretend you
>>   didn't see the question, change the subject, pick on an irrelevant detail,
>>   etc).
>>
>>Take your pick or show me that there is still another way out.
>
>It's simple.  I would argue 2).  And I would argue that this is not at all
>relevant to the Chinese Room problem, because of the arguments against the
>Robot Reply.

As I've argued above, it is very relevant. See also below.
>
>[blind person could learn how to talk about colors, and yet not have
>same subjective experiences as seeing person.]
>
I find it very interesting that you say 'how to talk about colors'. Don't you
think that even a blind person could be said to acquire _some_ understanding
of colours? That is the whole point of my argument: understanding is not an
all-or-nothing thing, and within our present abilities to code and interpret
the information humans receive, we can only expect certain aspects of
understanding to be reproduced by computers.
>
>I think you misunderstand the Robot Reply.  I do indeed agree that the way
>in which the person in CR receives color information will not provide
>an understanding of color the same as a seeing person, but I take this to
>be further proof of the truth of his claim.  This is because, as far as
>*computation* is concerned, the way in which the information is received
>*makes no difference*.  If it does, then color can't be described
>computationally, and functionalism is therefore wrong (this is a slight
>overgeneralization, but will serve for present purposes).
>
See above. Also, you (and Searle too) seem to take the word 'computation' too
narrowly. Do not forget that computation leads to certain states of the
hardware, and these states influence subsequent computations and subsequent
states of the hardware. We have NO IDEA what the brain does with signals from
the senses and what influence they have on the physical states of the brain.
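
To illustrate (a trivial sketch of my own, not a model of the brain): each
computation below changes a persistent state, and that state conditions every
subsequent computation.

    # Minimal illustration: the state left behind by one computation
    # shapes the result of the next.
    state = 0

    def process(signal):
        global state
        state = (state + signal) % 7   # the 'hardware' changes state...
        return state * signal          # ...and the state shapes the output

    for s in (3, 5, 2):
        print(process(s))   # same function, different results as state evolves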

>If we *can't* insist on human understanding in the Chinese Room, then
>the Strong AI program is impossible a priori.
>
The fact that we do not yet know how human understanding in all its aspects
arises does not mean that it will be impossible to reproduce. You will agree
that 20 years ago a lot of people would have claimed that even the
understanding the SHRDLU program demonstrates was impossible for a machine.

>Not at all, in my case.  I just find it, after a lot of careful thought and
>reasoned disputation, wrong (or, at least, poorly argued for).  Heck, I
>used to think Searle was wrong as well, until I wrote a term paper on
>the Chinese Room in a graduate Cognitive Science course.
>
I am glad to hear this - it is futile to debate with someone who takes an
emotional stance.

>>I am not trying to claim that computers _will_ be able to duplicate all
>>functions of human brains, we do not know enough about how brain works to make
>>such a statement.
>
>This statement suggests that you don't understand the strong assumptions that
>Strong AI makes.  The claim is that we *will*, in principle, be able to 
>computationally reproduce *all* the functions of the human brain.  If we can't,
>then strong AI is wrong.  Period. 
>
I've never said that I believe in strong AI! I criticize Searle not because he
assaults my belief (that would be an emotional stance), but for the reasons
stated below.
 
>> Nevertheless, many functions of the brain are being duplicated
>>by suitably programmed computers, including some aspects of understanding.
>>Vehement denials of this fact indicate the emotional attitude alluded to above.
>
>They may simply indicate an emotional state of frustration at trying
>to explain why the first sentence is wrong (or at least incomplete).
>
Are you claiming that AI research has achieved nothing at all? Zero, null?
That would be silly! And why get frustrated? You accuse AI people of taking
an extreme stance (we will _surely_ be able to reproduce *all* functions of
the brain) but take the opposite extreme stance - NOTHING has been achieved.
Both stances (particularly in conjunction) do not create the detached
atmosphere in which research can thrive.

>>I find Searle's argument methodologically flawed, fuzzy and often
>>mistaken. Take for instance the following statement from his paper (Minds,
>>Brains and Programs):
>>"Third, as I mentioned before, mental states and events are literally a product
>>of the operation of the brain, but the program is not in that way a product
>>of the computer"
>>He obviously does not seem to know much about computers, how they work and 
>>what happens when a computer runs a program.
>
>Searle doesn't have to know the details of computer science to critique
>its philosophical foundations.  What is necessary for the defenders of AI
>to do is to show how he misunderstands these foundations.  The above
>quote doesn't do that.

I'd think that most people would realize the difference between a program and
its effects on a computer, for instance the fact that when a computer runs a
program, it changes the (electrical) state of the hardware. These different
states are a product of the program's execution. Why should the program be a
product of the computer? Searle is really confused here.
By the way, computers can modify their own programs!
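
A minimal sketch (purely illustrative) of that last point - a program that
rewrites a piece of its own program text and then executes the new version:

    # A program modifying its own 'program text' at run time.
    code = "print('original instruction')"
    exec(code)                                    # run the program as written
    code = code.replace("original", "modified")   # the program rewrites itself
    exec(code)                                    # run the modified version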

>
>>Note that his stance also denies any possibility of understanding by anything 
>>except humans. Even if we were able to identify objectively mental states 
>>accompanying understanding in humans, and hence be able to say objectively whether
>>someone understands or not, we would be completely helpless when faced with
>>an alien life-form based on different physical principles.
>
>The second sentence does not follow from the first, and he *explicitly*
>denies the first.  All that is necessary is that the appropriate
>"causal powers" be instantiated in whatever physical form.  I, too,
>think that this retreat to "causal powers" is questionable.  But this 

Good for you, because this retreat to unspecified 'causal powers' is just
words which carry no information, and in fact it is in conflict with his
other assertions that the particular biological makeup of the brain is
crucial for its functioning.

>positive argument about the way intentionality *is* produced has no
>bearing on his negative thesis about how it *isn't* produced, unless you
>can demonstrate that the former is a logical consequence of the latter (I
>don't believe that it is).
>
Again, to make these claims about 'intentionality' you have to have some way
of recognizing it in entities other than yourself. Does he provide a way of
doing this? Not even a hand-waving method! Are you claiming that
'intentionality' is something independent of the particular physical structure
of the brain? Even he himself has not specified this 'intentionality' well
enough to make a claim like that. Correct me if I am wrong; otherwise these
are all groundless speculations.

>>Sorry, but I did not wait for your suggestion and read Searle's paper and his
>>critique of many replies to it (In 'Mind's I', is this good enough?) before
>>entering this discussion.
>
>But it seems as though many of the points you raise (Robot Reply, the limitation
>of understanding to humans only) are dealt with in that article.  Perhaps
>a further perusal of the paper, maybe with the original critiques and his
>replies (Behavioral and Brain Sciences, 1980) would be useful.  This suggestion
>is not meant to be condescending, but is instead intended to be general
>advice to everyone involved, so that a lot of wasted bandwidth can be avoided.
>
I did reread it, the critiques of it, and his replies, and I definitely see
his position better - but I also see his confusion better. I have pointed out
some of it above.

>- michael


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca


