From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor Mon Mar  9 18:35:49 EST 1992
Article 4316 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Definition of Understanding
Message-ID: <1992Mar6.183841.25625@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <1992Mar4.151700.15282@gpu.utcs.utoronto.ca> <1992Mar4.205642.26955@psych.toronto.edu> <1992Mar5.144815.11531@gpu.utcs.utoronto.ca> <1992Mar5.200322.219@psych.toronto.edu>
Date: Fri, 6 Mar 1992 18:38:41 GMT

In article <1992Mar5.200322.219@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <1992Mar5.144815.11531@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>>In article <1992Mar4.205642.26955@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>>>In article <1992Mar4.151700.15282@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>>
>>>>Please note that to answer the questions in Chinese, the CR has to have a
>>>>database of information about the objects in the story (as the SHRDLU program
>>>>uses). If you want the CR to be able to draw a hamburger, you have to agree
>>>>that the database should also include visual information (you have such
>>>>information yourself, otherwise you would not be able to draw a tree,
>>>>agreed?). Then, when encountering the word 'hamburger' in the story, the CR
>>>>would access all the info it had about hamburgers, including what it looks
>>>>like. Do you see any problem here?
>>>
>>>Nope, except for the implicit assumption that the way in which a hamburger
>>>looks would be represented in a fashion that the man in the CR could
>>>understand (can *you* read a videodisc?).
>>>
>>>The CR may know what a hamburger looks like.  The man *still* doesn't
>>>know what Chinese symbol refers to hamburger.
>>>
>>Why care?
>
>Because the issue at question is whether or not the CR process generates
>understanding in the *man*.   
>
>>The man is only a part of CR.
>
>This is an odd usage of the term "part," given that the whole CR is contained
>*inside* him.
>
Let's start with the original CR. Do you agree that the *man* is only a part
of the system? For the CR to work, it requires, besides the *man*, a database,
the rule book, all those 'scraps of paper' (basically operational memory),
and a good filing system (see my comments below). Now if you want to put all
this inside the *man*, you will have to agree that all this stuff will not
fit into an ordinary man's brain. If the *man* did memorize the database and
the rule book and devoted a large part of his memory to filing all those
'scraps of paper' (in fact, to fit all this stuff into the brain would require
a considerable enhancement of its capacity, you will agree), the original task
(shuffling the symbols according to the rule book, writing 'scraps of paper'
and filing them, and perhaps understanding the words if they are in English)
would occupy only a part of the *man's* brain. A large part of it would be
engaged in playing the role of those parts of the CR which you got rid of.
Got it? Part of Searle's trick is to create the impression that the *man* is
the only important part of the CR. That is not true. Even the 'scraps of
paper' are important; the CR would not work without them. You may use part of
the *man's* memory for the same task, but that does not make them disappear.
Searle tries to persuade you, by stuffing them into the *man's* head, that
they are irrelevant. He obviously succeeds.
However, do you take the brain as a whole, or do you claim
that only the cortex counts (or some other part)?
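The point that the *man* is only one replaceable component can be put in a
toy sketch (my own illustration, in Python for convenience; the class and
all the names in it are invented, not anything from Searle's paper):

```python
# Toy model of the Chinese Room as a *system* of parts. The man is
# only the executor; the rule book, the database and the 'scraps of
# paper' (operational memory) are equally necessary components.

class ChineseRoom:
    def __init__(self, rule_book, database):
        self.rule_book = rule_book  # the program the man blindly follows
        self.database = database    # stored facts about the story's objects
        self.scraps = []            # the 'scraps of paper' the man files

    def man_executes(self, symbol):
        """The man applies one rule; he understands nothing himself."""
        rule = self.rule_book.get(symbol)
        if rule is None:
            return "?"
        # the rule tells him to file a note and consult the database
        self.scraps.append((symbol, rule))
        return self.database.get(rule, rule)

room = ChineseRoom(
    rule_book={"hanbaobao": "hamburger-entry"},
    database={"hamburger-entry": "round bun, meat patty, ..."},
)
print(room.man_executes("hanbaobao"))  # the *system* produces the answer
print(len(room.scraps))                # ...and its state has changed: 1
```

Delete any one of the three components and `man_executes` stops working;
"memorizing" them merely relocates them, it does not eliminate them.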

>> Do you insist that your cortex (or whatever other
>>part of the brain) knows what a hamburger looks like? If the _CR_ knows
>>what a hamburger looks like (you seem to allow such a possibility) that's good
>>enough.
>
>This is merely equivocating on the word "know".  My encyclopedia has a 
>picture of a hamburger in it, but it seems ridiculous to say that the
>encyclopedia "knows" what a hamburger looks like.  
>
My point exactly! So which part of the brain knows what a hamburger looks
like? Or is it the brain as a whole?
Note that we assumed that the CR, on reading the word 'hamburger', will access
all available information about hamburgers, including a picture of one, and
can, if necessary, draw what it looks like. Can your encyclopedia do that?
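The difference from the encyclopedia can be sketched like this (again my own
toy illustration; the 'picture' here is just an ASCII stand-in for whatever
visual representation the database would really hold):

```python
# A toy knowledge base in which the entry for 'hamburger' carries
# visual information, and the system can reproduce (draw) it on
# demand -- something a printed encyclopedia cannot do by itself.

knowledge = {
    "hamburger": {
        "category": "food",
        # crude stand-in for stored visual information
        "picture": [r"  ___  ",
                    r" /___\ ",
                    r" |###| ",
                    r" \___/ "],
    }
}

def draw(word):
    """Retrieve and render the stored picture for a word, if any."""
    entry = knowledge.get(word)
    if entry is None or "picture" not in entry:
        return None
    return "\n".join(entry["picture"])

print(draw("hamburger"))
```

No single component here "knows" what a hamburger looks like, yet the system
as a whole can produce the drawing, which is the only sense of "knowing" at
issue.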

>>One of the problems of the CR construct is that it sweeps a lot of important
>>things under the carpet. Searle's argument leaves an unsuspecting person with
>>an impression that only two things count: the rule book (the program) and the
>>man (the executing agent). Searle only at the very beginning mentions a
>>database of information which the program uses. As the discussion above (and
>>the postings which led to it) indicates, the content of the database is
>>crucial to the issue of what degree of understanding we can expect from the
>>CR. Additionally, there is the problem of what happens when the CR reads the
>>story. Searle chooses to talk about 'slips of paper', creating by this choice
>>of words an impression that this is totally unimportant. He probably thinks
>>himself that this is unimportant, showing yet again his ignorance of the way
>>computers work. When a computer (running the SHRDLU program, or the CR) reads
>>the original story, it has to analyze it in such a way that when the question
>>comes in, it is able to answer it. It could not answer the question without
>>reading the story first! In other words, reading the story puts the computer
>>into a different state, such that it is able to process incoming information
>>(the question) in a way that is different than before. Those 'slips of paper'
>>cannot be thrown into the wastebasket; they are a crucial part of the system,
>>just as the database is.
>
>All of the above is *completely* irrelevant for the CR example.  The actual
>architecture used, the addition of sensors, etc., makes no *principled*
>difference to the argument.  See the original BBS article.
>
I did see the original article. The actual architecture may make no *principled*
difference, but you seem to draw the conclusion that architecture as such is
unnecessary, that it makes no difference whether this architecture is there or
not. Searle's stress on the *man* alone does create such an impression; he
may have persuaded himself that this is the case, but it is patently false.
If I want to make a journey from A to B, the actual means of transportation
may be unimportant (although if the journey would take more than my lifetime,
such a mode of transport would clearly be unacceptable), but *some* mode of
transportation is essential - without it the journey would not take place.
The original *man* and the man who has memorized the database and the rule
book and uses his memory to keep the info created as a result of processing
ARE NOT THE SAME THING. This *CR man* is capable of things the original one
is not, and these are not trivial things (try to do them yourself :-)). So
how can they be the same thing? Can't you see this?
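The state-change point quoted above (reading the story puts the system into a
different state, without which the question cannot be answered) can be shown
in a few lines; the trivial subject-verb-object 'parsing' here is my own
stand-in for what a program like SHRDLU actually does:

```python
# Reading the story changes the system's state; only in that new
# state can it answer the question. The 'facts' dictionary plays
# the role of the 'slips of paper'.

class StoryReader:
    def __init__(self):
        self.facts = {}  # state built up while reading the story

    def read(self, story):
        # trivial 'analysis': remember (subject, verb) -> object
        for line in story:
            subject, verb, obj = line.split()
            self.facts[(subject, verb)] = obj

    def answer(self, subject, verb):
        return self.facts.get((subject, verb), "I don't know")

r = StoryReader()
print(r.answer("man", "ate"))   # before reading: "I don't know"
r.read(["man ate hamburger"])
print(r.answer("man", "ate"))   # after reading: "hamburger"
```

Throw the `facts` away (the wastebasket) and the system is back to the
original man: it can shuffle symbols, but it cannot answer the question.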

>- michael
>


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca
