From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!utgpu!pindor Mon Mar  9 18:35:19 EST 1992
Article 4273 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Definition of Understanding
Message-ID: <1992Mar5.144815.11531@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <EdgwVzO00Uh7Q40SVx@andrew.cmu.edu> <1992Mar4.151700.15282@gpu.utcs.utoronto.ca> <1992Mar4.205642.26955@psych.toronto.edu>
Date: Thu, 5 Mar 1992 14:48:15 GMT
Lines: 51

In article <1992Mar4.205642.26955@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <1992Mar4.151700.15282@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:

>>Please note that to answer the questions in Chinese, CR has to have a database
>>of information about the objects in the story (as the SHRDLU program uses). If
>>you want CR to be able to draw a hamburger, you have to agree that the database
>>should also include visual information (just as you have such information,
>>otherwise you wouldn't be able to draw a tree, agreed?). Then, when encountering
>>the word 'hamburger' in the story, CR would access all the info it had about
>>hamburgers, including what one looks like. Do you see any problem here?
>
>Nope, except for the implicit assumption that the way in which a hamburger
>looks would be represented in a fashion that the man in the CR could
>understand (can *you* read a videodisc?).
>
>The CR may know what a hamburger looks like.  The man *still* doesn't
>know what Chinese symbol refers to hamburger.
>
Why care?
The man is only a part of the CR. Do you insist that your cortex (or whatever
other part of the brain) knows what a hamburger looks like? If the _CR_ knows
what a hamburger looks like (you seem to allow such a possibility), that's good
enough.
One of the problems with the CR construct is that it sweeps a lot of important
things under the carpet. Searle's argument leaves an unsuspecting person with
the impression that only two things count: the rule book (the program) and the
man (the executing agent). Only at the very beginning does Searle mention a
database of information which the program uses. As the discussion above (and
the postings which led to it) indicates, the content of the database is crucial
to the question of what degree of understanding we can expect from the CR.
Additionally, there is the problem of what happens when the CR reads the story.
Searle chooses to talk about 'slips of paper', creating by this choice of words
the impression that they are totally unimportant. He probably believes this
himself, showing yet again his ignorance of the way computers work. When a
computer (running the SHRDLU program, or the CR) reads the original story, it
has to analyze it in such a way that when a question comes in it is able to
answer it. It could not answer the question without reading the story first!
In other words, reading the story puts the computer into a different state,
one in which it is able to process incoming information (the question) in a
way that is different than before. Those 'slips of paper' cannot be thrown
into the wastebasket; they are a crucial part of the system, just as the
database is.
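The point about state can be made concrete with a toy sketch (nothing like
the real SHRDLU; the class and the "X ate Y" story format are made up purely
for illustration). Before the story is read, the internal state is empty and
no question can be answered; after reading, the stored facts are exactly
what makes the answer possible:

```python
# Toy illustration of "reading the story changes the program's state".
# All names here are hypothetical; this is not SHRDLU or any real system.

class StoryReader:
    def __init__(self):
        # Internal state: empty until a story has been read.
        self.facts = {}

    def read_story(self, story):
        # Crude parsing: each sentence of the form "X ate Y"
        # becomes a stored fact. This is the state change.
        for sentence in story.split("."):
            words = sentence.strip().split()
            if len(words) == 3 and words[1] == "ate":
                self.facts[words[0]] = words[2]

    def answer(self, question):
        # "What did X eat?" -- answerable only from the stored state.
        words = question.rstrip("?").split()
        subject = words[2]
        return self.facts.get(subject, "I don't know")

reader = StoryReader()
print(reader.answer("What did John eat?"))  # before reading: I don't know
reader.read_story("John ate hamburger. Mary ate salad.")
print(reader.answer("What did John eat?"))  # after reading: hamburger
```

The two calls to answer() are identical, yet they return different results;
the only thing that changed in between is the system's stored state. Throw
that state away and the question becomes unanswerable again.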

>- michael
>


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca


