From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!batcomputer!cornell!rochester!cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+ Mon Mar  9 18:35:07 EST 1992
Article 4253 of comp.ai.philosophy:
Organization: Cntr for Design of Educational Computing, Carnegie Mellon, Pittsburgh, PA
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!batcomputer!cornell!rochester!cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+
Newsgroups: comp.ai.philosophy
Message-ID: <YdhGPPi00WBNI3x95G@andrew.cmu.edu>
Date: Wed,  4 Mar 1992 14:36:59 -0500 
From: Franklin Boyle <fb0m+@andrew.cmu.edu>
Subject: Re: Definition of Understanding

Andrzej Pindor writes (in response to my post):

>Please note that to answer the questions in Chinese, CR has to have a database
>of information about the objects in the story (as SHRDLU program uses). If
>you want to be able CR to draw a hamburger, you have to agree that the
>database shoud also include visual information (like you have such an
>information, other-wise you won't be able to draw a tree, agreed?). Then,
>when encountering word 'hamburger' in the story CR would access all info it
>had about hamburgers, including what it looks like. Do you see any problem
>here?

If I interpret what you're saying correctly, then no, I don't see a
problem; given the information you describe, it would now be behaviorally
impossible to tell whether the CR "understood" according to my
original objective criterion.

My point, however, was slightly different from that, though I obviously
didn't make it clear enough.  Since the original post I responded to was
asking (I think) about how to look at subjective issues in an objective
way, rather than how to solve the 'other minds' problem, I wanted to convey
the subjective aspects associated with an objective task.  Since the CR
responds to input through a set of rules, the database you're referring
to should be (and presumably is) encoded in those rules (unless I've
completely missed the point of the long-dead procedural/declarative debate
of the '70s).  However, the rules for answering any particular question
about hamburgers, such as whether a particular restaurant served them,
would certainly not be obliged to contain all the information the system
had about hamburgers, since the only requirement Searle imposes is the
performance requirement that the room's responses be indistinguishable
from those of a native Chinese speaker.  Yet there are those who say that
satisfying this requirement implies the room "understands", even when
there is no access to this additional information.
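
To make the point concrete, here is a minimal sketch of what such a rule
could look like (the language, names, and canned responses are my own
illustration, not anything from Searle's paper).  The rule matches symbols
and hands back symbols; nothing in it encodes anything else about
hamburgers:

    # Hypothetical sketch: a rule that satisfies the performance
    # requirement for one question while encoding only the narrow fact
    # that question asks about.
    RULES = {
        # symbol pattern matched                -> canned response
        "does the restaurant serve hamburgers": "Yes, it does.",
        "did the man eat the hamburger":        "No, he stormed out without paying.",
    }

    def answer(question):
        # Pure symbol lookup: nothing here says what a hamburger looks
        # like, tastes like, or is made of.
        key = question.strip().lower().rstrip("?")
        return RULES.get(key, "I don't know.")

    print(answer("Does the restaurant serve hamburgers?"))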

But let's alter the situation anyway.  Suppose there are rules 
in the CR for recognizing nouns as well as additional rules that
interface to a mechanical arm that draws their images.  Behaviorally,
it would now be impossible to tell whether the CR "understood" according 
to my original objective criterion for understanding, as you point out.  
But think about whether you would know it was a hamburger if you were in
the room.  Probably not at the beginning of the mechanical arm's movements,
though presumably you would by the time it finished -- much as in the
"connect the dots" activities we've all done as children.  This is because,
in the particular case of a hamburger, there would be no information in the
array of dots, or in the procedure for connecting them in numerical order,
that would indicate the final drawing is an image of a hamburger (no fair
imagining what it might be beforehand, because that requires running the
connection process to some degree in your head).  Thus, if you don't know
what the drawing will be before it's finished, how could the computer
understand what it is doing, since *all* its information is, in essence, in
a connect-the-dots form which, *for it*, never gets into a form analogous
to our images?
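
As a purely illustrative sketch (the coordinates, names, and arm commands
below are invented), the drawing rules might amount to no more than this;
nothing in the data or the procedure marks the result as a hamburger --
that reading belongs only to the observer of the finished picture:

    # Hypothetical connect-the-dots procedure for the mechanical arm.
    # The "room" sees only numbered points and the order to join them.
    DOTS = [(0, 2), (1, 3), (3, 3), (4, 2),   # visited in numerical order
            (4, 1), (0, 1),
            (0, 0), (4, 0)]

    def draw(dots):
        # Drive the arm from dot to dot, in order.
        x, y = dots[0]
        print("PEN DOWN AT (%d, %d)" % (x, y))
        for x, y in dots[1:]:
            print("LINE TO (%d, %d)" % (x, y))
        print("PEN UP")

    draw(DOTS)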

-Frank




