From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor Mon Mar  9 18:35:21 EST 1992
Article 4276 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Definition of Understanding
Message-ID: <1992Mar5.161144.16445@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <YdhGPPi00WBNI3x95G@andrew.cmu.edu>
Date: Thu, 5 Mar 1992 16:11:44 GMT

In article <YdhGPPi00WBNI3x95G@andrew.cmu.edu> fb0m+@andrew.cmu.edu (Franklin Boyle) writes:
>Andrzej Pindor writes (in response to my post):
>
>My point, however, was slightly different from that, though I obviously
>didn't make it clear enough. Since the original post I responded to was
>asking (I think) about how to look at subjective issues in an objective way,
>rather than how to solve the 'other minds' problem, I wanted to convey
>the subjective aspects associated with an objective task.  Since the CR
>responds to input through a set of rules, the database you're referring
>to should be (and presumably is) encoded in those rules (unless I've completely
>missed the point of the long-dead procedural/declarative debate of the '70s).
>However, the rules for answering any particular question about hamburgers,
>such as whether a particular restaurant served them, would certainly 
>not be obliged to contain all the info the system had about hamburgers 
>since the only requirement imposed by Searle is the performance requirement
>that the room's responses be indistinguishable from those of a native Chinese 
>speaker. Yet there are those who say that satisfying this requirement implies
>the room "understands", even when there is no access to this additional
>information. 
>
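For concreteness, the procedural/declarative contrast you invoke can be put
as a toy program (Python, purely illustrative; the facts, the names and the
sample question are all made up).  The same piece of knowledge can sit in a
database that any rule may consult, or be frozen into a rule that answers
exactly one question:

# A toy contrast between declarative and procedural encodings of the
# same piece of knowledge.  All facts and questions here are invented.

# Declarative: facts stored as data that any rule may consult.
facts = {
    ("Joe's Diner", "serves"): "hamburgers",
    ("hamburger", "is_a"): "food",
}

def answer(subject, relation):
    """Answer by looking the fact up in the shared database."""
    return facts.get((subject, relation), "I don't know")

# Procedural: the same knowledge frozen into a rule that handles
# exactly one question and exposes nothing to any other rule.
def rule_joes_hamburgers(question):
    if question == "Does Joe's Diner serve hamburgers?":
        return "Yes"
    return None  # this rule does not apply

print(answer("Joe's Diner", "serves"))                             # hamburgers
print(rule_joes_hamburgers("Does Joe's Diner serve hamburgers?"))  # Yes

A room built entirely from rules of the second kind could meet Searle's
performance requirement without ever consulting, in any usable form, the
rest of its knowledge about hamburgers.
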
Are you suggesting that anyone can reasonably expect the CR to know what a hamburger
looks like without having this information in its database? Do you expect
a blind person to know what a hamburger looks like? And yet he/she could
understand the story, right?

>But let's alter the situation anyway.  Suppose there are rules 
>in the CR for recognizing nouns as well as additional rules that
>interface to a mechanical arm that draws their images.  Behaviorally,
>it would now be impossible to tell whether the CR "understood" according 
>to my original objective criterion for understanding, as you point out.  
>But think about whether you would know it was a hamburger if you were in the 
>room. Probably not at the beginning of the mechanical arm movements, though 
>presumably when it's finished you would -- much like the "connect
>the dots" exercises we've all done as children.  This is because in the
>particular case of a hamburger, there would be no information in
>the array of dots or the procedure for connecting them by number order
>that would indicate the final drawing is an image of a hamburger (no fair 

Does a series of electrical signals sent by the brain to a hand to draw
a hamburger indicate that the final drawing is an image of a hamburger?

>imagining what it might be beforehand because that requires running the 
>connection process to some degree in your head).  Thus, if you don't
>know what the drawing will be before it's finished, how could the computer
>understand what it is doing, since *all* its information is, in essence,
>in a connect-the-dots form which, *for it*, never gets into a form
>analogous to our images? 
>
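Your connect-the-dots picture can likewise be restated as a toy program
(Python, purely illustrative; the coordinates are made up).  Nothing in the
data or in the procedure names what the finished figure depicts; that fact
exists only for an observer of the output -- which is just what I am saying
about the motor signals above:

# A toy connect-the-dots procedure.  The coordinates are invented; the
# point is that neither the data nor the procedure contains any token
# naming what the finished figure depicts.
dots = [(0, 0), (6, 0), (7, 2), (6, 4), (0, 4), (-1, 2)]

def draw(points):
    """Connect the points in numbered order, closing the figure."""
    segments = []
    for i in range(len(points)):
        a = points[i]
        b = points[(i + 1) % len(points)]  # wrap around to close
        segments.append((a, b))
    return segments

# All the program ever "sees" is coordinate pairs and segments; whether
# they depict a hamburger exists only for whoever looks at the output.
print(draw(dots))
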
Either we know in what form our brain represents images (or will know in the
future), in which case we can give the CR the visual information in that form
and your objections become invalid; or we don't know (and we don't), in which
case we cannot expect the CR's understanding to have a visual component.
I readily admit that I do not know how a notion of a hamburger is formed in the
brain (I do not think anyone does), whether it has some semantic properties
(whatever those might be), or whether it is reducible to syntax, etc. That is
why I think that demanding full human understanding from the CR is, at the
present stage, unreasonable.

>-Frank


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca


