From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Mar 24 09:54:53 EST 1992
Article 4393 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Definition of understanding
Organization: Department of Psychology, University of Toronto
References: <1992Mar6.012217.25722@news.media.mit.edu> <1992Mar6.214616.18384@psych.toronto.edu> <1992Mar10.204754.1137@gpu.utcs.utoronto.ca>
Message-ID: <1992Mar11.164816.18444@psych.toronto.edu>
Keywords: meaning, understanding
Date: Wed, 11 Mar 1992 16:48:16 GMT

In article <1992Mar10.204754.1137@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>In article <1992Mar6.214616.18384@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:

>>There seems to be a common misconception that the Chinese Room rests
>>on some esoteric notion of "understanding," or that we have to analyze
>>that concept to see what is going on in that situation.  As I (and others)
>>have repeatedly argued, this is entirely wrong.  The confusion seems to
>>arise because people want to determine how *from the outside* we would
>>know if the CR understands.  This is the wrong approach.  The whole point
>>of the CR argument is that *you* can actually carry out the computational
>>operations *yourself*.  The question is, If you do this, will you understand
>>Chinese in the way you understand other languages?  The answer is clearly "no."
>>No special analysis of understanding is required.
>>
>The problem, which I have tried to point out in the past, lies in the content
>of the database for the Chinese squiggles. The English word `hamburger`
>correlates in an English speaker's mind with, for instance, a mental picture
>of a hamburger: the person has seen a hamburger in the past and knew that the
>object was 'a hamburger'. If the database for the Chinese squiggles had a
>picture of a hamburger correlated with the corresponding squiggle (and the
>same for the other squiggles), would you still maintain that the person would
>not understand what he/she is doing?

Yes, I would still maintain it, if the picture were encoded, say, as pictures
are on a laser disc (a perfectly reasonable coding scheme for a computer to
use).  Can *you* understand .GIF files by "looking" at them?
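The point about encoded pictures can be made concrete. Here is a minimal
sketch (the "GIF89a" signature and the little-endian width/height fields are
from the published GIF format; the 320x200 dimensions are made up for
illustration): even holding a perfectly valid encoded picture, a rule-follower
manipulating the bytes sees only uninterpreted symbols, and can carry out all
the required operations without ever "seeing" anything.

```python
import struct

# A GIF file begins with a 6-byte signature ("GIF87a" or "GIF89a"),
# followed by the image width and height as 16-bit little-endian
# integers.  Hypothetical header bytes for a 320x200 image:
header = b"GIF89a" + struct.pack("<HH", 320, 200)

# The rule-follower's view: a sequence of uninterpreted symbols.
print(list(header))   # just a list of numbers; no picture anywhere

# Purely syntactic operations succeed without any grasp of the image:
width, height = struct.unpack("<HH", header[6:10])
print(width, height)  # prints: 320 200 -- recovered by rule, not by sight
```

Nothing in these manipulations requires, or produces, a visual experience of
the encoded picture; that is the sense in which the person in the room has
only syntax to work with.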

It seems that the position you are taking, and the issue you are addressing,
are similar to Harnad's concerns about "symbol grounding".  Like you, Harnad
accepts that, in the original Chinese Room situation, the person wouldn't
understand; he believes that there in fact *is* no semantics in the original
CR case.  However, he also thinks this can be overcome through the addition
of sensory "front-ends", which are connectionist in nature and preserve the
*analog* nature of the stimuli.  You might find that his work crystallizes
many of your concerns.

However, I don't think that this response addresses the problem, which is
*still* that syntax can't yield semantics.  Adding some kind of enriched
sensory signal will not solve this, as Searle argued in his response to the
"Robot Reply".

>If you insist that the person has to `understand` what the squiggles represent,
>you have to provide him/her with the same info about the squiggles as he/she
>has about English words.

I do not claim to have a complete answer as to how people attach meaning to
symbols.  *No one* claims to have solved this problem in a satisfying manner.
However, what we seem to have is a pretty good idea of how it *can't*
be solved, and one of these ways is by invoking pure syntax.  

- michael