From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!utgpu!pindor Tue Mar 24 09:55:24 EST 1992
Article 4432 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Definition of understanding
Message-ID: <1992Mar12.162059.28643@gpu.utcs.utoronto.ca>
Keywords: meaning, understanding
Organization: UTCS Public Access
References: <1992Mar6.012217.25722@news.media.mit.edu> <1992Mar6.214616.18384@psych.toronto.edu> <1992Mar10.204754.1137@gpu.utcs.utoronto.ca> <1992Mar11.164816.18444@psych.toronto.edu>
Date: Thu, 12 Mar 1992 16:20:59 GMT

In article <1992Mar11.164816.18444@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <1992Mar10.204754.1137@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>>>
>>The problem, which I have tried to point out in the past, is in the content
>>of the database for the Chinese squiggles. The English word 'hamburger'
>>correlates in the English speaker's mind, for instance, with a mental picture
>>of a hamburger - the person has seen a hamburger in the past and knew that
>>this object was 'a hamburger'. If the database for the Chinese squiggles had
>>a picture of a hamburger correlated with the corresponding squiggle (and the
>>same for the other squiggles), would you still maintain that the person would
>>not understand what he/she is doing?
>
>Yes, I would still maintain it, if the picture were encoded, say, as pictures
>are on a laser disc (a perfectly reasonable coding scheme for a computer to
>use).  Can *you* understand .GIF files by "looking" at them?
>
Aren't you getting confused? If this is the way the info for the Chinese
squiggles is coded, then there is no connection between the man's mind and the
CR system, whether or not part of his brain is used to house the database, the
rule book and the 'scraps of paper'. The man is just a part of the system, and
demanding that he understand is just as sensible as demanding that any single
part of the brain understands.

>However, I don't think that this response addresses the problem, which is
>*still* that syntax can't yield semantics.  Adding some kind of enriched
>senory signal will not solve this, as Searle argued in his response to the
>"Robot Reply". 
> 
I find this all just talk as long as you don't tell me what this 'semantics'
is, such that it cannot arise from processing sensory input.
>
>>If you insist that the person has to 'understand' what the squiggles
>>represent, you have to provide him/her with the same info about the squiggles
>>as he/she has about English words.
>
>I do not claim to have a complete answer as to how people attach meaning to
>symbols.  *No one* claims to have solved this problem in a satisfying manner.
>However, what we seem to have is a pretty good idea of how it *can't*
>be solved, and one of these ways is by invoking pure syntax.  
>
You mean you are sure that it (semantics) cannot be produced by processing
sensory input? If it can, isn't it then in the final analysis reducible to
'pure syntax'? If it cannot, then what do you suggest?

>- michael
> 
>


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca
