From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!utgpu!pindor Tue Mar 24 09:54:35 EST 1992
Article 4370 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Definition of understanding
Message-ID: <1992Mar10.150226.14196@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <1992Mar5.141610.20612@oracorp.com> <1992Mar5.201538.1251@psych.toronto.edu> <1992Mar6.145636.13539@gpu.utcs.utoronto.ca> <1992Mar6.220606.22225@psych.toronto.edu>
Date: Tue, 10 Mar 1992 15:02:26 GMT
Lines: 41

In article <1992Mar6.220606.22225@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <1992Mar6.145636.13539@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>>In article <1992Mar5.201538.1251@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>
>>>However, I *do* think that this example does show how much we rely on
>>>interpretation of computer behaviour in assigning meaning to it.  Many
>>
>>Since you are in psychology department, you should know only too well how much
>>we rely on interpretation of other people's behaviour in assigning meaning to
>>what they say.
>>How do you establish a meaning of what someone says? It is YOUR interpretation
>>of what this person means, or is there a better way?
>
>This is *not* what is being discussed, but rather, how *you* assign meaning
>to what *you* say.  This is certainly *not* a matter of other people's
>interpretations.  You *know* what you mean.  You may be wrong (according)
>to other people), but you *know* what you mean.  To deny this is to be
>ideologically anaesthetized. 
>
I thought we were discussing 'understanding' in the context of AI! If so,
how I assign meaning to what *I* say is irrelevant. Even how you assign meaning
to what *you* say is irrelevant. What is relevant is how we assign meaning to
what some entity other than ourselves 'says' or does. In particular, if this
other entity is a machine, trying to judge the machine's performance by the
standards of our own subjective 'feelings' is unreasonable. You may say 'but
that is relevant to AI claims'. In a sense yes, this may be the ultimate claim
of strong AI, but solid judgements about whether these claims have substance
will have to wait until we have some objective grasp on these 'subjective
feelings'. Minsky's analysis of the use of the word 'understanding' is very
much to the point. It concentrates on what is tangible, whereas appeals to a
'subjective feeling of understanding' can only lead to more shouting matches.
>
>- michael
>


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca
