From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Mar 24 09:54:38 EST 1992
Article 4375 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Definition of understanding
Organization: Department of Psychology, University of Toronto
References: <1992Mar6.145636.13539@gpu.utcs.utoronto.ca> <1992Mar6.220606.22225@psych.toronto.edu> <1992Mar10.150226.14196@gpu.utcs.utoronto.ca>
Message-ID: <1992Mar10.171111.6954@psych.toronto.edu>
Date: Tue, 10 Mar 1992 17:11:11 GMT

In article <1992Mar10.150226.14196@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>In article <1992Mar6.220606.22225@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>>In article <1992Mar6.145636.13539@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>>>In article <1992Mar5.201538.1251@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>>
>>>>However, I *do* think that this example does show how much we rely on
>>>>interpretation of computer behaviour in assigning meaning to it.  Many
>>>
>>>Since you are in psychology department, you should know only too well how much
>>>we rely on interpretation of other people's behaviour in assigning meaning to
>>>what they say.
>>>How do you establish a meaning of what someone says? It is YOUR interpretation
>>>of what this person means, or is there a better way?
>>
>>This is *not* what is being discussed, but rather, how *you* assign meaning
>>to what *you* say.  This is certainly *not* a matter of other people's
>>interpretations.  You *know* what you mean.  You may be wrong (according
>>to other people), but you *know* what you mean.  To deny this is to be
>>ideologically anaesthetized. 
>>
>I thought we were discussing 'understanding' in the context of AI! If so,
>how I assign meaning to what *I* say is irrelevant. Even how you assign meaning
>to what *you* say is irrelevant. What is relevant is how we assign meaning to
>what some entity other than ourselves 'says' or does. In particular, if this
>other entity is a machine, trying to judge the machine's performance by the
>standards of our own subjective 'feelings' is unreasonable. You may say 'but
>that is relevant to AI claims'. In a sense yes, this may be an ultimate claim
>of strong AI, but solid judgements about whether these claims have substance
>have to wait till we have some objective grasp on these 'subjective feelings'.
>Minsky's analysis of the use of the word 'understanding' is very much to the
>point. It concentrates on what is tangible, whereas referring to a 'subjective
>feeling of understanding' can only lead to more shouting matches.

Again, I think you miss the point of Searle's argument.  He is, essentially,
taking Turing up on his suggestion in "Computing Machinery and Intelligence"
that the only way to determine whether a computer is conscious is to *be* the
machine.  This is *exactly* what the Chinese Room example attempts to do.
The question it asks is "If *you* were the man inside, would *you*
understand?"  No reference to how we determine whether *others* understand
is needed; this point is made explicitly in Searle's original paper.  This
is not an "Other Minds" problem.  Thus Searle sidesteps the very real
problems of defining "understanding": all you have to do is determine
whether *you* would have it in the CR situation.

- michael