From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Mar  9 18:35:11 EST 1992
Article 4260 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Definition of understanding
Organization: Department of Psychology, University of Toronto
References: <1992Mar3.025214.26880@smsc.sony.com> <1992Mar3.201743.20894@psych.toronto.edu> <1992Mar3.220206.6241@beaver.cs.washington.edu>
Message-ID: <1992Mar4.210627.28060@psych.toronto.edu>
Date: Wed, 4 Mar 1992 21:06:27 GMT

In article <1992Mar3.220206.6241@beaver.cs.washington.edu> pauld@cs.washington.edu (Paul Barton-Davis) writes:
>In article <1992Mar3.201743.20894@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:
>>In article <1992Mar3.025214.26880@smsc.sony.com> markc@smsc.sony.com (Mark Corscadden) writes:
>>>
>>>Can you imagine memorizing a large look-up table of actions and then
>>>carrying out the actions called for by the table without ever having
>>>any personal understanding of the purpose behind the actions?  Even
>>>when virtually anyone in a position to watch your table-driven
>>>actions (say, from an outside perspective unavailable to you) would
>>>have no problem understanding their purpose?  I have no trouble
>>>imagining such a state of affairs.
>>
>>Neither do I. And it's clear from your description that the person engaging
>>in the activities would not UNDERSTAND what they were doing, whereas a
>>native Chinese speaker does. This is the point. Whether or not others
>>can make sense of their behavior is irrelevant.
>
>Chris, why must you always attack strawmen?  Why bother with easy
>questions ("does a system that shuffles symbols understand the
>symbols") when more interesting and difficult ones are around ("does a
>system that models its own symbol shuffling understand the symbols")?

And how, pray tell, does the system "model its own symbol shuffling"
without simply shuffling symbols?  How does this avoid being a regress?

(As I've noted before, I believe that "self-modeling" explanations are
merely smoke and mirrors.)
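
To make the worry concrete, here is a toy sketch in Python (my own
illustration, with made-up symbols Q1/R1/M1; this comes from me, not
from anyone in the thread).  The point is that the "self-model" is
just one more lookup over uninterpreted symbols:

    # Level 0: the memorized table.  Uninterpreted query symbols
    # mapped to uninterpreted response symbols.
    TABLE = {"Q1": "R1", "Q2": "R2"}

    # Level 1: a "self-model".  A second table whose entries purport
    # to describe the first table's shuffles, but whose keys and
    # values are still uninterpreted symbols.
    META_TABLE = {("Q1", "R1"): "M1", ("Q2", "R2"): "M2"}

    def respond(query):
        """Pure table-driven symbol shuffling."""
        return TABLE[query]

    def model_own_shuffling(query):
        """'Modeling' the shuffle is just one more table lookup."""
        response = respond(query)
        return META_TABLE[(query, response)]

Nothing about the second table differs in kind from the first.  To
"understand" M1 you would need a third table, and so on up: that is
the regress.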

>
>If the person engaged in the above activity were to spend several
>years watching their own responses to the queries, what makes you so
>certain that they would not then understand at least some of the
>symbols?

What does this have to do with the original problem?  He obviously
wouldn't understand *immediately*, while the memorized CR (Chinese
Room) would *immediately* be spouting Chinese.  This demonstrates that
the man's understanding is *not* the same as the memorized CR's
(if it has any), which is the point under dispute.
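
The asymmetry is easy to make concrete (again my own toy sketch, with
made-up symbols):

    from collections import Counter

    TABLE = {"Q1": "R1", "Q2": "R2"}   # hypothetical memorized rules

    def memorized_cr(query):
        # Fluent on the very first query: no history needed.
        return TABLE[query]

    history = Counter()

    def observe_own_response(query):
        # The man watching himself over the years: any understanding
        # he builds depends on this slowly growing record, so it
        # can't be what made the first responses fluent.
        response = memorized_cr(query)
        history[(query, response)] += 1
        return response

The table is fluent at t=0; the record of observations that might
someday ground the man's understanding is empty at t=0.  So whatever
his eventual understanding amounts to, it isn't what produces the
fluent behavior.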

- michael