From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Mar  9 18:35:29 EST 1992
Article 4288 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Definition of understanding
Organization: Department of Psychology, University of Toronto
References: <1992Mar3.220206.6241@beaver.cs.washington.edu> <1992Mar4.210627.28060@psych.toronto.edu> <1992Mar5.001144.28065@beaver.cs.washington.edu>
Message-ID: <1992Mar5.203720.4209@psych.toronto.edu>
Date: Thu, 5 Mar 1992 20:37:20 GMT

In article <1992Mar5.001144.28065@beaver.cs.washington.edu> pauld@cs.washington.edu (Paul Barton-Davis) writes:
>
>
>In article <1992Mar4.210627.28060@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>>In article <1992Mar3.220206.6241@beaver.cs.washington.edu> pauld@cs.washington.edu (Paul Barton-Davis) writes:

>>>Chris, why must you always attack strawmen ? Why bother with easy
>>>questions ("does a system that shuffles symbols understand the
>>>symbols") when more interesting and difficult ones are around ("does a
>>>system that models its own symbol shuffling understand the symbols") ?
>>
>>And how, pray tell, does the system "model its own symbol shuffling"
>>without simply shuffling symbols?  How does this avoid being a regress?
>
>It doesn't do anything other than shuffle symbols. The question in the
>CR is not whether or not all shuffled symbols are understood, but
>specifically, are the chinese symbols understood ?  The regress is
>avoided because I am specifically noting that the symbols used to
>provide understanding are not understood.

The statement that somehow including additional symbols will yield understanding
is simply an assertion.  Unless you have an explanation of how you get
understanding out of purely syntactic rules, it will remain simply an assertion.

>This a subtle variant on the systems reply, because it says that as
>is, the CR does *not* understand chinese (in the subjective sense that
>Searle meant), but that the same mechanisms that enable it to shuffle
>chinese symbols will also enable it to gain subjective understanding.

This sentence makes absolutely no sense to me.  The "CR does *not*
understand Chinese" and yet somehow "gain[s] subjective understanding"?!

>The smoke and mirrors are only present, IMHO, in the use of the word
>"subjective" and Searle and just about everybody else hasn't
>contributed anything to the question of how such a thing can exist.

Searle's contribution is in how such a thing can *not* exist, namely, by
the purely syntactic manipulation of symbols.

>I note also that the idea of asking the CR "do you understand
>chinese ?"  goes to part of the heart of the implausability of Searle's
>gedankenexperiment. The idea that the response to this question could
>be derived by simply shuffling the *same* set of symbols as were
>shuffled when asked "what is 3 times 3 ?" is absurd. Asking about
>internal states implies tapping into a different level of
>representation that the chinese symbols exist upon. The only way out
>of this is to require a handler for all questions who syntactic form
>imply a "do you ..." question.  Since any handler for this that could
>generate meaningful answers to the range of possible questions would
>probably count as an instantiantion of personhood, and since Searle
>did not describe such a handler (which clearly has to have some
>semantic abilities), to ask such questions of the CR is
>absurd. 

Once more, "specific implementations and architectures do not make a
principled difference."  Have the bloody computer answer the questions using
any algorithm you like; it makes no difference to the form of the argument.
  
If you want to postulate a specialized mechanism that "clearly has to have
some semantic abilities," then go ahead, as long as you explain *where*
the semantics comes from...

- michael
