From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Mar  9 18:35:59 EST 1992
Article 4332 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Definition of understanding
Organization: Department of Psychology, University of Toronto
References: <1992Mar5.001144.28065@beaver.cs.washington.edu> <1992Mar5.203720.4209@psych.toronto.edu> <1992Mar6.172308.15113@beaver.cs.washington.edu>
Message-ID: <1992Mar6.223154.26703@psych.toronto.edu>
Date: Fri, 6 Mar 1992 22:31:54 GMT

In article <1992Mar6.172308.15113@beaver.cs.washington.edu> pauld@cs.washington.edu (Paul Barton-Davis) writes:
>In article <1992Mar5.203720.4209@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>>Searle's contribution is in how such a thing can *not* exist, namely, by
>>the purely syntactic manipulation of symbols.
>
>I can't disagree with that. However, Searle went a little further with
>the Chinese room by allowing the questioner to ask questions of the
>form "Do you ... ?" I don't know of anyone who has claimed that purely
>syntactic manipulation of a given set of symbols could ever provide an
>answer to questions of this form.

Anyone who has ever claimed that Strong AI could succeed has implicitly
made this claim, since Strong AI claims that minds can be produced through
the manipulation of marks in a purely syntactic fashion.

> What does syntax say you should do
>with queries where the referrent is not external ?

Good question.


>>If you want to postulate a specialize mechanism that "clearly has to have
>>some semantic abilities," then go ahead, as long as you explain *where*
>>the semantics comes from...
>
>I'd instead suggest that the debate over syntax vs. semantics is
>useless.  The categories it establishes are too rigid. I'd rather just
>say: manipulating symbols according to some (possibly stochastic)
>algorithm doesn't constitute understanding, but representing such
>manipulations with other symbol manipulations can.

How?  And how is "representing such manipulations WITH OTHER SYMBOL
MANIPULATIONS" *not* a regress?


> The man in the CR
>doesn't represent his own shuffling; external observers as well as the
>mythical "system's eye view" do.
>
>If Searle's "memorizer" man was to represent his own shuffling the way
>that your or I represent our own brain activity, I strongly believe he
>would answer, with conviction, "yes" when asked if he understood
>Chinese. If you could see "understand" as a term that you apply as a
>description rather than as an experience, whether or not the object is
>"you" or someone else, this might be clearer.

I refuse to make this last suggested move, as it seems to deny the
possibility that others could be *wrong* about my understanding.  There
certainly *is* something special about the access I have to *my*
understanding, something that I don't have when examining others'
understanding.  To argue against this is to retreat into behaviourism.

- michael
