From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!micro-heart-of-gold.mit.edu!uw-beaver!pauld Tue Mar 24 09:54:29 EST 1992
Article 4361 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!micro-heart-of-gold.mit.edu!uw-beaver!pauld
From: pauld@cs.washington.edu (Paul Barton-Davis)
Subject: Re: Definition of understanding
Message-ID: <1992Mar9.185702.22812@beaver.cs.washington.edu>
Sender: news@beaver.cs.washington.edu (USENET News System)
Organization: Computer Science & Engineering, U. of Washington, Seattle
References: <1992Mar6.223154.26703@psych.toronto.edu> <1992Mar7.010644.1466@beaver.cs.washington.edu> <1992Mar9.162941.1959@psych.toronto.edu>
Date: Mon, 9 Mar 92 18:57:02 GMT
Lines: 88

In article <1992Mar9.162941.1959@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <1992Mar7.010644.1466@beaver.cs.washington.edu> pauld@cs.washington.edu (Paul Barton-Davis) writes:
>>In article <1992Mar6.223154.26703@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>
>>>> What does syntax say you should do
>>>>with queries where the referent is not external?
>>>
>>>Good question.    
>>
>>Glad you agree. Here's one (quickly cooked up) answer: there is
>>nothing internal. To the mechanisms that generate speech, everything,
>>including a mind, is external.
>
>Huh?  

I'm disappointed by this response :-)

Let's suppose, in keeping with Pandemonium-like models for speech
generation, that the *mechanisms* involved don't involve anything
close to what we mean by "consciousness". These mechanisms generate
noise (or rather, would do so if it made it through to the vocal
tract) that ultimately ends up having meaning, but not about anything
that is internal to the speech-generating mechanisms (even if the
sounds include things like "I" and "my feelings"). Instead, the
meanings are about abstractions that exist on some other level.

In keeping with such a Pandemonium-like model, there has to be some
filtering going on, but there's no reason why this can't operate at a
level some distance above that of the actual speech generation. It
could, perhaps, operate at a level where representations of things
like "my feelings" and "I" already exist. At such a level, speech
that is said to refer to these abstractions would appear to have real
content, even though to the levels that produced it, it would be
meaningless.
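
To make this concrete, here's a toy sketch in Python (entirely my own
invention; the demons and the SELF_REPRESENTATIONS list just stand in
for whatever the real mechanisms might be):

import random

# Low-level "demons": each blindly shouts a candidate fragment with
# some activation strength. None of them has any notion of a self.
DEMONS = [
    lambda: ("I am fine", random.random()),
    lambda: ("my feelings are hurt", random.random()),
    lambda: ("blue cheese", random.random()),
]

# The higher-level filter holds representations of abstractions like
# "I" and "my feelings"; candidates mentioning them look contentful
# to it, though the demons know nothing about any of this.
SELF_REPRESENTATIONS = ["I ", "my feelings"]

def speak():
    candidates = [d() for d in DEMONS]
    # Keep only candidates the higher level can bind to its
    # abstractions; to the demons this selection is meaningless.
    grounded = [(utt, s) for utt, s in candidates
                if any(rep in utt for rep in SELF_REPRESENTATIONS)]
    if not grounded:
        return None    # nothing makes it through to the vocal tract
    return max(grounded, key=lambda p: p[1])[0]

print(speak())

The point is just that the selection criterion lives at a level that
already has "I" and "my feelings" as objects; the demons don't.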

In the context of the Chinese Room, this picture reiterates the
Systems Reply: you won't find anything that "understands" at the
level of symbol manipulation - you need to step up a level (or
several). When you address questions of the form "Do you ..." to the
room, the mechanisms might all be in place to form a reply (shuffling
bits of paper, or remembering a cell of the lookup table, or some
rule), but the absence of any higher-level filter that abstracts a
"self" prevents them from ever having any practical effect.

That is to say, if you ask your speech centers if they understand
English, they won't/can't say anything because they have no model of
self. If you ask a higher level ("you"), at which such an abstraction
exists, you might get some reasonable answers.
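
The same point as a toy sketch (again invented from whole cloth; the
class names and the self_model dictionary are illustrations, nothing
more):

class SpeechCenters:
    # Pure symbol shuffling: no representation of a self anywhere,
    # so a "Do you ..." question gets no purchase at this level.
    def answer(self, question):
        return None

class HigherLevel:
    # A level at which an abstraction of "self" already exists.
    def __init__(self):
        self.self_model = {"understands": ["English"]}

    def answer(self, question):
        if question.startswith("Do you understand "):
            language = question.rstrip(" ?").split()[-1]
            if language in self.self_model["understands"]:
                return "Yes"
            return "No"
        return None

print(SpeechCenters().answer("Do you understand English?"))  # None
print(HigherLevel().answer("Do you understand English?"))    # Yes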

[ Note also that Searle's use of language manipulation for his example
  shows up another aspect of the dirty intellectual trick he plays.
  Searle wants us to believe that his model could instantiate a Chinese
  speaker. My reading of most language studies suggests that his model
  has almost no similarity to the way that we actually generate speech,
  which wouldn't be so bad if one could believe that an alternative
  (rules, bits of paper, lookup tables, whatever) would work just as
  well. However, as most computer-generated speech researchers would
  acknowledge, this is simply not true.

  The most likely conclusion that someone interacting with the room
  would have, IMHO, is "Oh, it's like there's a guy in there shuffling
  symbols."

  They wouldn't bother to ask if the guy understood the language - he
  evidently does not.
]

>Yes, I *do* believe that I have special access to my understanding, or
>at least to my *beliefs* about my understanding.  I *know* when I believe
>I understand Chinese.  I may be wrong that I in fact *do* understand it,
>but, unlike any other person, I cannot be wrong about my *belief* that
>I understand it. I *do* stand in a privileged position with regard to
>my mental states.  (Otherwise, to use a favorite example, we'd need a doctor
>to tell us whether we were in pain or not.)
>
>If you wish to deny an individual privileged access to their mental states,
>fine, but it's going to take a *lot* of argument. 

Why would I dispute this? What I'm questioning is your belief (:-)
that you know what these mental states *are*. You claim that they
cannot arise out of mere symbol shuffling (implemented by silicon or
carbon or cardboard or rubber). I dispute that you have any knowledge
about the origin of your mental states, at least if you only
introspect on them.

-- paul
-- 
Computer Science Laboratory	  "truth is out of style" - MC 900ft Jesus
University of Washington 		<pauld@cs.washington.edu>