From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Mar 24 09:54:55 EST 1992
Article 4396 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Definition of understanding
Organization: Department of Psychology, University of Toronto
References: <1992Mar7.010644.1466@beaver.cs.washington.edu> <1992Mar9.162941.1959@psych.toronto.edu> <1992Mar9.185702.22812@beaver.cs.washington.edu>
Message-ID: <1992Mar11.182542.5325@psych.toronto.edu>
Date: Wed, 11 Mar 1992 18:25:42 GMT

In article <1992Mar9.185702.22812@beaver.cs.washington.edu> pauld@cs.washington.edu (Paul Barton-Davis) writes:
>In article <1992Mar9.162941.1959@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>>In article <1992Mar7.010644.1466@beaver.cs.washington.edu> pauld@cs.washington.edu (Paul Barton-Davis) writes:
>>>In article <1992Mar6.223154.26703@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>>
>>>>> What does syntax say you should do
>>>>>with queries where the referent is not external?
>>>>
>>>>Good question.    
>>>
>>>Glad you agree. Here's one (quickly cooked up) answer: there is
>>>nothing internal. To the mechanisms that generate speech, everything,
>>>including a mind, is external.
>>
>>Huh?  
>
>I'm disappointed by this response :-)

I'm not surprised.  I was more than a little confused.  But I believe I
now have something of a handle on what you mean.

>Let's suppose, in keeping with Pandemonium-like models for speech
>generation, that the *mechanisms* involved don't involve anything
>close to what we mean by "consciousness". These mechanisms generate
>noise (or rather, would do so if they made it through to the vocal
>tract) that ultimately ends up having meaning, but not about anything
>that is internal to the speech generating mechanisms (even if the
>sounds include things like "I" and "my feelings"). Instead the
>meanings are about abstractions that exist on some other level.
>
>In keeping with such a pandemonium-like model, there has to be some
>filtering going on, but there's no reason why this can't operate at a
>level some distance above that of the actual speech generation. It
>could, perhaps, operate on a level where there were representations of
>things like "my feelings" and "I" already in existence. At such a
>level, speech that is said to refer to these abstractions would appear
>to have real content, even though to the levels that produced it, it
>would be meaningless.
>
>In the context of the Chinese Room, this reiterates the Systems
>Reply in that you won't find anything that "understands" at the level
>of symbol manipulation - you need to step up a level (or several).
>When you address questions of the form "Do you ..." to the room, the
>mechanisms might all be in place to form a reply (shuffling bits of
>paper, or remembering a cell of the lookup table or some rule), but
>the absence of any higher-level filter that abstracts a "self"
>prevents them from ever having any practical effect. 

I'm beginning to lose the thread here, so please be patient.  Are you
implying that the CR situation is not possible (that no response to
"Do you..." questions would be possible) because there is no "self-modelling"?
If so, then I think that this is simply wrong, because the CR situation
can be generalized to an algorithm that *does* do whatever you want
in the way of self-modelling -- see the sketch below.
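To make this concrete, here is a minimal sketch of what I mean: a
purely syntactic rule-follower extended with a "self-model".  (This is
just my own toy illustration in Python; all the rule and token names
are invented, and nothing hangs on the details.)  The rules that
answer "Do you..." queries consult a stored table, but that
consultation is itself nothing more than symbol shuffling:

    # Ordinary rules: uninterpreted token in, uninterpreted token out.
    RULES = {
        "QUERY-WEATHER": "REPLY-SUNNY",
        "QUERY-GREETING": "REPLY-HELLO",
    }

    # The "self-model": also just a table of tokens.  The machinery
    # treats these entries no differently from the ones above.
    SELF_MODEL = {
        "QUERY-DO-YOU-UNDERSTAND": "REPLY-YES-I-UNDERSTAND",
        "QUERY-ARE-YOU-IN-PAIN": "REPLY-NO-PAIN",
    }

    def room(token):
        """Answer a query by pure table lookup -- syntax all the way."""
        if token in SELF_MODEL:
            return SELF_MODEL[token]
        return RULES.get(token, "REPLY-DONT-UNDERSTAND")

    print(room("QUERY-DO-YOU-UNDERSTAND"))  # -> REPLY-YES-I-UNDERSTAND

The point of the sketch is that replies to self-directed queries come
out just as mechanically as any others; whether or not Searle is right,
his argument doesn't turn on the absence of such rules.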

>
>That is to say, if you ask your speech centers if they understand
>English, they won't/can't say anything because they have no model of
>self. If you ask a higher level ("you"), at which such an abstraction
>exists, you might get some reasonable answers.

But my question is *still* where this "abstraction" comes from.  How does
it get produced?

>[ Note also that Searle's use of language manipulation for his example
>  shows up another aspect of the dirty intellectual trick he plays.
>  Searle wants us to believe that his model could instantiate a Chinese
>  speaker. My reading of most language studies suggests that his model
>  has almost no similarity to the way that we actually generate speech,
>  which wouldn't be so bad if one could believe that an alternative
>  (rules, bits of paper, lookup tables, whatever) would work just as well.
>  However, as most computer-generated speech researchers would acknowledge,
>  this is simply not true.
>
>  The most likely conclusion that someone interacting with the room would
>  have, IMHO, is "Oh, it's like there's a guy in there shuffling symbols."
>
>  They wouldn't bother to ask if the guy understood the language - he 
>  evidently does not. 
>]

As I have pointed out above, and to others before, the specifics of the
algorithm have no bearing on the CR example.  The person could be
carrying out *any* algorithm you choose; according to Searle, the argument
still stands.

>>Yes, I *do* believe that I have special access to my understanding, or
>>at least to my *beliefs* about my understanding.  I *know* when I believe
>>I understand Chinese.  I may be wrong that I in fact *do* understand it,
>>but, unlike any other person, I cannot be wrong about my *belief* that
>>I understand it. I *do* stand in a privileged position with regard to
>>my mental states.  (Otherwise, to use a favorite example, we'd need a doctor
>>to tell us whether we were in pain or not.)
>>
>>If you wish to deny an individual privileged access to their mental states,
>>fine, but it's going to take a *lot* of argument. 
>
>Why would I dispute this? What I'm questioning is your belief (:-)
>that you know what these mental states *are*. You claim that they
>cannot arise out of mere symbol shuffling (implemented by silicon or
>carbon or cardboard or rubber). I dispute that you have any knowledge
>about the origin of your mental states, at least if you only
>introspect on them.

I certainly agree that I don't know how my mental states get produced
(at least, not the details).  But the point is that I *can*, from a priori
reasoning, rule out one way my mental states might be produced, namely,
by purely syntactic symbol shuffling.  *This* is the issue under contention.
If you can offer a suggestion as to *how* semantics arises from syntax, then
I'd be happy to debate it.  But as far as I have seen here on the net, and
read from those working in the area, no such successful explanation has
been offered.

- michael