From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!ames!decwrl!mips!smsc.sony.com!markc Mon Mar  9 18:36:02 EST 1992
Article 4338 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!ames!decwrl!mips!smsc.sony.com!markc
From: markc@smsc.sony.com (Mark Corscadden)
Subject: Re: Definition of understanding
Message-ID: <1992Mar7.020514.24261@smsc.sony.com>
Keywords: Digital Iconoclasm
Organization: Sony Microsystems Corp, San Jose, CA
References: <1992Feb29.162020.9271@psych.toronto.edu> <1992Mar3.025214.26880@smsc.sony.com> <1992Mar3.095145.18304@leland.Stanford.EDU>
Date: Sat, 7 Mar 92 02:05:14 GMT

In article <1992Mar3.095145.18304@leland.Stanford.EDU> shibe@leland.Stanford.EDU (Eric Schaible) writes:
|>In article <1992Mar3.025214.26880@smsc.sony.com>, markc@smsc.sony.com (Mark Corscadden) writes:
|>|> 
|>|> This establishes that a person can have abilities of which they are
|>|> completely unaware, and that an excellent way to produce this state
|>|> of affairs is to have people blindly memorize look-up tables!  
|>
|>First of all, you have not established that a person can have abilities of
|>which they are completely unaware, you have merely told a story in which
|>you have more or less stipulated that this is the case.

Where did I stipulate that the person would be unaware of their ability
to play the hypothetical game I mentioned?  I did stipulate various
conditions, but the lack of awareness is a consequence of those conditions,
not a stipulation in and of itself.

|>However, suppose I argue as follows:  The attribution of understanding
|>requires a phenomenological experience of understanding.
|>If there is no experience of understanding, we want to say that there is
|>no understanding ...

I agree here.  I also do not want to use the word "understand" unless
there is an experience of understanding taking place.  I believe I'm
what's called a "realist" when it comes to the phenomenological experience
of understanding: I think that whether or not someone (or something)
experiences understanding is a question of fact.  Not that I claim to
be able to settle such questions in some cases :-)


|>If something (the system, say) 
|>is not experiencing the squiggles/squaggles as meaningful, we cannot say that 
|>the squiggles/squaggles mean anything to the system.

I agree here too.  In fact I personally do not believe that either the
external or the internalized Chinese-conversation-lookup table system
has an experience of understanding, and thus I do not believe that the
squiggles are meaningful to the man/table system.  Except for purposes
of avoiding misunderstanding, my beliefs concerning this are irrelevant
to this discussion.
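For concreteness, the kind of lookup-table system under discussion can be
sketched in a few lines of code.  Everything here (the symbol names, the
table contents) is invented purely for illustration; a real table covering
Chinese conversation would of course be astronomically large:

```python
# A toy "Chinese room" lookup table.  The entries are made-up placeholders;
# the point is only that replies come from blind table lookup.
lookup_table = {
    "squiggle": "squaggle",   # input symbol -> reply symbol
    "squaggle": "squiggle",
}

def room_reply(symbols):
    """Produce replies by pure table lookup, with no interpretation
    of what the symbols mean."""
    return [lookup_table.get(s, "?") for s in symbols]

print(room_reply(["squiggle", "squaggle"]))
```

Whoever (or whatever) executes the lookups can produce correct replies
while having no idea what conversation, game, or language the table
implements, which is exactly the state of affairs at issue above.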

|>Therefore:
|>If the internalized-CR subsystem understands Chinese, then it must be 
|>having a phenomenological experience of understanding Chinese.
|>Do you wish to claim that this is the case?

No.  What I claim, and demonstrate, is that the so-called rebuttal to
the systems reply is an invalid argument.  The hole isn't a tiny nit-pick
either; it's gaping.  My recently posted article, which refers to the
invented properties GAME and CHINESE, attempts to describe this hole
explicitly.  A hopelessly invalid argument can still happen to have a
true conclusion.


|>If so, what you're saying is this:  the man, having internalized the system,
|>is consciously carrying out the rules of the system.  Moreover, he is
|>generating a phenomenological experience of understanding Chinese, although
|>he is not the one doing the experiencing.

Since you are making a statement here about what I am saying, you are
flat-out wrong.  I never said that the man generates a phenomenological
experience of understanding Chinese.  I said that the supposed "rebuttal"
to the systems reply utterly fails to demonstrate that no experience of
understanding exists.


|>Now:  is this internalized system of rules the sort of thing which might have
|>phenomenological experiences?  My intuition is that it is not.

My intuition and yours agree.  However, my demonstration still stands
that the rebuttal to the systems reply fails to support the intuition
we share.  It's a flawed argument, regardless of whether we agree with
its conclusion.

|>Now, if we
|>implement the Chinese Room program in the Dubuque computer, we must conclude
|>the following:
|>
|>The mailboxes in Dubuque are having a collective experience of understanding
|>Chinese.
|>
|>Do you agree?

Matthew P Wiener has pointed out, quite correctly, that no one has yet
given any objective demonstration that would justify our assuming that
the brain does not make essential use of macroscopic quantum mechanical
mechanisms in its operation, in relation to the mind.  (Yes, I'll
answer your question, please bear with me.)  Personally I believe that
tiny localized physical systems (not necessarily restricted to neurons!)
and their interactions along local boundaries will eventually prove to be
sufficient to fully account for the operation of the brain, as it relates
to the mind.  However this is only a strong personal intuition, which is
very different from a proof or even a demonstration which would have
public value.  I believe it so I'm committed to the consequences, but
I don't claim to be able to give anyone any good reason to believe the
same way I do in this specific matter.

To answer your question, given the above I have no choice but to believe
that my *own* mind is an example of a system which has a collective
phenomenological experience which arises out of the interactions of tiny
physical systems which have no individual experiences.  This doesn't
trouble me.  I don't know about the mailboxes in Dubuque, though ;-)

Mark Corscadden
markc@smsc.sony.com
work: (408)944-4086


