From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!bu2.bu.edu!bu.edu!m2c!nic.umass.edu!dime!orourke Mon Mar  9 18:34:48 EST 1992
Article 4222 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!bu2.bu.edu!bu.edu!m2c!nic.umass.edu!dime!orourke
From: orourke@unix1.cs.umass.edu (Joseph O'Rourke)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <44256@dime.cs.umass.edu>
Date: 3 Mar 92 16:43:46 GMT
References: <1992Feb28.211025.26278@oracorp.com> <1992Feb29.162020.9271@psych.toronto.edu> <44140@dime.cs.umass.edu> <1992Mar2.172515.15389@psych.toronto.edu>
Sender: news@dime.cs.umass.edu
Reply-To: orourke@sophia.smith.edu (Joseph O'Rourke)
Organization: Smith College, Northampton, MA, US
Lines: 98

In article <1992Mar2.172515.15389@psych.toronto.edu> 
	christo@psych.toronto.edu (Christopher Green) writes:

[A detailed version of the memorizing counter to the Systems Reply]

Thanks for the expanded version of your argument contra the Systems
Reply.  I must admit it is more compelling now.  But let me point
out what I see as weak links.

>The man in the Room (consciously) memorizes all of the rules and the
>shapes of all the symbols.  Then he (consciously) implements those
>rules in attempting to construct Chinese answers to the Chinese questions
>he receives.  In doing this, he satisfies the requirements of being
>                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>a Turing machine. 
>^^^^^^^^^^^^^^^^

This could be misleading, in that he is a Turing machine plus:  he
can speak English, and he has all the normal mentality of a human.  He
is not solely a Turing machine for the CR program.  Not that you said so, 
but I don't want the man turned into a Turing machine:  he is a man
with part of his brain acting like a Turing machine.
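For concreteness, the kind of purely formal rule-following at issue
can be sketched as a toy program (my own illustration, not anything
from Searle's paper or your post): a minimal Turing machine that
consults a rule table keyed on (state, symbol) and mechanically
rewrites the tape.  The rule table here is invented; the point is only
that nothing in the machinery attaches any meaning to the symbols it
shuffles.

```python
# Toy Turing machine: follows a rule table blindly, much as the man in
# the Room follows his memorized rules.  The rules are arbitrary
# (they flip 0s and 1s and halt on a blank) -- a hypothetical example.
RULES = {
    ("scan", "0"): ("scan", "1", +1),  # (state, read) -> (state, write, move)
    ("scan", "1"): ("scan", "0", +1),
    ("scan", " "): ("halt", " ", 0),
}

def run(tape):
    cells = list(tape) + [" "]        # tape with one blank appended
    state, head = "scan", 0
    while state != "halt":
        state, cells[head], move = RULES[(state, cells[head])]
        head += move
    return "".join(cells).rstrip()

print(run("0110"))  # prints "1001"
```

Whether such blind rule-following can ever amount to understanding is,
of course, exactly the question at issue.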

>Because his answers are indistinguishable from those
>that would be given by a native Chinese speaker, he also passes the Turing
>test. We are now, under the TT, expected to say that he understands
>Chinese.  If you ask him, however, he says he doesn't understand
>          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>Chinese but, rather, that he's just executing these rules about
>^^^^^^^
>symbol manipulation. 

This point is not clear to me.  How do you know what he would say if
you asked him?  Suppose the CR program's I/O was via the man's normal
sensory organs, rather than slips of paper.  This doesn't seem to
change the essence of the thought experiment, but shifts the intuition
for me at least.  For then when you speak to the man in Chinese, he
answers reasonably in Chinese.  If you ask him in Chinese if he knows
what a hamburger is, he will explain exactly what it is.  When I
speak in English, the process is similar:  I hear a question, something
mysterious goes on in my head, and I speak the answer.  And I have the 
internal experience of "understanding."  It seems to me quite possible that
when the man is answering a question with his CR-program, what he
may feel inside is "understanding."  I am not claiming this must
happen, but I don't see why you can be so sure he will say he doesn't
understand.

>To put the point bluntly, the Chinese symbols
>                          ^^^^^^^^^^^^^^^^^^^
>have no reference for him, though his English symbols do. 
>^^^^^^^^^^^^^^^^^^^^^^^^^

You say they have no reference, but when he is conversing in Chinese,
he can explain exactly what a hamburger is.  In what sense do they
have no reference?  It seems they must have a reference in the CR-program,
otherwise the program would not be able to pass the TT under close
interrogation.  What you mean is they have no reference to "him,"
him the English-speaker.  First, as I discussed above, this is
not clear to me.  Second, it seems possible that the symbols have
no reference to him the English-speaker, but they do have reference
to him the Chinese-speaker.  It is true this would be exceedingly
odd, but the entire scenario is so odd that I would not want
to prejudge the psychology.

>Everything
>so far has been conscious and above board. At this point, Hofstadter
>and Dennett, or at least the caricatures of them that have been
>inhabiting this discussion of late, want to claim that he understands
                                             ^^^^^^^^^^^^^^^^^^^^^^^^^
>Chinese, only unconsciously? Why suddenly unconscious? 
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^

I cannot find this claim directly in Hofstadter's reply.  Rather he says 
that from the "system's-eye view" there is understanding.  I don't
see that he talks of consciousness, but the implication of his
position is that there are two points of view, and that from the
system's viewpoint, there is consciousness.  He doesn't seem
to say whether this would be unconscious from the memorizer's 
viewpoint.

	In summary, there seem to be two ways to counter your
argument:

		1. The man really does understand Chinese, at
		   least during the time that "he" is speaking
		   Chinese.

		2. The system really does understand Chinese, but
		   there is no communication between the system
		   and the memorizer.  So yes, there are two minds
		   in the one head.

Position #1 is tenable if you believe that consciousness is
"nothing but" a complex of internal dispositional states, and that
consciousness is how it "feels" when you have those states.  Position #2 is
tenable if you believe the memorizer scenario is so unnatural
that our intuitions about the unity of consciousness no longer
apply.


