From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!christo Mon Mar  9 18:34:52 EST 1992
Article 4229 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!christo
From: christo@psych.toronto.edu (Christopher Green)
Subject: Re: Definition of understanding
Organization: Department of Psychology, University of Toronto
References: <44140@dime.cs.umass.edu> <1992Mar2.172515.15389@psych.toronto.edu> <44256@dime.cs.umass.edu>
Message-ID: <1992Mar3.203249.23251@psych.toronto.edu>
Date: Tue, 3 Mar 1992 20:32:49 GMT

In article <44256@dime.cs.umass.edu> orourke@sophia.smith.edu (Joseph O'Rourke) writes:
>In article <1992Mar2.172515.15389@psych.toronto.edu> 
>	christo@psych.toronto.edu (Christopher Green) writes:
>
>>The man in the Room (consciously) memorizes all of the rules and the
>>shapes of all the symbols.  Then he (consciously) implements those
>>rules in attempting to construct Chinese answers to the Chinese questions
>>he receives.  In doing this, he satisfies the requirements of being
>>                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>a Turing machine. 
>>^^^^^^^^^^^^^^^^
>
>This could be misleading, in that he is a Turing machine plus:  plus he
>can speak English, and has all the normal mentality of a human.  He
>is not solely a Turing machine for the CR program.  

For the PURPOSES OF THE CR PROGRAM, he is indeed solely a Turing machine.
For the purpose of the question-asking about understanding afterwards,
he is more. None of his CR activities violate any Turing Machine constraints.

>Not that you said so, 
>but I don't want the man turned into a Turing machine:  he is a man
>with part of his brain acting like a Turing machine.

This "part of his brain" talk is what is misleading.

>>Because his answers are indistinguishable from those
>>that would be given by a native Chinese speaker, he also passes the Turing
>>test. We are now, under the TT, expected to say that he understands
>>Chinese.  If you ask him, however, he says he doesn't understand
>>          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>Chinese but, rather, that he's just executing these rules about
>>^^^^^^^
>>symbol manipulation. 
>
>This point is not clear to me.  How do you know what he would say if
>you asked him?  

Simple. None of the Chinese characters he manipulates have any reference
for him. He doesn't even know which are nouns and which are verbs. Without
these (necessary but not sufficient) requirements, he can't possibly know
the meanings of either the questions or his replies. Thus, he does
not understand.
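The point can be put in programming terms with a deliberately toy sketch
(the rule table below is invented for illustration; it is not Searle's or
anyone's actual program): a pure rule-follower that maps question-tokens to
answer-tokens by shape alone, with no representation anywhere of what any
token refers to.

```python
# Toy Chinese-Room-style responder: pure symbol manipulation.
# The rules are matched purely by token shape; nothing in this
# program distinguishes nouns from verbs or encodes any meaning.
# (Rule table is a made-up illustration, not a real grammar.)

RULES = {
    ("ni", "hao", "ma"): ("wo", "hen", "hao"),        # shapes in, shapes out
    ("hanbao", "shi", "shenme"): ("yi", "zhong", "shiwu"),
}

def respond(symbols):
    """Return the output tokens dictated by the rule table.

    The lookup is the whole 'understanding': there is no reference,
    no semantics, just a syntactic match on the input sequence.
    """
    return RULES.get(tuple(symbols), ("bu",))

print(respond(["ni", "hao", "ma"]))  # ('wo', 'hen', 'hao')
```

The responder can be extended to pass arbitrarily many exchanges without
the program (or its operator) ever acquiring what the tokens mean; that is
the syntax/semantics gap at issue.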

>Suppose the CR program's I/O was via the man's normal
>sensory organs, rather than slips of paper.  

How do you think he examines the slips of paper? Vision, of course.

>This doesn't seem to
>change the essence of the thought experiment, but shifts the intuition
>for me at least.  For then when you speak to the man in Chinese, he
>answers reasonably in Chinese.  If you ask him in Chinese if he knows
>what a hamburger is, he will explain exactly what it is.  When I
>speak in English, the process is similar:  I hear a question, something
>mysterious goes on in my head, and I speak the answer.  And I have the 
>internal experience of "understanding."  It seems to me quite possible that
>when the man is answering a question with his CR-program, what he
>may feel inside is "understanding."  I am not claiming this must
>happen, but I don't see why you can be so sure he will say he doesn't
>understand.

I would say you've been overly influenced by behaviorism.

>>To put the point bluntly, the Chinese symbols
>>                          ^^^^^^^^^^^^^^^^^^^
>>have no reference for him, though his English symbols do. 
>>^^^^^^^^^^^^^^^^^^^^^^^^^
>
>You say they have no reference, but when he is conversing in Chinese,
>he can explain exactly what a hamburger is.  In what sense do they
>have no reference?  It seems they must have a reference in the CR-program,
>otherwise the program would not be able to pass the TT under close
>interrogation.  

You have completely conflated syntax and semantics here, as I have said
(and said and said) before. It may turn out to be the case that semantics
is reducible to syntax, but no one has shown it to be true. Moreover,
the kind of semantic theory you imply -- semantic holism -- is under
strenuous attack from a number of quarters, including from within
the computationalist community itself. Take a look at Fodor's
_Psychosemantics_.

>>Everything
>>so far has been conscious and above board. At this point, Hofstadter
>>and Dennett, or at least the caricatures of them that have been
>>inhabiting this discussion of late, want to claim that he understands
>                                             ^^^^^^^^^^^^^^^^^^^^^^^^^
>>Chinese, only unconsciously? Why suddenly unconscious? 
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
>I cannot find this claim directly in Hofstadter's reply.  Rather he says 
>that from the "system's-eye view" there is understanding.  

Perhaps H doesn't use this term, but certainly his defenders here have
been using it. One went so far as to claim that the man now has
multiple-personality disorder. In any case, what would you call it when an
otherwise healthy human can speak to you in perfect Chinese and then, in
English, tell you in all honesty that he doesn't understand a word of it?
If he's got understanding of Chinese (and I claim that he does not), then
he must have it below the level of his conscious awareness, i.e.,
unconsciously.

>	In summary, there seem to be two ways to counter your
>argument:
>
>		1. The man really does understand Chinese, at
>		   least during the time that "he" is speaking
>		   Chinese.

This is obscurantism as well. It does massive violence to the notion of
understanding being used. If you want to redefine terms, by all means do,
but don't pretend that you're using them in their usual sense.
>
>		2. The system really does understand Chinese, but
>		   there is no communication between the system
>		   and the memorizer.  So yes, there are two minds
>		   in the one head.
>
Still, I see no reason to accept this conclusion, except as a stop-gap
to save the Turing Test. It is a wild speculation that buys us nothing.



-- 
Christopher D. Green                christo@psych.toronto.edu
Psychology Department               cgreen@lake.scar.utoronto.ca
University of Toronto
---------------------


