From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!spool.mu.edu!wupost!micro-heart-of-gold.mit.edu!uw-beaver!pauld Wed Feb 26 12:54:19 EST 1992
Article 3992 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!spool.mu.edu!wupost!micro-heart-of-gold.mit.edu!uw-beaver!pauld
From: pauld@cs.washington.edu (Paul Barton-Davis)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <1992Feb25.011840.24663@beaver.cs.washington.edu>
Date: 25 Feb 92 01:18:40 GMT
References: <1992Feb21.012616.9016@husc3.harvard.edu> <43846@dime.cs.umass.edu> <1992Feb24.223405.28054@psych.toronto.edu>
Sender: news@beaver.cs.washington.edu (USENET News System)
Organization: Computer Science & Engineering, U. of Washington, Seattle
Lines: 52

In article <1992Feb24.223405.28054@psych.toronto.edu> christo@psych.toronto.edu (Christopher Green) writes:
>
>Precede everything I say here with "If your interpretation is right...."
>
>In article <43846@dime.cs.umass.edu> orourke@sophia.smith.edu (Joseph O'Rourke) writes:
>>	His more substantive critique hinged on Searle asking
>>the memorizer if he understood Chinese.  In his SciAm article
>>(which I do have in front of me), Searle says, "There is nothing
>>in the 'system' that is not in me, and since I don't understand
>>Chinese," etc.  Hofstadter likened asking the daemon executing 
>>the program whether it understands Chinese, to asking the neurons
>>if they understand Chinese.  
>
>But it's exactly the opposite. Once the man has memorized the rules and
>symbols, the system is part of him, not the other way around.  Asking
>neurons is asking a part if it has properties of the whole. Asking the
>man if he understands is asking the whole if it has (in the limited sense
>of "contains") the properties of its parts. Before you jump on me for
>fallacy of composition (or whatever it's called) consider that if 
>Hofstadter won't accept the answer of the daemon under these conditions,
>then his thought experiment is irrefutable; i.e., there is no conceivable 
>evidence that would force him to back down. Thus, his question becomes
>analytic, due to his definitions, and his answer, though right, no longer
>bears on the clearly empirical question of whether or not the man/system/
>daemon understands Chinese.

This shows that you don't fully grasp the Systems Reply. When you
address the question "do you understand Chinese?" to the man who has
learnt the rules, what are you addressing? You claim that the system
is a part of him, but in what way? If you want to claim that the
apparently conscious entity we normally think we are dealing with can
memorize the rules, the Systems Reply includes a component which says
"nonsense - impossible". Some other type of system, specifically a
lower-level, much more rapid system, must be responsible for learning
the rules if they are to be internalized in this way. The man's brain
*might* be able to internalize the rules and provide access to them in
a form that lets the man report the outcome of their application.
However, it is preposterous to think that the "virtual machine"
running "on top" of the brain's hardware is capable of performing this
feat. Thus, your interactions are via a front-end, hopefully one with
a back-end that has memorized the rules for manipulating Chinese. I
put it to you that this is identical to a native speaker, and thus the
man will naturally and without effort find himself answering "yes" to
the question (should he choose to do so, since he has some option of
allowing the back-end to see the question as Chinese or not).
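As a toy illustration of the front-end/back-end split sketched above
(entirely my own gloss, not part of Searle's or Hofstadter's setups:
the class and rule names are invented), the point is that the entity
you question is only the front-end, which may or may not hand the
symbols down to a lower-level system that has internalized the rules:

```python
# Illustrative only: a "front-end" conscious reporter sitting on top of a
# "back-end" that has mechanically internalized symbol-manipulation rules.

RULES = {"ni hao": "hello"}  # stand-in for the memorized rule book


class BackEnd:
    """Low-level, rapid system that has internalized the rules."""

    def apply_rules(self, symbols):
        # Pure symbol manipulation; no report of "understanding" here.
        return RULES.get(symbols, "?")


class FrontEnd:
    """The apparently conscious entity we address questions to."""

    def __init__(self, backend):
        self.backend = backend

    def answer(self, question, pass_to_backend=True):
        # The man has "some option" of letting the back-end see the
        # question as Chinese, or of withholding it.
        if pass_to_backend:
            return self.backend.apply_rules(question)
        return "I just see squiggles"


man = FrontEnd(BackEnd())
print(man.answer("ni hao"))         # answered via the back-end
print(man.answer("ni hao", False))  # back-end never sees the symbols
```

On this picture, asking the front-end "do you understand Chinese?" only
probes whichever path the man routes the question through.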

-- paul


-- 
Computer Science Laboratory	  "truth is out of style" - MC 900ft Jesus
University of Washington 		<pauld@cs.washington.edu>