From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!christo Wed Feb 26 12:54:11 EST 1992
Article 3979 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!christo
From: christo@psych.toronto.edu (Christopher Green)
Subject: Re: Definition of understanding
Message-ID: <1992Feb24.223405.28054@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <43686@dime.cs.umass.edu> <1992Feb21.012616.9016@husc3.harvard.edu> <43846@dime.cs.umass.edu>
Date: Mon, 24 Feb 1992 22:34:05 GMT


Preface everything I say here with "If your interpretation is right...."

In article <43846@dime.cs.umass.edu> orourke@sophia.smith.edu (Joseph O'Rourke) writes:
>And in article <1992Feb22.234252.17095@psych.toronto.edu>
>	christo@psych.toronto.edu (Christopher Green) writes:
>>Please tell me what you find "good" about Hofstadter & Dennett's reply.
>>I have it here in front of me and it seems to boil down to "no human
>>could ever memorize all those symbols and rules." 
>
>His argument was not
>simply that no human could memorize all the rules.  He pointed
>that out so that the reader would be aware that when Searle
>leans on their intuition by saying "obviously the system doesn't
>understand," it is not so obvious, because the situation is so
>unrealistic that intuition is a poor guide.

I believe this is to misunderstand why philosophy uses non-actual examples.
If you don't like it with all the rules and symbols necessary to capture
Chinese, try it with a limited artificial language that requires only, say,
five symbols and three recombination rules.  You still don't get understanding
(i.e., you don't know how the string should be interpreted); you only get
well-formed formulae of some uninterpreted formal system.
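
To make that concrete, here is a toy version of the exercise (a sketch
only; the particular symbols, the rules, and the choice of Python are
mine and entirely arbitrary).  The program below churns out strings of a
five-symbol language by blindly applying three recombination rules;
nothing in it knows, or needs to know, what the strings mean:

    # Five symbols and three recombination (rewrite) rules: an
    # arbitrary toy formal system, chosen only for illustration.
    SYMBOLS = {"a", "b", "c", "d", "e"}
    RULES = {"a": "bc", "b": "de", "c": "a"}   # the three rules

    def derive(s, passes=3):
        """Apply the rules to each symbol, left to right, per pass.

        Symbols are manipulated purely by shape; no interpretation
        is assigned anywhere in the program.
        """
        for _ in range(passes):
            s = "".join(RULES.get(ch, ch) for ch in s)
        return s

    print(derive("a"))   # -> "debc": well-formed, but meaning-free

Anyone who memorized RULES could produce exactly the strings this
program produces, and would be no closer to knowing what (if anything)
they are about.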

>	His more substantive critique hinged on Searle asking
>the memorizer if he understood Chinese.  In his SciAm article
>(which I do have in front of me), Searle says, "There is nothing
>in the 'system' that is not in me, and since I don't understand
>Chinese," etc.  Hofstadter likened asking the daemon executing 
>the program whether it understands Chinese, to asking the neurons
>if they understand Chinese.  

But it's exactly the opposite. Once the man has memorized the rules and
symbols, the system is part of him, not the other way around.  Asking
neurons is asking a part if it has properties of the whole. Asking the
man if he understands is asking the whole if it has (in the limited sense
of "contains") the properties of its parts. Before you jump on me for
fallacy of composition (or whatever it's called) consider that if 
Hofstadter won't accept the answer of the daemon under these conditions,
then his thought experiment is unrefutable; i.e., there is no conceivable 
evidence that would force him to back down. Thus, his question becomes
analytic, due to his definitions, and his answer, though right, no longer
bears on the clearly empirical question of whether or not the man/system/
daemon understands Chinese.

>Asking the daemon is not asking
>the system.  This is clearer if one imagines the program being
>executed by a billion tiny daemons.
 
I don't see how.

>	Searle identifies the system with the memorizer, and so
>concludes that the system doesn't understand because the memorizer
>doesn't understand.  Hofstadter says the system does understand,
>even though the memorizer does not.

This is precisely the analyticizing of the question I mentioned above.
Hofstadter has made the question impossible to fruitfully ask. If the answer
is "yes, I understand" he claims victory. If it is "no, I don't" he claims
that the wrong entity has been questioned. Heads I win, tails you lose.


-- 
Christopher D. Green                christo@psych.toronto.edu
Psychology Department               cgreen@lake.scar.utoronto.ca
University of Toronto
---------------------


