Article 4221 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!gatech!mailer.cc.fsu.edu!uflorida!mole.cis.ufl.edu!fred
From: fred@mole.cis.ufl.edu (Fred Buhl)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Keywords: Searle Chinese Dead Horses
Message-ID: <34476@uflorida.cis.ufl.EDU>
Date: 2 Mar 92 14:54:38 GMT
References: <1992Feb27.180811.4244@ccu.umanitoba.ca> <34431@uflorida.cis.ufl.EDU> <1992Feb29.090346.13556@ccu.umanitoba.ca>
Sender: news@uflorida.cis.ufl.EDU
Organization: Univ. of Florida CIS Dept.
Lines: 55

>Ah my freind, the question is not whether the Chinese room can
>understand frobnatz (without seeing one) its whether a BLIND MAN
>can understand frobnatz without seeing one? So you say that
>a blind man cannot understand things that he does not see.
>In my recollection, Hellen Kehler (sp?) did just fine understanding
>the world, while she had neither the sense of sight or hearing!
>Did you know she read, and typed Braille - you should read her
>autobiography sometime!

My friend, if you had read Helen Keller's autobiography (or seen the
movie about her, "The Miracle Worker", starring Patty Duke), you
would know that she made _no_ progress toward understanding until her
teacher had the insight of taking her out to the water pump, pumping
water on her hand while finger-spelling the word "water".  In this
way, she was able to assign meaning to the finger-spelled word, using
the sense of touch as the other I/O channel.  (I'm glad you brought her
up;  she was the inspiration for my statement).

>You are wrong about one thing above. The Chinese room is capable of
>learning. What do you suppose happens to all those slips of paper that
>are passed into the CR? They are of course saved, and referenced for
>later use by the system. (just what do you think would happen if you
>asked the CR to repeat the previous 10 questions?)  
>Learning - voila!

There's no explicit statement in Searle's BBS article about saving the
slips.  If the CR could repeat the previous ten questions asked of it
(and I'm not sure that's a question Searle would allow, although it's
clearly a Turing-test-like question), saving the slips would be
required.  I guess you can call that learning, if you like, but if so,
then a word processor can learn, since it can retrieve text that was
typed into it before.  When I think of "learning" I think of some
change in the behavior of an agent as a result of experience.  Your
"previous ten questions" example would have no lasting influence on
the CR's behavior.
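
To make that distinction concrete, here's a toy sketch (my own illustration, not anything from Searle's article) contrasting mere storage-and-replay with experience that changes future behavior:

```python
# Toy illustration: storage vs. learning (names and behavior are mine).

class WordProcessor:
    """Stores input verbatim; future behavior is unchanged by it."""
    def __init__(self):
        self.buffer = []

    def type_in(self, text):
        self.buffer.append(text)

    def retrieve(self, n):
        # Can replay the last n inputs, but answers every other
        # request exactly as it did before.  Replay, not learning.
        return self.buffer[-n:]

class LearningAgent:
    """Experience alters how it responds to future inputs."""
    def __init__(self):
        self.associations = {}

    def experience(self, symbol, referent):
        # e.g. feeling water on the hand while "water" is spelled:
        self.associations[symbol] = referent

    def respond(self, symbol):
        return self.associations.get(symbol, "no meaning attached")

wp = WordProcessor()
wp.type_in("question 1")
wp.type_in("question 2")
assert wp.retrieve(2) == ["question 1", "question 2"]

agent = LearningAgent()
assert agent.respond("water") == "no meaning attached"
agent.experience("water", "wet stuff on the hand")
assert agent.respond("water") == "wet stuff on the hand"  # behavior changed
```

The word processor passes the "repeat the previous ten questions" test without anything we'd want to call learning; only the second agent's later responses are shaped by its earlier experience.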

This brings up an interesting aspect of the CR.  Searle never
proposes questions that require following a series of statements
(i.e., discourse); he just supposes it's being asked single questions.
That's a major simplification of the Turing test, IMHO.  A truly
humongous table lookup could handle it, I suppose, but it'd be REALLY
humongous.
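
How humongous?  A back-of-the-envelope sketch (my own toy numbers, nothing from the article): if the table must key on the entire conversation so far, rather than on single questions, its size grows exponentially with dialogue length.

```python
# Toy estimate (assumed numbers): a lookup table that handles discourse
# must have one entry per possible conversation prefix.

# Suppose, very conservatively, only 1000 distinct sentences can occur
# at each turn, and dialogues run just 10 turns deep.
sentences_per_turn = 1000
turns = 10

# One table entry per possible conversation prefix of each length:
entries = sum(sentences_per_turn ** t for t in range(1, turns + 1))
print(f"discourse table: about 10^{len(str(entries)) - 1} entries")

# A single-question table, by contrast, needs only:
print(f"single-question table: {sentences_per_turn} entries")
```

Even with these absurdly small assumptions the discourse table needs on the order of 10^30 entries, versus 1000 for the single-question case Searle actually describes.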

I still claim:  the CR can't understand, since it has no way of
attaching meaning to its symbols; it has no other I/O channels through
which to attach meaning, and no learning ability with which to perform
the attachment.  If it could _learn_ Chinese starting from zero, I'd
be more impressed.

---------------------------------------------------------------------------
Fred Buhl, Grad Student        A proud member of the Union of
UF Computer Science Dept.      Unconcerned Scientists.       
fred@reef.cis.ufl.edu          "Ants are smart.  _Really_ smart." 
---------------------------------------------------------------------------
                    <<In Stereo Where Available>>