Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Definition of understanding
Organization: Department of Psychology, University of Toronto
References: <34431@uflorida.cis.ufl.EDU> <1992Feb29.090346.13556@ccu.umanitoba.ca> <34476@uflorida.cis.ufl.EDU>
Message-ID: <1992Mar4.204244.25110@psych.toronto.edu>
Keywords: Searle Chinese Dead Horses
Date: Wed, 4 Mar 1992 20:42:44 GMT

In article <34476@uflorida.cis.ufl.EDU> fred@mole.cis.ufl.edu (Fred Buhl) writes:
>>You are wrong about one thing above. The Chinese room is capable of
>>learning. What do you suppose happens to all those slips of paper that
>>are passed into the CR? They are of course saved, and referenced for
>>later use by the system. (just what do you think would happen if you
>>asked the CR to repeat the previous 10 questions?)  
>>Learning - voila!
>
>There's no explicit statement in Searle's BBS article about saving the
>slips.  If the CR could repeat the previous ten questions asked of it
>(and I'm not sure that's a question Searle would allow, although it's
>clearly a Turing-test-like question), saving the slips would be
>required.  I guess you can call that learning, if you like,
>but if so, then a word-processor can learn since it can retrieve text
>that was typed into it before.  When I think of "learning" I think of
>some change in the behavior of an agent as a result of experience.
>Your "previous ten questions" example would not have a lasting
>influence on the CR's behavior.  

Learning does not change *anything* as far as the man in the room is
concerned.  Learning may make the CR seem more "real", but it has no
effect on Searle's argument.  In general, to claim that the CR argument
fails because of some computational inadequacy in the specific method
Searle suggests is simply to miss the point.  The argument is unchanged
no matter *what* kind of algorithm you run in the CR.
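
To make that concrete, here's a toy sketch of a rule-following loop
with a transcript bolted on (Python; the rule entries and the
"repeat-last-ten" trigger are my own inventions, since Searle describes
the rulebook only in prose).  Saving the slips buys the system recall,
but note that the operator is still doing nothing except matching and
copying uninterpreted strings:

    # The rulebook: uninterpreted input strings mapped to
    # uninterpreted output strings.  Entries are placeholders.
    RULEBOOK = {"squiggle-squiggle": "squoggle-squoggle"}

    transcript = []   # the saved slips of paper

    def chinese_room(symbols):
        transcript.append(symbols)           # file the slip away
        if symbols == "repeat-last-ten":     # a hypothetical rule
            # Copy back the ten slips filed just before this one
            # (or as many as exist), verbatim.
            return transcript[-11:-1]
        return RULEBOOK.get(symbols, "squoggle-by-default")

Whether the state lives in a transcript, in a rulebook, or in a net's
weights, the operator's job is exactly the same.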

>This brings up an interesting aspect of the CR.  Searle never
>proposes questions that require following a series of statements
>(i.e., discourse); he just supposes it's being asked single questions.
>That's a major simplification of the Turing test, IMHO.  A truly
>humongous table-lookup could handle it, I suppose, but it'd be REALLY
>humongous.

Again, this does not change the argument.  Make it humongous.  Make it
a neural net.  It doesn't change the point.
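
For what it's worth, here's roughly why the table would have to be so
big (Python sketch; all entries are placeholders).  To handle
discourse, the key has to be the entire conversation so far, not the
current question alone:

    # Lookup table keyed on the whole history of the exchange.
    TABLE = {
        ("q1",):       "a1",
        ("q1", "q2"):  "a2-after-q1",   # the same question needs a
        ("q3", "q2"):  "a2-after-q3",   # different entry under a
    }                                   # different history

    def reply(history):
        return TABLE[tuple(history)]

    print(reply(["q1", "q2"]))   # -> a2-after-q1

With V possible utterances and exchanges up to N turns long, that's on
the order of V**N entries.  Humongous indeed -- but the man in the
room wouldn't notice the difference.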

>I still claim:  The CR can't understand since it has no way of
>attaching meaning to its symbols; this is because it doesn't have any
>other I/O channels to attach meaning through, and no learning ability
>to perform the attachment.  If it could _learn_ Chinese starting from
>zero, I'd be more impressed.

The "Robot Reply" is the attempt to attach additional I/O channels,
and Searle argues that this changes nothing, since the input *for the
man in the room* is still just squiggles & squoggles.  I'd suggest
that you go back and peruse the original BBS article.
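
A toy version of Searle's point there (Python; the "camera frame" is a
made-up stand-in for whatever transducers the Robot Reply bolts on):
extra channels just deliver more formal tokens, and the matching
procedure treats them all alike:

    # Same formal matching, whatever the channel.  The entry is
    # a placeholder.
    RULEBOOK = {b"\xe4\xb8\xad": b"squoggle"}

    def operate(token):
        # 'token' may be the bytes of a Chinese character or of a
        # camera frame; the rule-follower can't tell and needn't.
        return RULEBOOK.get(token, b"squoggle-by-default")

    chinese_char = b"\xe4\xb8\xad"       # a Chinese character's bytes
    camera_frame = bytes([12, 200, 37])  # "pixels" -- also just bytes
    print(operate(chinese_char), operate(camera_frame))

The man's epistemic position is unchanged: squiggles in, squoggles
out, no semantics anywhere in the loop.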


- michael
