From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!aunro!alberta!ubc-cs!uw-beaver!cornell!rochester!steyn Tue Nov 26 12:32:04 EST 1991
Article 1556 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca rec.arts.books:10582 sci.philosophy.tech:1091 comp.ai.philosophy:1556
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!aunro!alberta!ubc-cs!uw-beaver!cornell!rochester!steyn
From: steyn@cs.rochester.edu (Gavin Steyn)
Newsgroups: rec.arts.books,sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Searle (was Re: Daniel Dennett (was Re: Comme
Message-ID: <1991Nov25.023006.27696@cs.rochester.edu>
Date: 25 Nov 91 02:30:06 GMT
References: <MATT.91Nov24000158@physics.berkeley.edu> <94066@brunix.UUCP> <1991Nov24.201501.5845@husc3.harvard.edu>
Organization: Computer Science Department University of Rochester
Lines: 32

In article <1991Nov24.201501.5845@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:
>In article <94066@brunix.UUCP> 
>cgy@cs.brown.edu (Curtis Yarvin) writes:
>
>>Unless I am terribly confused about Searle's point in the "Chinese room"
>>argument, it stems from a simplistic confusion of software and hardware. 
>
>Not only are you terribly confused about Searle's point; not having
>bothered to read his article, you are terribly ignorant to argue about it.
>In "Minds, Brains, and Programs" Searle explicitly says: "let the
>individual internalise all of these elements of the system. [...]  All the
>same, he understands nothing of the Chinese, and *a fortiori* neither does
>the system, because there isn't anything in the system that isn't in him."
>(See the Boden anthology, p.73.)

Well, I've read Searle's article (a couple of times, actually, just to make
sure I understood his point), so I guess I'm fit to comment...
  As I see it, having the guy internalize the system doesn't change the
objection at all.  There *is* a part of the system not in the person,
namely the understanding embodied by the rules.  (Admittedly, this assertion
of mine probably needs some defending, but so does Searle's assertion to the
contrary.  If he can define his problems away *a fortiori*, I can too...)

  Actually, to tell the truth, I fall into the camp that believes Searle's
whole argument is irrelevant--if I ever invented a system that acted
like it understood Chinese, I really wouldn't give a damn whether or not it
*actually* understood Chinese (whatever actually understanding Chinese may
mean); I'd just use it for whatever purpose I'd designed it for.

Gavin Steyn
steyn@cs.rochester.edu
"There are times...when one wonders, 'Do pants really matter?'"
