From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!uunet!brunix!cgy Tue Nov 26 12:32:04 EST 1991
Article 1557 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca rec.arts.books:10584 sci.philosophy.tech:1092 comp.ai.philosophy:1557
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!uunet!brunix!cgy
From: cgy@cs.brown.edu (Curtis Yarvin)
Newsgroups: rec.arts.books,sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Searle (was Re: Daniel Dennett (was Re: Comme
Message-ID: <94092@brunix.UUCP>
Date: 25 Nov 91 02:52:45 GMT
References: <MATT.91Nov24000158@physics.berkeley.edu> <94066@brunix.UUCP> <1991Nov24.201501.5845@husc3.harvard.edu>
Sender: news@brunix.UUCP
Organization: Brown University Department of Computer Science
Lines: 21

In article <1991Nov24.201501.5845@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:
>In article <94066@brunix.UUCP> 
>cgy@cs.brown.edu (Curtis Yarvin) writes:
>
>>Unless I am terribly confused about Searle's point in the "Chinese room"
>>argument, it stems from a simplistic confusion of software and hardware. 
>
>In "Minds, Brains, and Programs" Searle explicitly says: "let the
>individual internalise all of these elements of the system. [...]  All the
>same, he understands nothing of the Chinese, and *a fortiori* neither does
>the system, because there isn't anything in the system that isn't in him."

It's irrelevant whether the instructions are on paper or in his memory (as
indeed they must be, if he is going to execute them).  My analogy still
holds: human brain :: CPU and DRAMs.  Surely nobody would claim that a DRAM
understands Chinese.
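The analogy can be made concrete with a toy sketch (hypothetical, not from
Searle or the original post): the rulebook is a lookup table, and the
executor matches symbol shapes against it without consulting any meanings.
Neither the function nor the memory holding the table "understands"
anything.

```python
# Toy "Chinese room": a hypothetical rulebook mapping input symbols to
# output symbols.  The entries stand in for Searle's instruction book.
RULEBOOK = {
    "你好": "你好，很高兴见到你",   # shape-matched reply to a greeting
    "再见": "再见",                 # shape-matched reply to a farewell
}

def room(symbols: str) -> str:
    """Return the rulebook's output for the given input symbols.

    The 'man in the room' does nothing but compare shapes and copy
    the listed response; no meaning is consulted anywhere.
    """
    return RULEBOOK.get(symbols, "？")  # unknown shapes get a shrug

print(room("你好"))
```

On this sketch, the open question in the thread is exactly where the
"understanding" would live: not in `room` (the executor), not in the dict's
storage (the DRAMs), but, if anywhere, in whatever process produced the
table's entries.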

However, I would doubt that anyone who didn't know Chinese could write the
instructions.

c