Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!tarpit!cs.ucf.edu!schnitzi
From: schnitzi@cs.ucf.edu (Mark Schnitzius)
Newsgroups: comp.ai.philosophy
Subject: Re: Chinese Room, from a different perspective
Keywords: ai philosophy searle expert system
Message-ID: <schnitzi.691688857@eola.cs.ucf.edu>
Date: 2 Dec 91 15:47:37 GMT
References: <1991Nov7.151439.3353@osceola.cs.ucf.edu> <1991Nov11.011527.28514@midway.uchicago.edu> <70105@nigel.ee.udel.edu> <5698@skye.ed.ac.uk>
Sender: news@cs.ucf.edu (News system)
Organization: University of Central Florida
Lines: 44

jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

>In article <70105@nigel.ee.udel.edu> lintz@cis.udel.edu (Brian Lintz) writes:

>>I always thought basically the same thing. Searle starts with the
>>premise that the man doesn't know Chinese, and ends with the 
>>conclusion that the man doesn't know Chinese. He assumed what he
>>intended to prove. 

>Sigh.  Your paraphrase of the argument is faulty.

>Searle starts with the premise that the man doesn't know Chinese.
>Then the man gets a bunch of rules that let him answer queries
>in Chinese.  And the man still doesn't know Chinese.  Conclusion:
>the rules didn't give the man the ability to understand Chinese.

>The circularity you identify above is simply not there, though
>perhaps some other circularity is.

Perhaps this is it (and this has always been my argument with
the Chinese Room analogy): the question of whether the man knows
Chinese is irrelevant to the argument.  The man is only one part
of the system, where the man plus the rules together define the
system.  Saying that the man does not know Chinese is like saying
that a single cell in your brain is not intelligent, or that a
computer which happens to be running an intelligent program is not
itself intelligent.  The system as a whole must be considered.
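
To make the "man plus rules" point concrete, here is a toy sketch
in Python.  It is not anything from Searle's paper: the rule
entries and their glosses are my own invention, and a real rule
book would of course be enormously larger than a lookup table.
The point is only that the rule-follower manipulates symbols it
never interprets:

    # A toy "rule book": maps input symbol strings to output
    # symbol strings.  Entries and translations are invented
    # for illustration only.
    RULE_BOOK = {
        "你好吗": "我很好",            # "How are you?" -> "I'm fine."
        "你叫什么名字": "我没有名字",  # "What's your name?" -> "I have no name."
    }

    def follow_rules(query):
        """The man's entire job: match symbols, copy symbols out.
        Nothing in here represents what any symbol means."""
        # Fallback reply: "Please say that again."
        return RULE_BOOK.get(query, "请再说一遍")

    print(follow_rules("你好吗"))      # prints: 我很好

All of the "competence" lives in RULE_BOOK; follow_rules() would
be exactly as mechanical no matter how large the rule set grew.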

It may be useful to consider an extension of the Chinese Room
thought experiment in which a great many men, each in his own room,
simulate individual brain cells, or better yet, quantum-level
interactions (by rolling dice, maybe?).  If this could somehow be
pulled off (and I'm not saying it can, even if we ignore the
practical difficulties), would the man in any one room be
considered intelligent?  Of course not, because he has no view of
the "big picture" -- he is only part of it.  Yet, back in the
Chinese Room, we are usually asked to consider whether the man has
any understanding of Chinese.  That question just confuses the
issue.
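
For what it's worth, here is a crude sketch of that many-rooms
setup in Python.  Nothing about it is meant as a serious model of
a brain: the number of rooms, the ring arrangement, the local
update rule, and the noise rate standing in for the dice are all
arbitrary choices of mine.

    import random

    NUM_ROOMS = 100

    # One bit of state per room; the starting pattern is arbitrary.
    state = [random.choice([0, 1]) for _ in range(NUM_ROOMS)]

    def room_update(i):
        """What the man in room i does: read the notes passed in
        from his two neighbours, occasionally roll the dice (our
        stand-in for the quantum-level interaction), and pass a
        note out.  He never sees the system as a whole."""
        left = state[(i - 1) % NUM_ROOMS]
        right = state[(i + 1) % NUM_ROOMS]
        if random.random() < 0.05:     # the occasional dice roll
            return random.choice([0, 1])
        return left ^ right            # a deliberately dumb local rule

    # Update every room in lockstep for a few steps.
    for step in range(10):
        state = [room_update(i) for i in range(NUM_ROOMS)]

No single call to room_update() is a remotely plausible place to
look for intelligence or understanding; whatever interesting
behaviour there is belongs to the whole array of rooms, which is
precisely the point about the man in the original room.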

-------------------------
Mark Schnitzius
University of Central Florida
schnitzi@eola.cs.ucf.edu