From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!samsung!crackers!m2c!wpi.WPI.EDU!ancona Mon Dec  9 10:49:07 EST 1991
Article 1972 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!samsung!crackers!m2c!wpi.WPI.EDU!ancona
From: ancona@wpi.WPI.EDU (James P Ancona)
Newsgroups: comp.ai.philosophy
Subject: Re: Chinese Room, from a different perspective
Keywords: ai philosophy searle expert system
Message-ID: <1991Dec9.123757.29236@wpi.WPI.EDU>
Date: 9 Dec 91 12:37:57 GMT
References: <5698@skye.ed.ac.uk> <71692@nigel.ee.udel.edu> <5732@skye.ed.ac.uk>
Organization: Worcester Polytechnic Institute
Lines: 27

In article <5732@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>I'd say you have it backwards.  The unlikely assumption is simply the
>strong AI that Searle is trying to refute.  The book is the program
>that's supposed to be sufficient for understanding.  If you want to
>attack Searle's argument at that point, you have to argue that the
>book of rules is not a fair representative for a program.
>
>-- jd
I think a real weakness of Searle's argument is that his Chinese
Room system has no memory.  I don't think any strong AI proponent
would argue that an intelligent system could be built from just a
static program and a CPU.  You would also need memory containing
modifiable data structures.  Once the system is able to learn and modify
itself, it becomes more than 'a man, a room and a few slips of paper' (to
paraphrase Searle, since I don't have a copy in front of me).
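To make the distinction concrete, here is a rough sketch (my own, not
anything from Searle or Dalton, and the rules and responses are made up)
contrasting a purely static rulebook with a system that modifies its own
data structures as it goes:

```python
# Hypothetical illustration: a static rulebook vs. a room with memory.

# Static "rulebook": the same input always yields the same output,
# and nothing the system experiences can ever change the rules.
STATIC_RULES = {"ni hao": "ni hao"}

def static_room(symbol):
    return STATIC_RULES.get(symbol, "?")

# A room with memory: it records what it has seen and can rewrite
# its own rule table based on that history.
class LearningRoom:
    def __init__(self):
        self.rules = {"ni hao": "ni hao"}
        self.history = []

    def respond(self, symbol):
        self.history.append(symbol)
        # Modify its own data structures: after seeing an unknown
        # symbol twice, invent a new rule for it (a stand-in for
        # whatever real learning rule one might propose).
        if symbol not in self.rules and self.history.count(symbol) >= 2:
            self.rules[symbol] = "xie xie"
        return self.rules.get(symbol, "?")
```

The static room gives the same answer forever; the second room's
behavior depends on its past inputs, which is the kind of self-modifying
state I mean above.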

In other words, as the man becomes a smaller part of the 'Chinese Room 
system', it becomes easier to believe that even though the man doesn't
understand Chinese, the system does.

Jim


-- 
Jim Ancona                     | Internet:     ancona@wpi.wpi.edu
                               | Packet Radio: n1adj@ka1mf.ampr.org
Opinions expressed are my own! | 


