From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Dec 16 11:00:33 EST 1991
Article 1978 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Chinese Room, from a different perspective
Message-ID: <1991Dec9.172934.4032@psych.toronto.edu>
Keywords: ai philosophy searle expert system
Organization: Department of Psychology, University of Toronto
References: <71692@nigel.ee.udel.edu> <5732@skye.ed.ac.uk> <1991Dec9.123757.29236@wpi.WPI.EDU>
Date: Mon, 9 Dec 1991 17:29:34 GMT

In article <1991Dec9.123757.29236@wpi.WPI.EDU> ancona@wpi.WPI.EDU (James P Ancona) writes:

>I think a real weakness of Searle's argument is that his Chinese
>Room system has no memory.  I don't think that any strong AI proponent
>would argue that one could create an intelligent system with simply a
>static program and a CPU.  You would have to have memory containing
>modifiable data structures.  When the system is able to learn and modify
>itself, it becomes more than 'a man, a room and a few slips of paper' (to
>paraphrase Searle, since I don't have a copy in front of me).

OK, the man now has a pencil and paper.  Where's the understanding?

- michael


