Article 3737 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!garrot.DMI.USherb.CA!uxa.ecn.bgu.edu!mp.cs.niu.edu!linac!uchinews!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Message-ID: <1992Feb14.180030.48911@spss.com>
Date: 14 Feb 92 18:00:30 GMT
References: <1992Jan29.190105.25334@aisb.ed.ac.uk> <1992Jan30.001623.12556@bronze.ucs.indiana.edu> <6188@skye.ed.ac.uk>
Organization: SPSS Inc.
Lines: 14
Nntp-Posting-Host: spssrs7.spss.com

In article <6188@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>I do not agree with this sort of move.  Searle presents several
>arguments.  The "classic" Chinese Room is _not_ a brain simulation.
>Maybe you and Searle think it could just as well be a brain
>simulation, but maybe you and Searle are wrong.  To use an argument
>that applies to brain simulation against the classic Chinese Room,
>you have to show that it applies, not just argue that Searle would
>accept it.

To use one of your own favorite arguments: if Searle is right that
intelligence does not come simply from running an algorithm, what
difference does it make if that algorithm happens to be a brain
simulation?  What about Searle's argument is invalidated by running
that kind of program?  Where would the semantics that Searle claims
computers are incapable of come from?
