Article 4411 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!Sirius.dfn.de!zrz.tu-berlin.de!math.fu-berlin.de!news.netmbx.de!unido!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply II
Message-ID: <6389@skye.ed.ac.uk>
Date: 11 Mar 92 18:45:12 GMT
References: <1992Mar6.193001.20994@oracorp.com>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 23

In article <1992Mar6.193001.20994@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>The Systems Reply, in my opinion, is not a debating tactic on the part
>of computationalists; it is at the heart of computationalism.

It's interesting that so much of the net time is spent arguing,
not that any system that's running the right program understands,
but that any system with the right behavior understands.

I suspect that part of the reason for this is that it's difficult
to find good arguments for the claim about running the right program.

>Physical objects do not possess (unique) mental properties, but
>systems do.

And physical objects can't be systems?  I don't even know what
your claim means.

>Thus the Systems Reply doesn't prove the Strong AI position, but it
>does show that Searle's Chinese Room argument (without supplementary
>arguments) has no force in disproving the Strong AI position.

I'd agree that it shows Searle has not proved his conclusion,
but I don't think it requires an elaborate argument to show that.
