From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!psinntp!scylla!daryl Thu Jan 16 17:20:08 EST 1992
Article 2684 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Causes and Reasons
Message-ID: <1992Jan14.005053.15003@oracorp.com>
Organization: ORA Corporation
Date: Tue, 14 Jan 1992 00:50:53 GMT

Jeff Dalton writes:

>>If Searle claims that what is missing in a syntactic simulation of a
>>mind is that there is no causal connection between the words being
>>manipulated and the real-world objects to which the words refer, then
>>the use of sensors changes things significantly. 

> But that isn't what he claims.  He discusses the case of adding
> manipulators, sensors, and so forth, and argues that it doesn't
> help.

He doesn't argue that it doesn't help; he simply asserts that it
doesn't, just as he asserted that there was no understanding in the
Chinese Room. Searle certainly does claim that what is missing in a
computer simulation of intelligence is "causal powers", but he fails
to clarify *what* causal powers are missing. A robot with sensors
certainly has some causal powers, so his argument that mere syntax
cannot have such powers does not apply in the case of the robot.

Obviously, Searle doesn't mean "causal powers" in the simplistic way I
am interpreting the term. He means it in the sense of "whatever it is
that humans have that computers don't".

Daryl McCullough
ORA Corp.
Ithaca, NY