From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan 16 17:21:46 EST 1992
Article 2722 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Causes and Reasons
Message-ID: <5983@skye.ed.ac.uk>
Date: 14 Jan 92 22:10:21 GMT
Article-I.D.: skye.5983
References: <1992Jan14.005053.15003@oracorp.com>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 39

In article <1992Jan14.005053.15003@oracorp.com> daryl@oracorp.com writes:
>Jeff Dalton writes:
>
>>>If Searle claims that what is missing in a syntactic simulation of a
>>>mind is that there is no causal connection between the words being
>>>manipulated and the real-world objects to which the words refer, then
>>>the use of sensors changes things significantly. 
>
>> But that isn't what he's claims.  He discusses the case of adding
>> manipulators, sensors, and so forth, and argues that it doesn't
>> help.
>
>He doesn't argue that it doesn't help, he simply asserts that it
>doesn't help, just like he asserted that there was no understanding in
>the Chinese Room.

If I had more time, I would address your claim that Searle presents
only assertions and not arguments.  In any case, I repeat that
Searle does not claim that what is missing in a syntactic
simulation is that there is no causal connection between the
words and the objects.

> Searle certainly does claim that what is missing in
>a computer simulation of intelligence is "causal powers", but he fails
>to clarify *what* causal powers are missing. A robot with sensor
>certainly has some causal powers, so his argument that mere syntax
>cannot have such powers does not apply in the case of the robot.

Those are not the causal powers Searle is talking about.  As I've
said before, the key phrase is "brains cause minds".  Not, BTW,
"brains cause objects to be manipulated" or "minds have causal
connections with objects".

>Obviously, Searle doesn't mean "causal powers" in the simplistic way I
>am interpreting them. He means them in the sense of "whatever it is that
>humans have that computers don't".

That's right, more or less.  But he doesn't claim it's confined
to humans (see his remarks on Martians and green slime).
