From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sdd.hp.com!wupost!uunet!mcsun!uknet!edcastle!cam Tue Jan 21 09:26:25 EST 1992
Article 2803 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sdd.hp.com!wupost!uunet!mcsun!uknet!edcastle!cam
From: cam@castle.ed.ac.uk (Chris Malcolm)
Newsgroups: comp.ai.philosophy
Subject: Re: Causes and Reasons
Message-ID: <16624@castle.ed.ac.uk>
Date: 16 Jan 92 11:25:52 GMT
References: <1992Jan14.005053.15003@oracorp.com>
Organization: Edinburgh University
Lines: 33

In article <1992Jan14.005053.15003@oracorp.com> daryl@oracorp.com writes:

>Searle certainly does claim that what is missing in
>a computer simulation of intelligence is "causal powers", but he fails
>to clarify *what* causal powers are missing. A robot with sensors
>certainly has some causal powers, so his argument that mere syntax
>cannot have such powers does not apply in the case of the robot.

>Obviously, Searle doesn't mean "causal powers" in the simplistic way I
>am interpreting them. He means them in the sense of "whatever it is that
>humans have that computers don't".

Not so strong. By these specific causal powers he means no more than
whatever is necessary to produce proper understanding. Nor does he
forever deny these causal powers to computers; he merely claims that no
computer could acquire such powers simply by virtue of symbolic
computation, syntactical transformations, or running the right kind of
program. He does not deny that computers in general have causal
powers, merely that they don't inherently have the kind required for
causing understanding, nor can they acquire that kind by running the
right kind of program, etc., as above. He does not deny that they may
acquire them by some other means. He does argue that _simply_ adding
sensors to a Chinese Room type of device won't do.

I agree. I think that the complexity imposed by the addition of
sensors in such a way as to provide semantics, understanding, etc.,
will at the very least require such extensive modification of the CR
scenario as to render it no longer a recognisably important component
of the resultant architecture.
-- 
Chris Malcolm    cam@uk.ac.ed.aifh          +44 (0)31 650 3085
Department of Artificial Intelligence,    Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205