From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!mcsun!uknet!edcastle!cam Thu Jan 16 17:19:31 EST 1992
Article 2625 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2625 sci.philosophy.tech:1798
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!mcsun!uknet!edcastle!cam
From: cam@castle.ed.ac.uk (Chris Malcolm)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons
Message-ID: <16386@castle.ed.ac.uk>
Date: 10 Jan 92 14:56:46 GMT
References: <1991Dec24.014716.6901@husc3.harvard.edu> <5918@skye.ed.ac.uk> <1992Jan10.011118.26218@bronze.ucs.indiana.edu>
Organization: Edinburgh University
Lines: 63

In article <1992Jan9.181611.834@oracorp.com> daryl@oracorp.com writes:
>Jeff Dalton writes:

>> If Searle is right that without sensory input there is no
>> understanding in computers by virtue of their running the right
>> program, why would adding sensors cause understanding to appear?

>If Searle claims that what is missing in a syntactic simulation of a
>mind is that there is no causal connection between the words being
>manipulated and the real-world objects to which the words refer, then
>the use of sensors changes things significantly. Sensors, together
>with manipulators produce causal relations between the syntactic
>processing inside the machine and what is going on in the real world:
>changes in the world show up as changes in the internal states of the
>machine, and changes in the machine produce changes in the world
>(through the manipulators).

In general I agree, but note that it is almost certainly inadequate
simply to plug some sensors through a signal->symbol translation
box and pipe the output symbols into the creature's world-model updating
machinery. That kind of causal connection gives you Searle's Chinese
Room, with nobody at home but the janitor. But that does not mean that
(more complex) causal machinery cannot do the job.  Harnad has argued
this eloquently on the net in past years (a short summary can be found
in his CR paper in the first issue of JETAI). A more complex kind of
causal connection, with significantly different properties, is given by
a goal-seeking servomechanism. Suppose we then generalise feedback
servomechanisms (e.g. Albus's generalised servos) and pile them up
into a hierarchy in which the set points of lower systems are set by
higher systems (e.g. William Powers' perceptual control theory), the
whole hierarchical mechanism operating through layers of increasing
"specious presents". Then the kind of relationship existing between
"signal" and "symbol" becomes complex enough to defy analysis in our
current poor state of understanding of these things.  Nevertheless,
although it is so gross a simplification as to be seriously misleading
to refer to this kind of relationship between signal and symbol as a
"causal connection" or a "causal chain", it is a connection entirely
mediated by a physical mechanism (counting information-processing
machinery as physical mechanism here).
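The hierarchical arrangement sketched above -- lower servos whose set
points are supplied by higher ones -- can be illustrated in a few lines
of Python. This is only a toy sketch of the general idea, not anything
from Albus or Powers; the function names, gains, and the simple
position/velocity environment are all invented for illustration:

```python
def servo_step(perception, reference, gain):
    """One generalised servo step: output opposes the perceptual error
    between what is sensed (perception) and the set point (reference)."""
    return gain * (reference - perception)

def run_hierarchy(steps=200, dt=0.1, target=10.0):
    """Two-level perceptual control hierarchy acting on a toy world.

    The higher level perceives position and controls it, not by acting
    on the world directly, but by setting the *reference* (set point)
    of the lower level. The lower level perceives velocity and acts on
    the world to keep velocity at that reference.
    """
    position = 0.0   # world variable perceived by the higher loop
    velocity = 0.0   # world variable perceived by the lower loop
    for _ in range(steps):
        # Higher level: its output becomes the lower level's set point.
        velocity_ref = servo_step(position, target, gain=0.5)
        # Lower level: its output acts directly on the environment.
        accel = servo_step(velocity, velocity_ref, gain=2.0)
        velocity += accel * dt   # crude Euler integration of the world
        position += velocity * dt
    return position
```

With these (arbitrary) gains the hierarchy settles smoothly onto the
target: the higher loop in effect operates over a longer "specious
present" than the lower one, since it controls position only through
the lower loop's faster corrections of velocity. Even at two levels the
signal-to-symbol relationship is no longer a simple causal chain from
sensor reading to internal token.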

Just as a lot of the confusion in pro- and anti-AI arguments comes from
differing perceptions of how complex and subtle can be the behaviour of
programmed computers, so a great deal of the confusion in this business
of Searle's causal powers -- and whether causal connections can mediate
between a world and the meaning in the mind of a creature living in that
world -- arises from differing perceptions of just how complex and
subtle machinery can be. Most people still think of "machinery" as
ultimately no more than some kind of grandiose clockwork or vast
billiard table of fundamental particles. But in the last century Babbage
showed us how even clockwork could be used to host general-purpose
information processing, and Maxwell explained the basic principles of
negative-feedback servomechanisms. Each of these devices constitutes a
large qualitative change in the subtlety and complexity of behaviour of
which machinery is capable. We are still far from understanding what
they are capable of in massed concert.  But I think that somewhere in
some such deviously enfolded machinery lurks the secret of semantics.

You can no more _simply_ add sensors and get semantics than you can
_simply_ add wind to a pipe and get music. But it can be done.
-- 
Chris Malcolm    cam@uk.ac.ed.aifh          +44 (0)31 650 3085
Department of Artificial Intelligence,    Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205