Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2123 sci.philosophy.tech:1409
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons (was re: Searle and the Chinese Room)
Message-ID: <1991Dec14.181000.3907@bronze.ucs.indiana.edu>
Date: 14 Dec 91 18:10:00 GMT
References: <1991Dec13.044040.20059@psych.toronto.edu> <1991Dec13.064817.13637@bronze.ucs.indiana.edu> <1991Dec14.004745.6550@husc3.harvard.edu>
Organization: Indiana University
Lines: 53

In article <1991Dec14.004745.6550@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

>Computers which implement a theorem-proving program *have* the relation of
>logical consequence physically.  This is nonsensical because of the last
>adjective; yet even should you dispense with the claim of physical
>embodiment, your computer is not going to have the relation of logical
>consequence in *any* sense of "have", pace Gödel.

I couldn't care less about logical consequence in this context.  I'm talking
about real causation: in particular, the real physical causation going on
within any implementation of a given computer program, causation whose
abstract structure (at a certain level) is shared by all
implementations of that program.
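
For concreteness, here's a minimal sketch (my illustration, not anything
from the thread) of what "shared abstract structure" comes to.  The
transition table of a trivial parity-checking program is exactly what every
implementation, whether in silicon, relays, or neurons, has in common:
physical states map onto the abstract states, and physical state-transitions
mirror the table.

TRANSITIONS = {            # (abstract state, input bit) -> next state
    ("even", 0): "even",
    ("even", 1): "odd",
    ("odd",  0): "odd",
    ("odd",  1): "even",
}

def run(bits, state="even"):
    """Trace the program's abstract causal structure over a bit sequence."""
    for bit in bits:
        state = TRANSITIONS[(state, bit)]
    return state

# Any two physical systems whose dynamics realize this table implement
# the same program, whatever their physics:
assert run([1, 1, 0, 1]) == "odd"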

>On the contrary, to see the arguments against functionalism advanced by its
>inventor, Hilary Putnam, check out his book "Representation and Reality".
>Of course, if one is to believe your bibliography, you must have read it
>and found the arguments unworthy of your attention.

Irrelevant to the present discussion.  If Putnam's arguments
succeed, they show only that mental states cannot be type-identified
with functional states.  But a token identity, or even mere
supervenience, is all that AI requires.  Even Putnam concedes that

  "mental states...are emergent from and may be supervenient
   on our computational states." (p. xiii)

>"To confuse a reason of knowledge, lying within
>a given concept, with a cause acting from without, is always his
>[Spinoza's] artifice, which he has learned from Descartes." ("On the
>Fourfold Root of the Principle of Sufficient Reason", 8.)

Computation may or may not provide a good formalization of "reasons".
However, for present purposes I'm only concerned with physical causation.
When I construct a computational model of a neural network, for instance,
to look at it as a formalization of relations of logical consequence
between neurons would be patently absurd.  But to look at it as a
formalization of the causal organization of those neurons makes much
more sense.
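
To illustrate (a sketch of my own, with an assumed toy update rule, not
anything specified in the thread): in such a model, the weight matrix
records which neurons causally influence which, and the update rule
formalizes that causal organization, not any relation of logical
consequence.

import numpy as np

rng = np.random.default_rng(0)
n = 5                           # a toy network of five neurons
W = rng.normal(size=(n, n))     # W[i, j]: causal influence of neuron j on neuron i
state = rng.normal(size=n)      # current activation of each neuron

def step(state, W):
    """Each neuron's next activation is caused by its weighted inputs."""
    return np.tanh(W @ state)

for _ in range(10):             # run the causal dynamics forward
    state = step(state, W)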

>you might have learned the distinction between active and
>passive powers, and asked yourself just what sort of mechanism would endow
>your Turing machine with the former sort of causal powers.

I'm pleased to hear someone take this line.  I've often accused various
AI opponents of relying on the distinction between active and passive
causation to do their work for them, but no-one's owned up to it
until now.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


