From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!kuhub.cc.ukans.edu!husc-news.harvard.edu!zariski!zeleny Mon Dec 16 11:02:03 EST 1991
Article 2136 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2136 sci.philosophy.tech:1420
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!kuhub.cc.ukans.edu!husc-news.harvard.edu!zariski!zeleny
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons
Message-ID: <1991Dec15.120726.6592@husc3.harvard.edu>
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Date: 15 Dec 91 12:07:22 EST
References: <1991Dec13.064817.13637@bronze.ucs.indiana.edu> 
 <1991Dec14.004745.6550@husc3.harvard.edu> <1991Dec14.181000.3907@bronze.ucs.indiana.edu>
Organization: Dada
Keywords: provability vs. logical consequence vs. physical causation 
Summary: a formalization of causation has no causal powers
Nntp-Posting-Host: zariski.harvard.edu
Lines: 104

In article <1991Dec14.181000.3907@bronze.ucs.indiana.edu> 
chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>In article <1991Dec14.004745.6550@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>>Computers which implement a theorem-proving program *have* the relation of
>>logical consequence physically.  This is nonsensical because of the last
>>adjective; yet even should you dispense with the claim of physical
>>embodiment, your computer is not going to have the relation of logical
>>consequence in *any* sense of "have", pace Gödel.

DC:
>I couldn't care less about logical consequence in this context.  I'm talking
>about real causation.  In particular the real physical causation that
>is going on within any implementation of a given computer program, whose
>abstract structure (at a certain level) is shared between all
>implementations of that program.

The moment you ascend to the abstract structure in question, insofar as you
choose to ignore the physical features of each implementation, you leave
the physical realm.  Is the difference between logical and physical
structure so hard to understand?

MZ
>>On the contrary, to see the arguments against functionalism advanced by its
>>inventor, Hilary Putnam, check out his book "Representation and Reality".
>>Of course, if one is to believe your bibliography, you must have read it
>>and found the arguments unworthy of your attention.

DC:
>Irrelevant to the present discussion.  If Putnam's arguments
>succeed, they show only that mental states cannot be type-identified
>with functional states.  But a token identity, or even mere
>supervenience, is all that AI requires.  Even Putnam concedes that
>
>  "mental states...are emergent from and may be supervenient
>   on our computational states." (p. xiii)

You claimed functionalism; feel free to explain how your views would be
helped by the weaker theses of emergence or supervenience.

MZ:
>>"To confuse a reason of knowledge, lying within
>>a given concept, with a cause acting from without, is always his
>>[Spinoza's] artifice, which he has learned from Descartes." ("On the
>>Fourfold Root of the Principle of Sufficient Reason", 8.)

DC:
>Computation may or may not provide a good formalization of "reasons".
>However, for present purposes I'm only concerned with physical causation.
>When I construct a computational model of a neural network, for instance,
>to look at it as a formalization of relations of logical consequence
>between neurons would be patently absurd.  But to look at it as a
>formalization of the causal organization of those neurons makes much
>more sense.

The notion of formalization of physical causal structure by another
physical structure sounds patently absurd to me.  Formalization is a purely
linguistic procedure; you might be able to formalize the laws of physical
causation, but in so doing you would merely express them through the
syntactical (read `proof-theoretic') representation of the relation of
logical consequence.  Regardless of all your obfuscatory attempts, you make
it clear that you purport to reduce a physical process to a syntactic
representation thereof.  Searle still stands unaffected.
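
The distinction being traded on here — provability versus logical
consequence, as per the Keywords line — can be stated in standard notation
(this formulation is the editor's gloss, not from the original post):

```latex
% Syntactic provability: there exists a formal derivation of \varphi
% from \Gamma within a fixed proof system.
\Gamma \vdash \varphi
% Semantic (logical) consequence: every model of \Gamma is a model
% of \varphi.
\Gamma \models \varphi
% G\"odel's completeness theorem identifies the two for first-order
% logic:
\Gamma \vdash \varphi \quad\Longleftrightarrow\quad \Gamma \models \varphi
```

A theorem prover manipulates derivations, i.e. the left-hand, syntactic
relation; whether that syntactic activity "has" the semantic relation, let
alone any causal powers, is exactly what is in dispute above.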

MZ:
>>you might have learned the distinction between active and
>>passive powers, and asked yourself just what sort of mechanism would endow
>>your Turing machine with the former sort of causal powers.

DC:
>I'm pleased to hear someone take this line.  I've often accused various
>AI opponents of relying on the distinction between active and passive
>causation to do their work for them, but no-one's owned up to it
>until now.

Once again your bad faith is showing.  I am not "various AI opponents"; if
you want to address a generic interlocutor, feel free to imitate the
practice of Demosthenes.  If you have a problem with the above distinction,
you deny yourself the use of the concept of a rational agent, and hence
that of a person.  Keep that in mind next time you see the "virtual person"
arguments of Drew McDermott.

>-- 
>Dave Chalmers                            (dave@cogsci.indiana.edu)      
>Center for Research on Concepts and Cognition, Indiana University.
>"It is not the least charm of a theory that it is refutable."

`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: What is good?  What is ugly?                             Harvard   :
: What is great, strong, weak...                           doesn't   :
: Don't know!  Don't know!                                  think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
