From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!linac!att!news.cs.indiana.edu!bronze!chalmers Mon Dec 16 11:02:06 EST 1991
Article 2141 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2141 sci.philosophy.tech:1425
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!linac!att!news.cs.indiana.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons
Message-ID: <1991Dec15.201231.19710@bronze.ucs.indiana.edu>
Date: 15 Dec 91 20:12:31 GMT
References: <1991Dec14.004745.6550@husc3.harvard.edu> <1991Dec14.181000.3907@bronze.ucs.indiana.edu> <1991Dec15.120726.6592@husc3.harvard.edu>
Organization: Indiana University
Lines: 51

In article <1991Dec15.120726.6592@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:
>In article <1991Dec14.181000.3907@bronze.ucs.indiana.edu> 
>chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>>Irrelevant to the present discussion.  If Putnam's arguments
>>succeed, they show only that mental states cannot be type-identified
>>with functional states.  But a token identity, or even mere
>>supervenience, is all that AI requires.  Even Putnam concedes that
>>
>>  "mental states...are emergent from and may be supervenient
>>   on our computational states." (p. xiii)
>
>You claimed functionalism; feel free to explain how your views would
>be helped by the weaker theses of emergence or supervenience.

This should be absurdly obvious, but I'll spell it out.  AI is not in
the business of providing computational descriptions under which *all*
mental states of a given type (beliefs, say) can be subsumed.  For all
we know, humans, Martians, and angels may have radically different
functional organizations (and this is all that Putnam's argument comes
down to), but that's fine: we'll model each of them separately.

All AI needs is the claim that replication of functional organization
guarantees that you will replicate the associated mental states --
and this is precisely what supervenience comes to.
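
Schematically (my own formulation, not Putnam's; read F(x) as the total
functional organization of a system x, and M(x) as its mental states):

  \forall x \, \forall y \; [\; F(x) = F(y) \;\rightarrow\; M(x) = M(y) \;]

No mental difference without a functional difference; nothing stronger
than that is required.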

>The notion of formalization of physical causal structure by another
>physical structure sounds patently absurd to me.  Formalization is a purely
>linguistic procedure; you might be able to formalize the laws of physical
>causation, but in so doing you would merely express them through the
>syntactical (read `proof-theoretic') representation of the relation of
>logical consequence.  Regardless of all your obfuscatory attempts, you make
>it clear that you purport to reduce a physical process to a syntactic
>representation thereof.  Searle still stands unaffected.

To be more precise, we formalize causal structure in a program; this
program then has the property that any implementation of it will
possess that causal structure.  By analogy: we formalize properties
of cakes in a recipe; then any implementation of that recipe will
possess the relevant properties.  If you don't like talk of
"formalization" here, that's fine: substitute "specification" instead.
The substantive point is unaffected.

In any case, there's certainly no *reduction* of causation to a
syntactic structure.  To claim that would be to miss the vital role
played by implementation.
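
A toy sketch, if it helps (the two-state "system" and all the names
here are my own invention, purely for illustration), in a few lines of
Python:

  # The specification: a transition table for a trivial two-state
  # system.  By itself it is pure syntax; it causes nothing, it only
  # describes a causal structure.
  SPEC = {
      ("idle", "ping"): "active",
      ("active", "ping"): "idle",
  }

  class Implementation:
      """One realization of the specified structure.

      Any system whose transitions conform to SPEC -- silicon, neurons,
      beer cans and string -- counts as an implementation, and thereby
      possesses the specified causal organization.
      """
      def __init__(self):
          self.state = "idle"

      def receive(self, signal):
          # The causal work happens here, in the running implementation,
          # not in the table that merely describes it.
          self.state = SPEC[(self.state, signal)]

  system = Implementation()
  system.receive("ping")
  print(system.state)        # -> active

The table is the "recipe"; the running object is the cake.  Nothing in
this picture reduces causation to syntax.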

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
