Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2152 sci.philosophy.tech:1434
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!ogicse!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons
Summary: causality cannot be finitely specified
Keywords: reasons and causes
Message-ID: <1991Dec16.002259.6621@husc3.harvard.edu>
Date: 16 Dec 91 05:22:56 GMT
Article-I.D.: husc3.1991Dec16.002259.6621
References: <1991Dec14.181000.3907@bronze.ucs.indiana.edu> 
 <1991Dec15.120726.6592@husc3.harvard.edu> <1991Dec15.201231.19710@bronze.ucs.indiana.edu>
Organization: Dept. of Math, Harvard Univ.
Lines: 122
Nntp-Posting-Host: zariski.harvard.edu

In article <1991Dec15.201231.19710@bronze.ucs.indiana.edu> 
chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>In article <1991Dec15.120726.6592@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

>>In article <1991Dec14.181000.3907@bronze.ucs.indiana.edu> 
>>chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

DC:
>>>Irrelevant to the present discussion.  If Putnam's arguments
>>>succeed, they show only that mental states cannot be type-identified
>>>with functional states.  But a token identity, or even mere
>>>supervenience, is all that AI requires.  Even Putnam concedes that
>>>
>>>  "mental states...are emergent from and may be supervenient
>>>   on our computational states." (p. xiii)

MZ:
>>You claimed functionalism; feel free to explain how your views would
>>be helped by the weaker theses of emergence or supervenience.

DC:
>This should be absurdly obvious, but I'll spell it out.  AI is not in
>the business of providing computational descriptions under which *all*
>mental states of a given type (beliefs, say) can be subsumed.  For all
>we know, humans, Martians, and angels may have radically different
>functional organizations (and this is all that Putnam's argument comes
>down to), but that's fine: we'll model each of them separately.

This is absurdly obvious, and you are absurdly wrong.  Putnam's argument
has a far wider scope, denying *in principle* the type-identity of
functional and mental states, which is a necessary condition for the sort
of nomological monism you need to espouse in order to take strong AI
seriously.

DC:
>All AI needs is the claim that replication of functional organization
>guarantees that you will replicate the associated mental states --
>and this is precisely what supervenience comes to.

Not so; in order to be able to program your creature, what you also need is
the ability to identify the type of mental state associated with a given
functional state.  Supervenience alone doesn't guarantee this outcome, as
the relevant relation itself may very well be anomalous.

MZ:
>>The notion of formalization of physical causal structure by another
>>physical structure sounds patently absurd to me.  Formalization is a purely
>>linguistic procedure; you might be able to formalize the laws of physical
>>causation, but in so doing you would merely express them through the
>>syntactical (read `proof-theoretic') representation of the relation of
>>logical consequence.  Regardless of all your obfuscatory attempts, you make
>>it clear that you purport to reduce a physical process to a syntactic
>>representation thereof.  Searle still stands unaffected.

DC:
>To be more precise, we formalize causal structure in a program; this
>program then has the property that any implementation of it will
>possess that causal structure.  By analogy: we formalize properties
>of cakes in a recipe; then any implementation of that recipe will
>possess the relevant properties.  If you don't like talk of
>"formalization" here, that's fine: substitute "specification" instead.
>The substantive point is unaffected.

You know, Dave, during two months of conducting this -- let's not mince
words -- flame war, one of the greatest pleasures I've experienced has
been encountering a great variety of interlocutors.  There have been
people like Jeff Dalton and David Gudeman, who have no difficulty
understanding what I am talking about; there have been people like John
McCarthy, who appreciate the logical and mathematical issues involved,
and are consequently prepared to stand corrected on some points in
spite of their vested interest in the AI industry; and then there's
you.  Either you really have no clue, or your mind has become too
ossified to let you see what's going on.  Still, I'll make one last
attempt, more for the audience's benefit.

Suppose that you have managed to formalize a certain causal structure
in a program; this program then has the property that any *correct*
interpretation thereof will possess that causal structure; however, the
burden of ensuring the correctness of that interpretation is borne by
the agent who performs it.  (Do study some logic; the issue of
semantical indeterminacy should be understandable to anyone with an
elementary grasp of, e.g., the Löwenheim-Skolem theorem.)  In other
words, if an arbitrary implementation of your program should possess
the requisite causal structure, it would do so solely in virtue of the
correctness of the implementation, and not because of any property of
the program itself.  Given the right interpretation, any program can be
made to mean anything at all.  To exploit your analogy: when we specify
the properties of cakes in a recipe, there is no way to ensure that an
implementation of that recipe will possess the relevant properties
independently of the skill of the cook.  You are wrong.  Give it up.
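
To make this as concrete as possible, here is a toy sketch of my own
(plain Python; the function `combine' and the stroke tokens are
invented solely for the illustration): one and the same syntactic
procedure admits more than one consistent interpretation, so the
program text by itself settles nothing about what its tokens denote.

# A minimal sketch, nobody's actual program: the same purely syntactic
# rule admits more than one consistent reading; which reading is
# "meant" is supplied by the interpreter, not by the program.

def combine(x, y):
    # Syntactic rule: juxtapose the two token strings.
    return x + y

three, two = "|||", "||"

# Interpretation 1: the tokens are unary numerals, and `combine'
# denotes addition over the natural numbers.
print(len(combine(three, two)))   # 5 -- read as 3 + 2

# Interpretation 2: the very same tokens are strokes in a drawing, and
# `combine' denotes placing one picture after another.
print(combine(three, two))        # "|||||" -- read as a picture

Nothing in the procedure favours the first reading over the second;
the choice is made by whoever interprets the symbols.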

DC:
>In any case, there's certainly no *reduction* of causation to a
>syntactic structure.  To claim that would be to miss the vital role
>played by implementation.

The burden of circumscribing causal relations is borne by the semantics of
your program specification; since every operation of a Turing machine
reduces to a purely syntactic manipulation, the operation of the said
Turing machine cannot determine the semantical properties of the program.
Consequently, any understanding that may have produced the said program
cannot, in principle, be captured by it, though it may very well be
interpreted by a human agent perusing it.  This is a natural consequence of
Searle's position, which is incontrovertible as stated.  Any failure to
understand this point lies with the addressee.
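
In case `purely syntactic manipulation' sounds obscure, here is one
more sketch of mine (again Python, with names invented for the
occasion): a single Turing-machine step is nothing but a lookup in a
transition table over uninterpreted symbols, and whether those symbols
denote bits, truth-values, or anything else is settled by whoever reads
the tape, not by the machine.

# A minimal sketch of one Turing-machine step: a lookup in a transition
# table over uninterpreted symbols.  The machine shuffles tokens; it
# assigns them no meaning.

def tm_step(state, tape, head, table):
    symbol = tape[head]
    new_state, new_symbol, move = table[(state, symbol)]
    tape[head] = new_symbol
    head += 1 if move == "R" else -1
    return new_state, tape, head

# A toy table that rewrites every scanned "0" as "1" while moving right.
table = {("scan", "0"): ("scan", "1", "R"),
         ("scan", "1"): ("scan", "1", "R")}

state, tape, head = "scan", list("0010"), 0
for _ in range(3):
    state, tape, head = tm_step(state, tape, head, table)
print("".join(tape))   # "1110" -- bits?  truth-values?  The table is silent.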

>-- 
>Dave Chalmers                            (dave@cogsci.indiana.edu)      
>Center for Research on Concepts and Cognition, Indiana University.
>"It is not the least charm of a theory that it is refutable."

`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: What is good?  What is ugly?                             Harvard   :
: What is great, strong, weak...                           doesn't   :
: Don't know! Don't know!                                   think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`


