Article 2156 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2156 sci.philosophy.tech:1443
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!yale!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons
Message-ID: <1991Dec16.082402.6631@husc3.harvard.edu>
Date: 16 Dec 91 13:23:59 GMT
References: <1991Dec15.201231.19710@bronze.ucs.indiana.edu> 
 <1991Dec16.002259.6621@husc3.harvard.edu> <1991Dec16.080242.27055@bronze.ucs.indiana.edu>
Organization: Dept. of Math, Harvard Univ.
Lines: 75
Nntp-Posting-Host: zariski.harvard.edu

In article <1991Dec16.080242.27055@bronze.ucs.indiana.edu> 
chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>In article <1991Dec16.002259.6621@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>>This is absurdly obvious, and you are absurdly wrong.  Putnam's argument
>>has a far wider scope, denying *in principle* the type-identity of
>>functional and mental states, which is a necessary condition for the sort
>>of nomological monism you need to espouse in order to take strong AI
>>seriously.

DC:
>Type identities are utterly irrelevant to AI.  To bring this out further:
>it's been accepted for years that there can be no type identity between
>mental state types and physical state types (because of multiple
>realization); but this makes not a whit of difference for a hypothetical
>synthetic neuroscientist who's into building brains.  Get the brain-state
>right, and you'll get the mental state right.

No: assuming supervenience, if you get the brain-state right, you'll
get *some* mental state right.  However, without type-identity, you
can't have a nomological regularity between the two, and so you would
be unable to program the latter.  In other words, if the calculation of
7 + 5 = 12 is realized at the mental level, you won't succeed in
programming it.
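
To fix ideas, here is a minimal sketch of the multiple-realization
premise in play, in Python (the two adders below are illustrative
assumptions, not anyone's published formalism): both realize the
functional type "adding 7 and 5", yet their intermediate states share
no physical type.

    def adder_binary(a, b):
        # Realizer 1: carry propagation over bit patterns.
        while b:
            carry = a & b
            a = a ^ b
            b = carry << 1
        return a

    def adder_unary(a, b):
        # Realizer 2: tally counting, with an entirely
        # different sequence of intermediate states.
        total = 0
        for _ in range(a + b):
            total += 1
        return total

    # Functional type-identity: both realize 7 + 5 = 12 ...
    assert adder_binary(7, 5) == adder_unary(7, 5) == 12
    # ... but no physical type-identity: no one physical predicate
    # picks out "computing 7 + 5" across the two realizers.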

MZ:
>>Suppose that you have managed to formalize a certain
>>causal structure in a program; this program then has the property that any
>>*correct* interpretation thereof will possess that causal structure;
>>however the burden of ensuring the correctness of that causal structure is
>>borne by the agent performing the interpretation of the program.

DC:
>So?  I'm simply taking it that there's a relation of implementation
>that exists between programs and physical systems.  It's a determinate
>matter whether a given system implements a given program.  If it does,
>then it gets the causal structure right.  How it comes to implement that
>program is no concern of mine.  Maybe the implementation comes about
>through the action of an intelligent agent, maybe through the role
>of a correctly-functioning machine-language interpreter, and maybe
>the system arose fully-blown from the dust, and just happens to
>implement the program.  It doesn't matter at all.

Implementation is indeterminate in every meaningful sense of the term.  A
paper shredder can be said to implement a program by accepting a listing
thereof; would it thereby possess the requisite causal structure?  When a
cook follows a recipe, how do you determine his competence in so doing?
Besides, you are missing the main point: a Turing machine is bereft of
causal structure, possessing solely a logical one; and the causal
structure of its physical embodiment in no way supervenes on that
logical structure.  This is a straightforward example of intensionality:
physical necessity is more finely grained than the logical sort, and
hence cannot be determined from it.  Thus physical structure is not
supervenient on logical structure, and a program specifying the latter
will fail to specify the former.
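
For concreteness, here is one way the disputed implementation relation
is sometimes formalized, sketched in Python (the dict encoding and the
exhaustive search over state mappings are my illustrative assumptions,
not a rendering of either position): a system implements a finite-state
program just in case some mapping from system states to program states
preserves every transition.

    def implements(sys_trans, prog_trans, mapping):
        # sys_trans, prog_trans: dicts from a state to its successor;
        # mapping: system state -> program state.
        return all(mapping[t] == prog_trans[mapping[s]]
                   for s, t in sys_trans.items())

    # The program itself is a purely logical object: a table.
    prog = {'q0': 'q1', 'q1': 'q0'}

    # A two-state clock tracks it under an obvious mapping ...
    clock = {'tick': 'tock', 'tock': 'tick'}
    assert implements(clock, prog, {'tick': 'q0', 'tock': 'q1'})

    # ... while a shredder fed a program listing admits no such
    # mapping at all: accepting a listing is not implementing.
    shredder = {'intact': 'shredded', 'shredded': 'shredded'}
    assert not any(implements(shredder, prog,
                              {'intact': a, 'shredded': b})
                   for a in prog for b in prog)

Note what the sketch leaves open: the table fixes only which
transitions must be mirrored, not by what causal means the mirroring
is secured; that gap is precisely the point in dispute.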

>-- 
>Dave Chalmers                            (dave@cogsci.indiana.edu)      
>Center for Research on Concepts and Cognition, Indiana University.
>"It is not the least charm of a theory that it is refutable."


`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: What is good?  What is ugly?                             Harvard   :
: What is great, strong, weak...                           doesn't   :
: Don't know!  Don't know!                                  think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`