Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2155 sci.philosophy.tech:1441
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Causes and Reasons
Message-ID: <1991Dec16.080242.27055@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1991Dec15.120726.6592@husc3.harvard.edu> <1991Dec15.201231.19710@bronze.ucs.indiana.edu> <1991Dec16.002259.6621@husc3.harvard.edu>
Date: Mon, 16 Dec 91 08:02:42 GMT
Lines: 35

In article <1991Dec16.002259.6621@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

>This is absurdly obvious, and you are absurdly wrong.  Putnam's argument
>has a far wider scope, denying *in principle* the type-identity of
>functional and mental states, which is a necessary condition for the sort
>of nomological monism you need to espouse in order to take strong AI
>seriously.

Type identities are utterly irrelevant to AI.  To bring this out further:
it's been accepted for years that there can be no type identity between
mental state types and physical state types (because of multiple
realization); but this makes not a whit of difference for a hypothetical
synthetic neuroscientist who's into building brains.  Get the brain-state
right, and you'll get the mental state right.
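
To see the shape of the point, here is a toy sketch in Python (my
illustration, with made-up class names; nothing below is from Putnam
or from this thread).  Two physically unalike realizers share one
abstract state type, ON/OFF:

  class VoltageToggle:
      """Realizes ON/OFF as a voltage level on a wire."""
      def __init__(self):
          self.volts = 0.0                    # start OFF
      def flip(self):
          self.volts = 5.0 if self.volts < 2.5 else 0.0
      def abstract_state(self):
          return "ON" if self.volts >= 2.5 else "OFF"

  class ValveToggle:
      """Realizes ON/OFF as water flow through a valve."""
      def __init__(self):
          self.flow_lpm = 0.0                 # start OFF
      def flip(self):
          self.flow_lpm = 12.0 if self.flow_lpm == 0 else 0.0
      def abstract_state(self):
          return "ON" if self.flow_lpm > 0 else "OFF"

  for toggle in (VoltageToggle(), ValveToggle()):
      toggle.flip()
      print(toggle.abstract_state())          # "ON" both times

No physical predicate is common to "volts >= 2.5" and "flow > 0", so
there is no physical type identical to the type ON; but fix either
realizer's physical state and you have thereby fixed its abstract
state.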

>Suppose that you have managed to formalize a certain
>causal structure in a program; this program then has the property that any
>*correct* interpretation thereof will possess that causal structure;
>however the burden of ensuring the correctness of that causal structure is
>borne by the agent performing the interpretation of the program.

So?  I'm simply taking it that there's a relation of implementation
that exists between programs and physical systems.  It's a determinate
matter whether a given system implements a given program.  If it does,
then it gets the causal structure right.  How it comes to implement that
program is no concern of mine.  Maybe the implementation comes about
through the action of an intelligent agent, maybe through the operation
of a correctly-functioning machine-language interpreter, and maybe
the system arose fully-blown from the dust, and just happens to
implement the program.  It doesn't matter at all.
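
For concreteness, here's a deliberately crude Python sketch of that
implementation relation (the names and setup are made up for
illustration; a full account would add counterfactual-supporting
causal conditions that this brute-force check ignores).  A physical
system, modeled as a set of states plus a step function, implements a
program, modeled the same way, just in case some mapping from
physical states onto program states commutes with the dynamics:

  from itertools import product

  def implements(phys_states, phys_step, prog_states, prog_step):
      """True iff some onto map f from physical states to program
      states satisfies f(phys_step(s)) == prog_step(f(s)) for all s."""
      for image in product(prog_states, repeat=len(phys_states)):
          f = dict(zip(phys_states, image))
          if set(f.values()) != set(prog_states):
              continue          # every program state must be realized
          if all(f[phys_step(s)] == prog_step(f[s]) for s in phys_states):
              return True
      return False

  # A four-state physical cycle implements a two-state parity program,
  # whatever the four states happen to be made of:
  print(implements([0, 1, 2, 3], lambda s: (s + 1) % 4,
                   ["even", "odd"],
                   lambda q: "odd" if q == "even" else "even"))
  # prints True (f maps {0, 2} to "even" and {1, 3} to "odd")

The check never asks how the physical transitions come about; agent,
interpreter, or dust, the verdict is the same, and it is determinate
either way.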

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
