From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!spool.mu.edu!agate!ucbvax!hplabs!hpda!hpcuhb!hpcupt3!svaughan Tue Jan 21 09:27:00 EST 1992
Article 2870 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!spool.mu.edu!agate!ucbvax!hplabs!hpda!hpcuhb!hpcupt3!svaughan
From: svaughan@hpcupt3.cup.hp.com (Sam H. Vaughan)
Newsgroups: comp.ai.philosophy
Subject: Re: Causes and Reasons (was re: Searle and the Chinese Room)
Message-ID: <45740002@hpcupt3.cup.hp.com>
Date: 16 Jan 92 22:21:21 GMT
Article-I.D.: hpcupt3.45740002
References: <1991Dec14.004745.6550@husc3.harvard.edu>
Organization: Hewlett Packard, Cupertino
Lines: 96

/ hpcupt3:comp.ai.philosophy / chalmers@bronze.ucs.indiana.edu (David Chalmers) / 12:18 pm  Jan 14, 1992 /
In article <1992Jan14.004439.7502@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

>The issue in question is
>whether the thesis of mental states' supervenience on computational states
>is sufficient for ensuring the success of strong AI.  My counterexample
>above demonstrates that this is not the case.

The original issue in question is whether the lack of type identities
(and in particular Putnam's arguments for that lack) implies the falsity
of strong AI.  I argued that supervenience on computational states (which
Putnam concedes is compatible with his position) implies the truth of
strong AI.  You pointed out that there is a reading of supervenience on
computational states upon which this is not the case.  I hold that this
sense is an uninteresting sense, as if two objects are identical in *all*
their computational states then they will be more or less identical in
all their physical states too (and so much for multiple realizability).
A more reasonable reading of supervenience here would be that for every
mental state M, there exists an associated computational state C, such
that C cannot occur without M (this is Kim's "strong supervenience", which
is the most common construal of supervenience in the recent literature.  It's
equivalent to other definitions under the assumption of closure of properties
under infinite conjunctions/disjunctions, which doesn't hold for computational
states).  The real moral is that it's vague to talk of supervenience on
computational states, as Putnam does, without further spelling things out.
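
The strong-supervenience reading above can be put in modal notation (my own formalization, for illustration; the post states it only in prose):

```latex
% Kim-style strong supervenience of the mental on the computational:
% for every mental state M there is some computational state C whose
% occurrence necessitates M.
\forall M \; \exists C \; \Box \, \forall x \, \bigl( C(x) \rightarrow M(x) \bigr)
```

Note the quantifier order: each M gets its own witnessing C, which is what blocks the closure-under-infinite-disjunction move mentioned above.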

>Please note that, should the
>above situation obtain, the possibility of AI is indeed ruled out, as in
>the general case it would be impossible to control the mental states of
>your machine by programming its computational states.

Again, strong AI is not an epistemic doctrine.  In any case, I fail to
see how lack of type identities implies the impossibility of programming.
Putnam's argument is that a given mental state can be realized as any one
of a vast disjunction of computational states.  So take any one of those
computational states, and you've got the mental state.

This is hardly a terminological difference, and you are altogether wrong.
>The thesis of anomalous monism "denies that there can be strict laws
>connecting the mental and the physical" (Davidson, p.212); in other words,
>it's an even stronger claim than the one I presented above.  So the
>regularity in question must be _lawlike_; and if this isn't an epistemic
>criterion, I don't know what is.

Davidson's position has been widely criticized on the grounds that any
reasonable version of supervenience implies some nomic connection, if
not a strong one (see e.g. Kim, "Psychophysical laws", in _Action and
Events_, or Honderich, "Lawlike psychophysical connections and their
problems", Inquiry 24:277-303, 1981).  The only way for Davidson to
retain strong anomalism (and some have attributed this position to him
indirectly, though he has not written on the topic since 1974) is to
hold that the supervenience conditional "if P then M" should be
interpreted as a universal material conditional without modal force.
But this seems far too weak: presumably the counterfactual "if something
were P, it would be M" is the real interest behind the supervenience claim
(it's certainly what everybody who has written on the subject since Davidson
has taken supervenience to come to).  And once one has this modal
force, one has a nomic connection.

A more charitable interpretation of Davidson is to argue that there can
be nomic psychophysical implications of the kind implied by supervenience,
but no nomic reductions, e.g. of the form "if M, then P", and in particular
no reductions of intentional attributions in the standard vocabulary (i.e.
weak anomalism without strong anomalism, in the terminology used earlier).

>To reiterate (please try to address my point this time around): note that
>you are defining an isomorphism between causes and reasons, i.e. between
>the physical structure of the system and the logical structure of the FSA.

It's not clear to me that the FSA program has a logical structure.  I see
it as an entirely syntactic object.  In particular I certainly
don't interpret "S1 -> S2" as a logical conditional.  However, if you
want to interpret it as such, it probably doesn't hurt, as long as you
respect the rules for determining when a physical system is an
implementation of the FSA program -- and here the only relevant properties
of the program are its syntactic properties.
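
The point that "S1 -> S2" is syntax rather than a logical conditional can be made concrete. A minimal sketch (my own illustration, not Chalmers' formalism): the FSA program is just a transition table, and running it is bare symbol lookup.

```python
# The FSA "program": a purely syntactic transition table.
# "S1 -> S2" is a lookup entry, not a logical conditional.
fsa = {"S1": "S2", "S2": "S1"}

def run(fsa, start, steps):
    """Repeatedly apply the transition table; no interpretation involved."""
    state = start
    trace = [state]
    for _ in range(steps):
        state = fsa[state]
        trace.append(state)
    return trace

print(run(fsa, "S1", 4))  # ['S1', 'S2', 'S1', 'S2', 'S1']
```

Nothing in the table carries truth-conditions; whether a physical system implements it is a separate question about how physical state-types track these tokens.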

>In other
>words, the program itself is incapable of formalizing the causal structure
>of the machine executing it, since the burden of determining this structure
>is borne by whoever is ensuring the correctness of the implementation, and
>since equally correct implementations may in practice result in different
>causal structures.

If something is implementing the FSA "S1->S2, S2->S1", then it has two
states that cause each other, and so on for more complex FSAs.  This
equivalence in causal structure is guaranteed by the definition of
implementation (though the two systems may of course possess further causal
structure that differs).  I don't see the relevance of your appeals to
intensionality.
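
The implementation relation being appealed to here can be sketched as follows (my construal, for illustration only): a physical system implements the FSA if there is a mapping from its physical state-types to FSA states such that every physical transition mirrors an FSA transition under that mapping.

```python
# Sketch of the implementation relation for the FSA "S1->S2, S2->S1".
fsa = {"S1": "S2", "S2": "S1"}

def implements(trace, mapping, fsa):
    """Check that each observed physical transition, translated through
    the state-type mapping, matches the FSA's transition table."""
    for now, nxt in zip(trace, trace[1:]):
        if fsa[mapping[now]] != mapping[nxt]:
            return False
    return True

# Hypothetical physical system: a voltage oscillating between two levels.
trace = ["hi", "lo", "hi", "lo"]
mapping = {"hi": "S1", "lo": "S2"}
print(implements(trace, mapping, fsa))  # True
```

Any system passing this check shares the FSA's transition structure by construction, which is the sense in which equivalence of causal structure is "guaranteed by the definition of implementation"; the check says nothing about whatever further causal structure the system has.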

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
----------