From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!kuhub.cc.ukans.edu!husc-news.harvard.edu!zariski!zeleny Tue Jan 21 09:27:15 EST 1992
Article 2899 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2899 sci.philosophy.tech:1901
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!kuhub.cc.ukans.edu!husc-news.harvard.edu!zariski!zeleny
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons
Message-ID: <1992Jan19.164650.7804@husc3.harvard.edu>
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Date: 19 Jan 92 16:46:48 EST
References: <1992Jan10.004011.23299@bronze.ucs.indiana.edu> 
 <1992Jan14.004439.7502@husc3.harvard.edu> <1992Jan14.201839.28881@bronze.ucs.indiana.edu>
Organization: Dept. of Math, Harvard Univ.
Nntp-Posting-Host: zariski.harvard.edu
Lines: 183

In article <1992Jan14.201839.28881@bronze.ucs.indiana.edu> 
chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>In article <1992Jan14.004439.7502@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>>The issue in question is
>>whether the thesis of mental states' supervenience on computational states
>>is sufficient for ensuring the success of strong AI.  My counterexample
>>above demonstrates that this is not the case.

DC:
>The original issue in question is whether the lack of type identities
>(and in particular Putnam's arguments for that lack) implies the falsity
>of strong AI.  I argued that supervenience on computational states (which
>Putnam concedes is compatible with his position) implies the truth of
>strong AI.  You pointed out that there is a reading of supervenience on
>computational states upon which this is not the case.  I hold that this
>sense is an uninteresting sense, as if two objects are identical in *all*
>their computational states then they will be more or less identical in
>all their physical states too (and so much for multiple realizability).

Excuse me?  You may hold whatever is within your grasp; however, in order
for your holdings to be of any relevance to this conversation, they must be
accompanied by a convincing argument.  So far, you have failed to convince
either Jeff Dalton or me.  Incidentally, what do you mean by `more or less
identical'?  Are you perchance suggesting that physical states are "sorta
supervenient" on computational states, i.e. that the universe is, _au
fond_, nothing but a big computer?  For this is, in effect, a direct
consequence of your claims that, given correct implementation, program
structure formalizes physical causal structure.  In other words, what you
seem to be saying is that computability by a Turing machine is not only the
paradigm of physical realizability, as John Baez would have me believe, but
that the computational model uniquely determines physical reality.  Haven't
you got it backwards?

DC:
>A more reasonable reading of supervenience here would be that for every
>mental state M, there exists an associated computational state C, such
>that C cannot occur without M (this is Kim's "strong supervenience", which
>is the most common construal of supervenience in the recent literature.  It's
>equivalent to other definitions under the assumption of closure of properties
>under infinite conjunctions/disjunctions, which doesn't hold for computational
>states).  The real moral is that it's vague to talk of supervenience on
>computational states, as Putnam does, without further spelling things out.

Good grief!  Now you want an isomorphism between mental states and a subset
of computational states; but on the assumption of "sorta supervenience", we
get back to "sorta functionalism", which has been sorta refuted by Putnam.
No, the real moral of this story is that you shouldn't put your premisses
in your opponent's mouth.
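
For the record, here is a schematic rendering of the two readings being run
together (my notation, and only a sketch; fill in the modalities as you
please).  Reading (i), indiscernibility with respect to *all* computational
states:

  \forall x \forall y\, [\, \forall C\, (Cx \leftrightarrow Cy)
                            \rightarrow \forall M\, (Mx \leftrightarrow My) \,]

Reading (ii), the Kim-style strong supervenience you now offer:

  \forall M\, \exists C\, \Box \forall x\, (Cx \rightarrow Mx)

where C ranges over computational and M over mental properties.  My
counterexample was addressed to (i); the dispute is precisely over whether
(ii) is the reading that deserves the name.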

MZ:
>>Please note that, should the
>>above situation obtain, the possibility of AI is indeed ruled out, as in
>>the general case it would be impossible to control the mental states of
>>your machine by programming its computational states.

DC:
>Again, strong AI is not an epistemic doctrine.  

Yes it is: if your mind is to your brain as a program is to a computer,
but this program is intrinsically ineffable, then strong AI is doomed to
failure. 

DC:
>                                               In any case, I fail to
>see how lack of type identities implies the impossibility of programming.
>Putnam's argument is that a given mental state can be realized as any one
>of a vast disjunction of computational states.  So take any one of those
>computational states, and you've got the mental state.

Suppose that the set of these computational states is characterized by an
analytic property, and that they are realized by a set of physical states
that is dense in the directed graph space of physical causality.  Then you
can't avoid them, no matter how you program your machine.
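
To spell this out a little (my own gloss, with the obvious idealizations):
let P be the space of physical state-trajectories, with whatever topology
the causal structure induces, and let R \subseteq P be the set of
trajectories that pass through a realizer of one of the computational
states in the disjunction.  Density of R in P means

  \forall U \subseteq P\;(\, U \text{ open},\ U \neq \emptyset
                             \;\rightarrow\; U \cap R \neq \emptyset \,),

so any programming discipline that constrains the machine's trajectory
only up to such a nonempty open set cannot keep it out of R.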

MZ:
>>This is hardly a terminological difference, and you are altogether wrong.
>>The thesis of anomalous monism "denies that there can be strict laws
>>connecting the mental and the physical" (Davidson, p.212); in other words,
>>it's an even stronger claim than the one I presented above.  So the
>>regularity in question must be _lawlike_; and if this isn't an epistemic
>>criterion, I don't know what is.

DC:
>Davidson's position has been widely criticized on the grounds that any
>reasonable version of supervenience implies some nomic connection, if
>not a strong one (see e.g. Kim, "Psychophysical laws", in _Action and
>Events_, or Honderich, "Lawlike psychophysical connections and their
>problems", Inquiry 24:277-303, 1981).  The only way for Davidson to
>retain strong anomalism (and some have attributed this position to him
>indirectly, though he has not written on the topic since 1974), is to
>hold that the supervenience conditional "if P then M" should be
>interpreted as a universal material conditional without modal force.
>But this seems far too weak: presumably the counterfactual "if something
>were P, it would be M" is the real interest behind the supervenience claim
>(it's certainly what everybody who has written on the subject since Davidson
>has taken supervenience to come to).  And once one has this modal
>force, one has a nomic connection.

You keep saying that, and I keep explaining that you are wrong: a necessary
connection is a matter of ontology, and is necessary, but not sufficient,
for the existence of a nomological connection.  For instance, the necessary
connection may obtain between events of such complexity that it cannot be
characterized in a finite language.  This situation sure looks anomalous to
me.  Are you really prepared to maintain, as an incorrigible, irrefutable
article of faith, that the universe cannot have any regularities that we
would be intrinsically unable to characterize by finite means?
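
The point admits a crude counting argument (a sketch, granting the obvious
idealizations): a finite or countable alphabet yields only countably many
finite formulas,

  |\{ \varphi : \varphi \ \text{a finite formula} \}| = \aleph_0,

whereas nothing guarantees that the family of necessary connections among
physical events is countable; if it has the cardinality of the continuum,
all but countably many of them admit no finite characterization.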

DC:
>A more charitable interpretation of Davidson is to argue that there can
>be nomic psychophysical implications of the kind implied by supervenience,
>but no nomic reductions, e.g. of the form "if M, then P", and in particular
>no reductions of intentional attributions in the standard vocabulary (i.e.
>weak anomalism without strong anomalism, in the terminology used earlier).

Charitable to your position, but not to his.  In any case, as I said over
and over, in the context of this discussion, I don't give a flying fuck
about the personal views of Putnam, Searle, Davidson, and other luminaries;
if I use their names, you should interpret this use as shorthand reference
to identifiable arguments (not mere positions), which ought to be discussed
by our own means.

MZ:
>>To reiterate (please try to address my point this time around): note that
>>you are defining an isomorphism between causes and reasons, i.e. between
>>the physical structure of the system and the logical structure of the FSA.

DC:
>It's not clear to me that the FSA program has a logical structure.  I see
>it as an entirely syntactic object.  In particular I certainly
>don't interpret "S1 -> S2" as a logical conditional.  However, if you
>want to interpret it as such, it probably doesn't hurt, as long as you
>respect the rules for determining when a physical system is an
>implementation of the FSA program -- and here the only relevant properties
>of the program are its syntactic properties.

Better yet.  Logical structure, as you are well aware, is intensional with
respect to syntactical structure.  I was merely trying to give you a
gift of unearned determinateness.

MZ:
>>In other
>>words, the program itself is incapable of formalizing the causal structure
>>of the machine executing it, since the burden of determining this structure
>>is borne by whoever is ensuring the correctness of the implementation, and
>>since equally correct implementations may in practice result in different
>>causal structures.

DC:
>If something is implementing the FSA "S1->S2, S2->S1", then it has two
>states that cause each other, and so on for more complex FSA's.  This
>equivalence in causal structure is guaranteed by the definition of
>implementation (though the two systems may of course possess further causal
>structure that differs).  I don't see the relevance of your appeals to
>intensionality.

Correction: it will have two computational states, which may correspond to
any number of physical states (this is where intensionality comes in).
Unless, that is, you really think that the world is a computer, and that
you can discover its program while remaining within it.
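
For concreteness, here is a toy rendering (my own construction, in Python,
and nothing you are committed to) of the implementation relation for the
FSA "S1->S2, S2->S1": a labelling map sends many physical microstates to
each computational state, and the implementation is correct just in case
every physical transition projects onto an FSA transition.

# The FSA under discussion: two computational states that cause each other.
FSA = {"S1": "S2", "S2": "S1"}

# A hypothetical microphysics: six microstates with a deterministic
# successor relation (the names are illustrative, nothing more).
successor = {"p1": "p4", "p2": "p5", "p3": "p6",
             "p4": "p1", "p5": "p2", "p6": "p3"}

# The labelling: many physical microstates per computational state.
label = {"p1": "S1", "p2": "S1", "p3": "S1",
         "p4": "S2", "p5": "S2", "p6": "S2"}

def implements(fsa, successor, label):
    # True iff every physical transition projects onto an FSA transition.
    return all(fsa[label[p]] == label[q] for p, q in successor.items())

print(implements(FSA, successor, label))   # prints True

Nothing in the FSA itself fixes which microstates, or how many, receive the
label "S1"; that labelling is exactly where the intensionality I keep
pointing to resides.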

>-- 
>Dave Chalmers                            (dave@cogsci.indiana.edu)      
>Center for Research on Concepts and Cognition, Indiana University.
>"It is not the least charm of a theory that it is refutable."


`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`