Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2396 sci.philosophy.tech:1610
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!ogicse!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons
Summary: what is an implementation?
Keywords: look up `nomological', `anomalous', and `type'
Message-ID: <1991Dec24.014716.6901@husc3.harvard.edu>
Date: 24 Dec 91 06:47:14 GMT
Article-I.D.: husc3.1991Dec24.014716.6901
References: <1991Dec23.210052.25960@bronze.ucs.indiana.edu> 
 <1991Dec23.185045.6898@husc3.harvard.edu> <1991Dec24.020441.8340@bronze.ucs.indiana.edu>
Organization: Dept. of Math, Harvard Univ.
Lines: 193
Nntp-Posting-Host: zariski.harvard.edu

In article <1991Dec24.020441.8340@bronze.ucs.indiana.edu> 
chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>In article <1991Dec23.185045.6898@husc3.harvard.edu> 
>zeleny@brauer.harvard.edu (Mikhail Zeleny) writes:

DC:
>>>Your last post on the subject demonstrated a sufficiently gross
>>>misunderstanding of the notions of supervenience and implementation
>>>that there was little point in continuing the discussion.

MZ:
>>I see my use of the term `supervenience' as wholly consistent with its
>>introduction in modern philosophical discourse by R.M.Hare, as well as its
>>use in the philosophy of mind by Donald Davidson; should you care to
>>substantiate your accusation, I would be happy to supply a more precise
>>reference to the sources.

DC:
>Well, it's sometimes difficult to tell the difference between a gross 
>misunderstanding of a term, and a correct understanding combined with
>wildly fallacious inferences.  I therefore concede that it is possible
>that you understand the meaning of "supervenience".  Either way,
>your suggestion that lack of type-type nomological regularities implies
>inability to circumscribe a supervenience base is just silly, as
>evidenced by the simple case of supervenience of mental states on
>brain states (there may be no type-type laws, but we can circumscribe
>a supervenience base without problems; just take the entire brain).

Thank you very little for your small concession; let's go back a bit.  You
claimed functionalism; when I suggested that Putnam, whose book you
included in your annotated bibliography, has given a pretty convincing
refutation thereof, you retreated, claiming that if Putnam's arguments
succeed, they show only that mental states cannot be type-identified with
functional states; however, all that the theses of strong AI call for is
token identity, or even mere supervenience.  Since Putnam concedes that
"mental states...are emergent from and may be supervenient on our
computational states" ("Representation and Reality", p. xiii), you conclude
that his argument is irrelevant to determining the truth of your views.

At that point I challenged you to explain how strong AI would be helped by
the weaker theses of emergence or supervenience.  You replied: "AI is not
in the business of providing computational descriptions under which *all*
mental states of a given type (beliefs, say) can be subsumed.  For all we
know, humans, Martians, and angels may have radically different functional
organizations (and this is all that Putnam's argument comes down to), but
that's fine: we'll model each of them separately."  I noted that Putnam's
argument has a far wider scope than you consider, denying *in principle*
the type-identity of functional and mental states, which is a necessary
condition for the sort of nomological monism you need to espouse in order
to take strong AI seriously.  You came back with: "All AI needs is the
claim that replication of functional organization guarantees that you will
replicate the associated mental states -- and this is precisely what
supervenience comes to."  

My reply was that, in order to be able to program your artificial creature,
what you also need is the ability to identify the type of mental state
associated with a given functional state.  It is evident that supervenience
alone doesn't guarantee this outcome, as the relevant relation itself may
very well be anomalous.  Note the significance of this point; so far you've
given no sign of having appreciated it.  So you continued: "Type identities
are utterly irrelevant to AI.  To bring this out further: it's been
accepted for years that there can be no type identity between mental state
types and physical state types (because of multiple realization); but this
makes not a whit of difference for a hypothetical synthetic neuroscientist
who's into building brains.  Get the brain-state right, and you'll get the
mental state right."  

Already your use of the definite description at the end presupposed a
nomological correspondence between the mental and the physical; so I
remarked that, assuming supervenience, if you get the brain-state right,
you'll get *some* mental state right.  However, without type-identity,
you can't have a nomological regularity between the two, and so would be
unable to program the latter; in other words, if the calculation of 7 + 5 =
12 is realized at the mental level, you won't succeed in programming it.
You persisted in your illicit assumption of nomological monism: "Assuming
supervenience: to "program" a calculation of 7+5=12, you simply find the
relevant states in the supervenience base of a given instance of that
calculation, and duplicate them", and indeed brought your misunderstanding
to the fore: "Anomalism is irrelevant here."

I replied then what I will now elaborate: in principle and by definition,
an anomalous connection gives you no rule-determinable regularity
(`nomos' is Greek for law or convention) between brain states and mental
states; and the lack of such a regularity prevents you from circumscribing
"the supervenience base" (i.e. a well-defined set of brain states) of *any*
given instance of a calculation.

Note that my claim, which grants your assumption of the supervenience of
mental states on brain states but denies the existence of type-type laws,
denies not only the possibility of identifying a correspondence between
token-states of brain activity and any given token- or type-state of mental
activity, but also the possibility of establishing such a correspondence
between any class of brain token-states (think of the meaning of `type')
and any given mental state.  "Just take the entire brain" all you want; the
point is that without a nomological connection you simply can't tell which
sets of "entire brain" states are responsible for a given state of mind.
Case closed; give it up.
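
To make the shape of this point concrete, here is a minimal toy sketch in
Python (every name in it, such as `mental_type_of', is my own invention for
illustration).  It shows what "circumscribing the supervenience base" of a
given mental state would require, namely an effective, rule-like mapping from
brain states to mental-state types; the sketch simply takes such a law as an
input, which is exactly what anomalism denies you.

from typing import Callable, Iterable, Set

# Toy labels standing in for brain states and mental-state types.
BrainState = str
MentalState = str

def supervenience_base(
    brain_states: Iterable[BrainState],
    mental_type_of: Callable[[BrainState], MentalState],
    target: MentalState,
) -> Set[BrainState]:
    """Collect every brain state that realizes the target mental state.

    The presupposition is the parameter `mental_type_of': an effective
    type-type law taking brain states to mental-state types.  Anomalism is
    the denial that any such law exists, so it is this filtering step, not
    the later duplication of states, that the recipe "find the relevant
    states and duplicate them" cannot discharge.
    """
    return {b for b in brain_states if mental_type_of(b) == target}

# Usage under the (contested) assumption that such a law is to hand:
toy_law = {"b1": "calculating 7+5=12",
           "b2": "recalling a name",
           "b3": "calculating 7+5=12"}.get
print(supervenience_base(["b1", "b2", "b3"], toy_law, "calculating 7+5=12"))
# -> {'b1', 'b3'}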

MZ:
>>If you wish to restrict it in the above way, that's fine, as long as you
>>don't impute a nonexistent causal structure to a piece of paper with marks
>>on it.

DC:
>As I've said about a zillion times: programs don't have causal structure,
>implementations of programs do.

And I've never denied that.

MZ:
>>Once again, the rightness of your causal structure is determined by whoever
>>determines the correctness of implementation.

DC:
>This is just irrelevant.  The origin of the implementation relation doesn't
>matter.  All that matters is that *if* the system is an implementation, then
>it has the right causal structure.

No: if the system is a *correct* implementation, then it has the right
causal structure.  Now define correctness in a non-question-begging way.

MZ:
>>The implementation of a program in a physical system *depends* on the
>>systematic semantic determination of the former, i.e. on the interpretation
>>of the syntax of the language in which it is written; it also depends on
>>the systematic pragmatic determination of the same, i.e. on relating its
>>illocutionary structure (procedure calls) to the physical processes within
>>the computer.  Both, taken together, constitute its implementation.  Both
>>require conscious agency, per my argument elsewhere.

DC:
>I'll resist the temptation to say much about the relation between semantics
>of programming languages and semantics of logical systems (insofar as
>there's an analogy to be made, it's between the stipulated semantics of a
>programming language and the stipulated semantics of the logical operators,
>not the semantics of terms),

I don't understand this comment.  Please elaborate.

DC:
>                             and even about the role of conscious agency
>(if a twin of my Sun workstation miraculously formed from the dust, it
>would still be implementing programs),

Sorry, but this is far too Putnamesque twin-earthish for my taste.
Incidentally, my understanding of a miracle involves the participation of a
conscious supernatural agency.  

DC:
>                                       as they're irrelevant to the main
>point.  Which, as ever, is: *if* a system implements a given program, *then*
>it has a certain causal structure.  That's a conditional.  How the
>antecedent comes to be satisfied is no concern of mine.

This is getting hopeless. State your definition of implementation.

Please keep in mind the claims you made in article
<31821@iuvax.cs.indiana.edu> (19 Dec 89), which you reposted two years
later, in article <1991Dec13.064817.13637@bronze.ucs.indiana.edu>:

DC:
>"Programs" do not think.
>Cognition is not "symbol-manipulation."
>The "hardware/software" distinction is unimportant for thinking about minds.
>
>However:
>
>Systems with an appropriate causal structure think.
>Programs are a way of formally specifying causal structures.
>Physical systems which implement a given program *have* that causal structure,
>physically.  (Not formally, physically.  Symbols were simply an intermediate
>device.)
>Physical systems which implement the appropriate program think.

I repeat: what is implementation?
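
For the record, here is one way the question might be cashed out; it is not an
answer you have offered, and all the names in it are mine.  Treat
implementation as a labelling of physical states under which the physical
transitions mirror the abstract machine's transitions.  The Python sketch
below only checks a given labelling; it is silent on where the labelling comes
from and on what makes one labelling "correct", which is precisely the point
at issue above.

from typing import Callable, Dict, Hashable, Iterable

State = Hashable

def implements(
    physical_states: Iterable[State],
    physical_step: Callable[[State], State],
    labelling: Dict[State, State],
    abstract_step: Callable[[State], State],
) -> bool:
    """State-transition notion of implementation, on a finite toy model.

    Under `labelling', the physical dynamics implement the abstract machine
    iff every physical transition is mirrored by the corresponding abstract
    transition: labelling[physical_step(p)] == abstract_step(labelling[p]).
    """
    return all(
        labelling[physical_step(p)] == abstract_step(labelling[p])
        for p in physical_states
    )

# Toy usage: four physical states cycling 0 -> 1 -> 2 -> 3 -> 0 implement a
# two-state toggle under the parity labelling.
physical = [0, 1, 2, 3]
step = lambda p: (p + 1) % 4
parity = {p: p % 2 for p in physical}
toggle = lambda a: 1 - a
print(implements(physical, step, parity, toggle))   # True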

>-- 
>Dave Chalmers                            (dave@cogsci.indiana.edu)      
>Center for Research on Concepts and Cognition, Indiana University.
>"It is not the least charm of a theory that it is refutable."

`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`


