From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!uwm.edu!ogicse!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny Thu Dec 26 23:58:34 EST 1991
Article 2405 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2405 sci.philosophy.tech:1627
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!uwm.edu!ogicse!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny
>From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons
Summary: ontology precedes epistemology
Message-ID: <1991Dec25.015221.6911@husc3.harvard.edu>
Date: 25 Dec 91 06:52:18 GMT
Article-I.D.: husc3.1991Dec25.015221.6911
References: <1991Dec24.014716.6901@husc3.harvard.edu> <1991Dec25.042628.18737@bronze.ucs.indiana.edu>
Organization: Dept. of Math, Harvard Univ.
Lines: 258
Nntp-Posting-Host: zariski.harvard.edu

In article <1991Dec25.042628.18737@bronze.ucs.indiana.edu> 
chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>In article <1991Dec24.014716.6901@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>>You claimed functionalism; when I suggested that Putnam, whose book you
>>included in your annotated bibliography, has given a pretty convincing
>>refutation thereof, you retreated, claiming that if Putnam's arguments
>>succeed, they show only that mental states cannot be type-identified with
>>functional states; however all that the theses of strong AI call for is
>>token identity, or even mere supervenience.

DC:
>No retreat.

We shall see.  At this point I would be most happy to elicit your
commitment to the heuristic search for the truth of the matter, rather than an
eristic confrontation.  If you could bring yourself "to be more pleased to
be refuted than to refute -- as much more as being rid oneself of the
greatest evil is better than ridding another of it" ("Gorgias" 458B), this
conversation would be much more productive for both of us.

DC:
>             My original phrasing was "this view (`functionalism', though
>this word is a dangerous one to sling around with its many meanings)".
>Putnam's argument, if it succeeds at all, only refutes one version
>of functionalism, namely the Functional State Identity Theory.  Other
>varieties of functionalism, e.g. supervenient functionalism and
>functionalism about psychological explanation, are unaffected.  Your
>identification of functionalism solely with FSIT indicates your naivete
>in this area.  From context, it should have been quite clear that I
>was invoking supervenient functionalism, not FSIT.

So it was.  Your imputing such an identification to me is an error of
interpretation on your part.

MZ:
>>I replied then what I will elaborate now: in principle and by definition,
>>an anomalous connection will give you no rule-determinable
>>regularity (`nomos' is Greek for law or convention) between brain-states
>>and mental states; and the lack of such regularity will prevent you from
>>circumscribing "the supervenience base" (i.e. a well-defined set of brain
>>states) of *any* given instance of a calculation.
>>
>>Note that my claim, in granting your assumption of supervenience of mental
>>states on brain states, but denying the existence of type-type laws, denies
>>the possibility that you may identify not only a correspondence between
>>token-states of brain activity and any given token- or type-state of mental
>>activity, but also the possibility of establishing such a correspondence
>>between any class of token-states of brain (think of the meaning of `type')
>>and any given mental state.  "Just take the entire brain" all you want; the
>>point is that without a nomological connection you simply can't tell what
>>sets of "entire brain" states are responsible for a given state of mind.

DC:
>OK, it looks like it's time for a tutorial.  First let's make the obvious
>distinction between "strong types" and "weak types".  A strong type is a
>class of mental or physical states subsumed under the usual categories
>from mental or physical vocabulary -- e.g. "belief that P" or "C-fibres
>firing".  A weak type is any class of mental or physical states at all,
>subject only to the condition that it must classify indistinguishable
>state-tokens together.  So "mental state indistinguishable from the one
>I'm in now" is a weak type but not a strong type, and similarly for
>"brain state indistinguishable from the one I'm in now".  Let's call
>a "strong nomological connection" a nomological connection between
>strong brain-state types and strong mental-state types, and similarly
>for a "weak nomological connection" 

Your distinction is mathematically obvious, and was anticipated by me, as
you will see below.  However, references to literature are always welcome;
I would particularly appreciate them in this case.  One more quibble: in
defining a weak type, you should, in accordance with mathematical
practice, appeal to the notion of a class of mental or physical states
characterized by a well-defined common property, rather than to that of an
arbitrary equivalence class under token indiscernibility of such states.

DC:
>Now, there are plenty of good reasons why there may be no strong
>psychophysical nomological connections.  Multiple realizability
>shows that there can't be any simple bidirectional connections,
>and Davidson's subtler arguments, if correct, suggest that there
>may not even be unidirectional connections, due to the differing
>commitments of the mental and physical vocabularies.  

So far, so good.

DC:
>                                                     On the other
>hand, to speak of supervenience without weak nomological connections
>is incoherent.  

Not so.

DC:
>               Therefore, by the principle of charity, when you
>spoke of supervenience without nomological connections I naturally
>assumed you were talking about strong types.  One doesn't assume
>one's correspondent is uttering a contradiction in terms unless
>one is forced to.

Thanks for the attempted charity; alas, your deployment of the hermeneutic
principle only reveals your own conceptual limitation.  Supervenience is an
ontological property; strong anomalousness, or the absence of weak
nomological connections, on the other hand, is an epistemological
condition.  How Cartesian of you to give priority to epistemology!
However, note that we Fregeans are in no way obliged to follow suit.

DC:
>Above you make it clear, however, that you are talking about weak
>types; and indeed it is true that if there were no weak nomological
>connections, AI would be in a lot of trouble.  However, supervenience
>without weak nomological connections is incoherent.  Recall that we're
>assuming the supervenience of mental states on brain states (whether
>or not this assumption is true is irrelevant) to see what follows.
>Following Davidson's characterization of supervenience (e.g. "The
>Material Mind", p. 250 in _Essays on Actions and Events_), this means
>that it is impossible for two objects with indistinguishable brain
>states to differ in their mental states.  Rephrasing: *necessarily*,
>two objects with indistinguishable brain states have indistinguishable
>mental states.  ("Indistinguishable" here is not an epistemic property;
>it simply means differing in no physical characteristic (for brain
>states) or mental characteristic (for mental states).)

So far, no weak nomological connection is required.
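The Davidsonian characterization just quoted admits a one-line
symbolization (the notation is mine, not either correspondent's: B(x)
and M(x) for an object's total brain state and total mental state):

```latex
\Box\, \forall x\, \forall y\, \bigl( B(x) = B(y) \;\rightarrow\; M(x) = M(y) \bigr)
```

The dispute that follows turns on what the modal operator commits one to.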

DC:
>Note the modal operator above.  It doesn't matter whether this
>necessity is conceptual, metaphysical, or nomological, as all of
>these imply nomological necessity.

Wrong.  A strong implication construed as metaphysical necessity in no way
presupposes the existence of a nomological connection between the
antecedent and the consequent.  Does the word `ineffable' ring any bells?
Why should I grant you a strong form of explanatory rationalism?  Where is
it written that every necessary connection in the world will grant us a
licence to discover its inner structure?  Epistemically speaking, we may
very well come to know *that* P --> Q, at the same time coming to know the
impossibility of ever finding out the hows or the whys.  Now, what if it's
the case that P_1 --> Q, P_2 --> Q, ... , P_n --> Q, and there's no way to
define the set {P_1, P_2, ... , P_n} by its membership criteria?

DC:
>                                   So: let us take any given
>brain-state token b (say, that of George Bush at 6pm 12/24/91), and
>the associated mental-state token m.  Let B and M be the corresponding
>weak brain-state and mental-state types (consisting of the classes of
>brain-state and mental-state tokens indistinguishable from b and m
>respectively).  It follows from the above that it is nomologically
>necessary that any two objects that instantiate B also instantiate
>M.  i.e. "if B then M" is a weak nomological connection of precisely
>the kind whose existence you deny.

Ah, but the nomological connection you seek is the one involving the
characteristic property of the weak-type brain state that results in the
production of the mental-state token m.  Strong anomalousness, which, as
noted above, is wholly compatible with supervenience, will ipso facto deny
you any hope of ever discovering this property.

DC:
>So: nomological connections between weak brain-state and mental-state
>types follow from the very meaning of the claim that mental states
>supervene on brain states.  Furthermore, this inference is essentially
>trivial.  I therefore conclude, as before, that either you fail to
>understand the meaning of "supervenience", or you lack the ability to
>make trivial inferences.

Isn't it time we toned down the chest-thumping?  Being no less arrogant
than you, I appreciate your charming self-confidence; however I note once
again that it is not conducive to the goal of an impartial search for truth.

Now for some references.  You will undoubtedly scoff once again at a second
reference to the 1989 "Mind" paper by McGinn, reprinted as the first
chapter of "The Problem of Consciousness".  More's the pity: the same kind
of argument, made *more geometrico* can be found in a 1985 "Erkenntnis"
paper by Putnam, not surprisingly, referenced on pp. xv and 118 of
"Representation and Reality".  Read it and weep.

MZ:
>>No: if the system is a *correct* implementation, then it has the right
>>causal structure.  Now define correctness in a non-question-begging way.

DC:
>"Correct implementation"?  An "incorrect implementation" is simply not
>an implementation, any more than a paper-shredder is.

Fine.

MZ:
>>I repeat: what is implementation?

DC:
>There are many different ways in which one can define implementation,
>but they are all relevantly similar in kind.

Once again, I would appreciate references for all definitions.

DC:
>                                             Start with FSA's.  Take
>a simple FSA "program", e.g. "S1->S2, S2->S3, S3->S1" (I leave aside
>inputs and outputs for simplicity; they are treated in a similar
>fashion).  Then a physical system implements this FSA iff there is
>a partitioning of its states into 3 disjoint classes s1, s2, s3, such
>that its being in s1 causes it to go into s2, and so on.  (Other 
>restrictions may be added, but this part is the core.)

How sweet.  Note that you are defining an isomorphism between causes and
reasons, i.e. between the physical structure of the system and the logical
structure of the FSA.  (Recall my Schopenhauer quotation.)  Now, given that
my earlier thesis of intensionality of physical laws with respect to the
laws of logic is both incontrovertible and largely uncontroversial
(consider the failure of logicism; if mathematics is not reducible to
logic, then, a fortiori, neither is physics; see the discussion of subject
reduction in Popper & Eccles, "The Self and Its Brain", pp. 16--21), all
that you can get in practice is a homomorphism from the former to the
latter.  Whence my earlier conclusion: your notion of implementation is
doing the work of stipulating the causal structure of the physical system;
the program has very little say in it.
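Chalmers's partition criterion for FSA implementation, quoted above, is
easy to make concrete.  Here is a minimal sketch (the six-state toy
dynamics, the function names, and the pairwise partition are my own
illustration, not anything either correspondent proposes):

```python
# Chalmers's criterion, toyed out: a "physical system" is modelled as a
# deterministic step function on a finite set of micro-states.  It
# implements the FSA program "S1->S2, S2->S3, S3->S1" iff its
# micro-states can be partitioned into classes s1, s2, s3 such that
# every micro-state labelled s1 evolves into one labelled s2, and so on.

def implements(micro_states, step, partition, fsa):
    """Does `partition` (micro-state -> FSA label) witness an
    implementation of `fsa` (label -> label) by the dynamics `step`?"""
    if set(partition) != set(micro_states):  # every micro-state labelled
        return False
    # Each physical transition must track the FSA program.
    return all(partition[step(m)] == fsa[partition[m]]
               for m in micro_states)

# Toy system: six micro-states cycling 0 -> 1 -> ... -> 5 -> 0.
micro = list(range(6))
step = lambda m: (m + 1) % 6
fsa = {"S1": "S2", "S2": "S3", "S3": "S1"}

good = {0: "S1", 1: "S2", 2: "S3", 3: "S1", 4: "S2", 5: "S3"}
bad  = {0: "S1", 1: "S1", 2: "S2", 3: "S2", 4: "S3", 5: "S3"}

print(implements(micro, step, good, fsa))  # True: transitions respected
print(implements(micro, step, bad, fsa))   # False: 0 -> 1 stays in S1
```

Note that nothing in the check constrains the choice of partition: the
criterion holds or fails only relative to a labelling we supply, which is
one way to read Zeleny's complaint that the notion of implementation "is
doing the work of stipulating the causal structure" of the system.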

DC:
>The story for Turing machines is similar, though I won't spell out
>the details -- it simply involves ensuring a mapping from tape-states
>and possibly head-states to states of the physical system so that
>the state-transitions come out right.  The story for C programs is
>more complex still, but still similar in kind.  Each case involves
>a mapping from abstract states to physical states, and a requirement
                                                    ^^^^^^^^^^^^^^^^^
>that the causal relations between the physical states satisfy certain
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>conditions. 
 ^^^^^^^^^^
Pray tell.


DC:
>            There's really nothing particularly difficult about this,
>so I don't see why you have such trouble with the notion.

Intensionality. 

>-- 
>Dave Chalmers                            (dave@cogsci.indiana.edu)      
>Center for Research on Concepts and Cognition, Indiana University.
>"It is not the least charm of a theory that it is refutable."


`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`


