From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Thu Dec 26 23:58:31 EST 1991
Article 2400 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2400 sci.philosophy.tech:1618
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Causes and Reasons
Message-ID: <1991Dec25.042628.18737@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1991Dec24.014716.6901@husc3.harvard.edu>
Date: Wed, 25 Dec 91 04:26:28 GMT
Lines: 127

In article <1991Dec24.014716.6901@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

>You claimed functionalism; when I suggested that Putnam, whose book you
>included in your annotated bibliography, has given a pretty convincing
>refutation thereof, you retreated, claiming that if Putnam's arguments
>succeed, they show only that mental states cannot be type-identified with
>functional states; however all that the theses of strong AI call for is
>token identity, or even mere supervenience.

No retreat.  My original phrasing was "this view (`functionalism', though
this word is a dangerous one to sling around with its many meanings)".
Putnam's argument, if it succeeds at all, only refutes one version
of functionalism, namely the Functional State Identity Theory.  Other
varieties of functionalism, e.g. supervenient functionalism and
functionalism about psychological explanation, are unaffected.  Your
identification of functionalism solely with FSIT indicates your naivete
in this area.  From context, it should have been quite clear that I
was invoking supervenient functionalism, not FSIT.

>I replied then what I will elaborate now: in principle and by definition,
>an anomalous connection will give you no regular, rule-determinable
>regularity (`nomos' is Greek for law or convention) between brain-states
>and mental states; and the lack of such regularity will prevent you from
>circumscribing "the supervenience base" (i.e. a well-defined set of brain
>states) of *any* given instance of a calculation.
>
>Note that my claim, in granting your assumption of supervenience of mental
>states on brain states, but denying the existence of type-type laws, denies
>the possibility that you may identify not only a correspondence between
>token-states of brain activity and any given token- or type-state of mental
>activity, but also the possibility of establishing such a correspondence
>between any class of token-states of brain (think of the meaning of `type')
>and any given mental state.  "Just take the entire brain" all you want; the
>point is that without a nomological connection you simply can't tell what
>sets of "entire brain" states are responsible for a given state of mind.

OK, it looks like it's time for a tutorial.  First let's make the obvious
distinction between "strong types" and "weak types".  A strong type is a
class of mental or physical states subsumed under the usual categories
from mental or physical vocabulary -- e.g. "belief that P" or "C-fibres
firing".  A weak type is any class of mental or physical states at all,
subject only to the condition that it must classify indistinguishable
state-tokens together.  So "mental state indistinguishable from the one
I'm in now" is a weak type but not a strong type, and similarly for
"brain state indistinguishable from the one I'm in now".  Let's call
a "strong nomological connection" a nomological connection between
strong brain-state types and strong mental-state types, and similarly
for a "weak nomological connection" 

Now, there are plenty of good reasons why there may be no strong
psychophysical nomological connections.  Multiple realizability
shows that there can't be any simple bidirectional connections,
and Davidson's subtler arguments, if correct, suggest that there
may not even be unidirectional connections, due to the differing
commitments of the mental and physical vocabularies.  On the other
hand, to speak of supervenience without weak nomological connections
is incoherent.  Therefore, by the principle of charity, when you
spoke of supervenience without nomological connections I naturally
assumed you were talking about strong types.  One doesn't assume
one's correspondent is uttering a contradiction in terms unless
one is forced to.

Above you make it clear, however, that you are talking about weak
types; and indeed it is true that if there were no weak nomological
connections, AI would be in a lot of trouble.  However, supervenience
without weak nomological connections is incoherent.  Recall that we're
assuming the supervenience of mental states on brain states (whether
or not this assumption is true is irrelevant) to see what follows.
Following Davidson's characterization of supervenience (e.g. "The
Material Mind", p. 250 in _Essays on Actions and Events_), this means
that it is impossible for two objects with indistinguishable brain
states to differ in their mental states.  Rephrasing: *necessarily*,
two objects with indistinguishable brain states have indistinguishable
mental states.  ("Indistinguishable" here is not an epistemic property;
it simply means differing in no physical characteristic (for brain
states) or mental characteristic (for mental states).)

Note the modal operator above.  It doesn't matter whether this
necessity is conceptual, metaphysical, or nomological, as all of
these imply nomological necessity.  So: let us take any given
brain-state token b (say, that of George Bush at 6pm 12/24/91), and
the associated mental-state token m.  Let B and M be the corresponding
weak brain-state and mental-state types (consisting of the classes of
brain-state and mental-state tokens indistinguishable from b and m
respectively).  It follows from the above that it is nomologically
necessary that any two objects that instantiate B also instantiate
M.  That is, "if B then M" is a weak nomological connection of precisely
the kind whose existence you deny.

So: nomological connections between weak brain-state and mental-state
types follow from the very meaning of the claim that mental states
supervene on brain states.  Furthermore, this inference is essentially
trivial.  I therefore conclude, as before, that either you fail to
understand the meaning of "supervenience", or you lack the ability to
make trivial inferences.
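The inference can be set out schematically.  (The notation below is
mine, not part of the original exchange: ~P and ~M stand for physical
and mental indiscernibility respectively.)

```latex
% Davidson-style supervenience: necessarily, physical indiscernibility
% entails mental indiscernibility.
\Box\,\forall x\,\forall y\;\bigl(x \sim_P y \;\rightarrow\; x \sim_M y\bigr)

% Fix a brain-state token b with associated mental-state token m, and
% define the weak types as indiscernibility classes:
B = \{x : x \sim_P b\}, \qquad M = \{x : x \sim_M m\}

% Instantiating y with b: necessarily, anything physically
% indistinguishable from b is mentally indistinguishable from it.
\Box\,\forall x\;\bigl(x \in B \;\rightarrow\; x \in M\bigr)
```

The last line just is the weak nomological connection "if B then M",
obtained by nothing more than universal instantiation.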

>No: if the system is a *correct* implementation, then it has the right
>causal structure.  Now define correctness in a non-question-begging way.

"Correct implementation"?  An "incorrect implementation" is simply not
an implementation, any more than a paper-shredder is.

>I repeat: what is implementation?

There are many different ways in which one can define implementation,
but they are all relevantly similar in kind.  Start with FSA's.  Take
a simple FSA "program", e.g. "S1->S2, S2->S3, S3->S1" (I leave aside
inputs and outputs for simplicity; they are treated in a similar
fashion).  Then a physical system implements this FSA iff there is
a partitioning of its states into 3 disjoint classes s1, s2, s3, such
that its being in s1 causes it to go into s2, and so on.  (Other 
restrictions may be added, but this part is the core.)
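For concreteness, the core condition can be sketched in a few lines of
Python.  This is my own illustration, not anything from the post: the
names (implements, step, partition) and the toy "physical system" (the
integers 0..5 under +1 mod 6) are all hypothetical.

```python
# The abstract FSA "program": S1->S2, S2->S3, S3->S1.
FSA = {"S1": "S2", "S2": "S3", "S3": "S1"}

def implements(physical_states, step, partition):
    """Core implementation condition: for every physical state p, the
    state that p causes (step(p)) must fall into the class demanded by
    the FSA transition out of the class that p itself falls into."""
    return all(partition[step(p)] == FSA[partition[p]]
               for p in physical_states)

# Toy "physical system": six states whose causal dynamics are +1 mod 6.
physical_states = range(6)
step = lambda p: (p + 1) % 6

# Partition the six physical states into three disjoint classes,
# labelled S1, S2, S3, so that the causal structure lines up.
partition = {p: "S" + str(p % 3 + 1) for p in physical_states}

print(implements(physical_states, step, partition))  # True
```

Note that the partition is many-one: two physical states realize each
abstract state, which is exactly why the classes s1, s2, s3 are weak
types rather than single tokens.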

The story for Turing machines is similar, though I won't spell out
the details -- it simply involves ensuring a mapping from tape-states
and possibly head-states to states of the physical system so that
the state-transitions come out right.  The story for C programs is
more complex still, but still similar in kind.  Each case involves
a mapping from abstract states to physical states, and a requirement
that the causal relations between the physical states satisfy certain
conditions.  There's really nothing particularly difficult about this,
so I don't see why you have such trouble with the notion.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


