Newsgroups: comp.ai.philosophy,sci.philosophy.tech
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Causes and Reasons
Message-ID: <1991Dec28.221923.17443@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1991Dec24.014716.6901@husc3.harvard.edu> <1991Dec25.042628.18737@bronze.ucs.indiana.edu> <1991Dec25.015221.6911@husc3.harvard.edu>
Date: Sat, 28 Dec 91 22:19:23 GMT
Lines: 104

In article <1991Dec25.015221.6911@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

>We shall see.  At this point I would be most happy to elicit your
>commitment to the heuristic search for truth of the matter, rather than an
>eristic confrontation.  If you could bring yourself "to be more pleased to
>be refuted than to refute -- as much more as being rid oneself of the
>greatest evil is better than ridding another of it" ("Gorgias" 458B), this
>conversation would be much more productive for both of us.

I'm most interested in attempts to refute any substantive thesis that I
hold.  However, I have yet to be convinced that the thesis I'm defending
here -- that type identities are not necessary for strong AI, but that
supervenience on computational states is sufficient -- is anything other
than trivial.  I'm not entirely sure how we've managed to spend so many
words on it.

From the remainder of your post, it is clear that you have a more
restricted notion of nomological necessity than I do.  Rather than getting
into a discussion of modality that would take us far afield, however,
I'll just cast the argument in a way that leaves modality-talk out of it.

Premise: Mental states are supervenient on computational states.

Now, talk of computational states is somewhat vague, but from Putnam's
other writing we can take it that he is referring either to states of
probabilistic automata or to states of Turing machines.  We'll take the
latter, though it doesn't matter much for these purposes (anyone who finds
probabilistic FSAs more realistic can recast the discussion
straightforwardly).

So we can paraphrase the above claim as something like: whenever a human
is in a mental state M, that human is a realization of a Turing machine T
in a state S, such that any physical system that realizes T in state S
will be in mental state M.

(This is reducing supervenience to a determination claim and a dependency
claim, in the common fashion.  The determination claim (the latter half)
is straightforward.  The dependency claim (the first half) is not always
part of supervenience, but in this case we can take it to follow from
Putnam's phrasing ("mental states are supervenient on *our* computational
states"), i.e. we actually realize certain computational states on which
our mental states supervene.)
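
To make the quantifier structure explicit, here is one schematic
first-order rendering (the notation is mine, not Putnam's: R(x,T,S) for
"x realizes Turing machine T in machine state S", M(x) for "x is in
mental state M"):

    \forall x\,\bigl[M(x) \rightarrow \exists T\,\exists S\,\bigl(R(x,T,S)
        \;\wedge\; \forall y\,(R(y,T,S) \rightarrow M(y))\bigr)\bigr]

The first conjunct is the dependency claim, the second the determination
claim.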

So, given that there exists at least one human in a mental state (e.g.
understanding), it follows that there exists a Turing machine such that
any system that realizes that Turing machine (in the appropriate state)
possesses that mental state.  This is precisely "strong AI" as
characterized by Searle.
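
Spelled out, the inference is just existential instantiation followed by
generalization (again in my schematic notation, with h an arbitrary
human):

    (1)  \exists x\, M(x)                        [someone understands]
    (2)  M(h)                                    [existential instantiation]
    (3)  \exists T\,\exists S\,(R(h,T,S) \wedge \forall y\,(R(y,T,S)
             \rightarrow M(y)))                  [from (2) and the premise]
    (4)  \exists T\,\exists S\,\forall y\,(R(y,T,S) \rightarrow M(y))
                                                 [from (3)]

Line (4) says that some Turing machine, in some state, is such that
realizing it suffices for the mental state -- which is the strong AI
claim.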

Note that epistemological points are entirely irrelevant.  Neither
supervenience nor "strong AI" makes any epistemic claim.  Perhaps
this is one source of the length of this discussion.  I have at no
stage been trying to argue for any epistemic claim, e.g. to the effect
that we could know which computational states our mental states emerge
from; although as a matter of fact I believe this claim and haven't seen
any good arguments against it.  However, it is certainly true that
supervenience alone would not suffice to establish this claim.

Some loose ends:

>However, references to literature are always welcome; I would
>particularly appreciate them in this case.

The "strong/weak type" terminology was invented by me on the spot
to capture an obvious distinction that usually seems to go nameless.

>Now for some references.  You will undoubtedly scoff once again at a second
>reference to the 1989 "Mind" paper by McGinn, reprinted as the first
>chapter of "The Problem of Consciousness".  All's the pity: the same kind
>of argument, made *more geometrico* can be found in a 1985 "Erkenntnis"
>paper by Putnam, not surprisingly, referenced on pp. xv and 118 of
>"Representation and Reality".  Read it and weep.

As you know, I find McGinn's argument entirely unconvincing, but in any
case he is making at most an epistemological point, and one that he concedes
is compatible with the truth of strong AI.  I like Putnam's paper more, but
it only applies to idealized "prescriptive inductive competences",
which I don't believe in; and if I did, I'd probably be happy with the idea
that they are non-recursive.  AI doesn't need to model this "competence"
to succeed -- performance is quite enough.  Finally, this too is at most an
epistemological point, so it doesn't count against the truth of strong AI.

>>There are many different ways in which one can define implementation,
>>but they are all relevantly similar in kind.
>
>Once again, I would appreciate references for all definitions.

I don't have a whole lot of references on this handy, but Putnam
gives this sort of definition in various papers (e.g. "The Nature of
Mental States"), though he talks about "realization" or "description"
rather than implementation.  See also Lycan's "Mental States and Putnam's
Functionalist Hypothesis", Australasian Journal of Philosophy 52 (1974):
48-62.

>Whence my earlier conclusion: your notion of implementation is
>doing the work of stipulating the causal structure of the physical system;
>the program has very little say in it.

As I've said all along: without the notion of implementation, the program
comes to nothing.
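
For concreteness, here is a toy sketch (in Python; the function name and
the restriction to input-free finite automata are my simplifications, not
anything in Putnam or Lycan) of the state-mapping style of definition
these papers gesture at: a physical system, idealized as a deterministic
transition structure over finitely many states, realizes an automaton
just in case some labelling of physical states by automaton states
commutes with the dynamics.

    from itertools import product

    def realizes(phys_states, phys_step, fsa_states, fsa_step):
        """Search for a labelling f : physical state -> automaton state
        such that stepping-then-labelling equals labelling-then-stepping.
        phys_step and fsa_step map each state to its successor."""
        for labels in product(fsa_states, repeat=len(phys_states)):
            f = dict(zip(phys_states, labels))
            if all(f[phys_step[s]] == fsa_step[f[s]] for s in phys_states):
                return f      # a witnessing realization
        return None           # no commuting labelling exists

    # A two-state oscillator realizes the two-state "flip" automaton:
    print(realizes(["hot", "cold"], {"hot": "cold", "cold": "hot"},
                   ["A", "B"], {"A": "B", "B": "A"}))
    # -> {'hot': 'A', 'cold': 'B'}

On any definition of this shape, the program by itself settles nothing:
all the work is done by whether a mapping exists that respects the
physical system's transition structure -- which is just the point at
issue above.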

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."