From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!ogicse!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny Mon Jan  6 10:30:37 EST 1992
Article 2504 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2504 sci.philosophy.tech:1722
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!ogicse!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons
Summary: strong AI has to make *some* epistemic claims
Message-ID: <1992Jan6.001554.7136@husc3.harvard.edu>
Date: 6 Jan 92 05:15:51 GMT
Article-I.D.: husc3.1992Jan6.001554.7136
References: <1991Dec25.042628.18737@bronze.ucs.indiana.edu> 
 <1991Dec25.015221.6911@husc3.harvard.edu> <1991Dec28.221923.17443@bronze.ucs.indiana.edu>
Organization: Dada
Lines: 211
Nntp-Posting-Host: zariski.harvard.edu

In article <1991Dec28.221923.17443@bronze.ucs.indiana.edu>
chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>In article <1991Dec25.015221.6911@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>>We shall see.  At this point I would be most happy to elicit your
>>commitment to the heuristic search for truth of the matter, rather than an
>>eristic confrontation.  If you could bring yourself "to be more pleased to
>>be refuted than to refute -- as much more as being rid oneself of the
>>greatest evil is better than ridding another of it" ("Gorgias" 458B), this
>>conversation would be much more productive for both of us.

DC:
>I'm most interested in attempts to refute any substantive thesis that I
>hold.  However, I'm yet to be convinced that the thesis I'm defending here,
>i.e. that type identities are not necessary for strong AI, but that
>supervenience on computational states is sufficient, is anything other than
>trivial.  I'm not entirely sure how we've managed to spend so many words on
>it.

Perhaps our conversation has been longer than it might have been because
you chose to espouse some additional theses, e.g. that strongly anomalous
supervenience makes no sense, or that programs can specify causal
structures.  

DC:
>From the remainder of your post, it is clear that you have a more
>restricted notion of nomological necessity than I do.  Rather than getting
>into a discussion of modality that would take us far afield, however,
>I'll just cast the argument in a way that leaves modality-talk out of it.

OK, though I seriously doubt that any talk of rules could be conducted
without modality talk.

DC:
>Premise: Mental states are supervenient on computational states.

Granted for the sake of argument.

DC:
>Now, talk of computational states is somewhat vague, but from Putnam's
>other writing we can take it that he is referring either to states of
>probabilistic automata or of Turing machines.  We'll take the latter,
>though it doesn't matter much for these purposes (anyone who finds
>probabilistic FSAs more realistic can recast the discussion
>straightforwardly).

Agreed.
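
For concreteness, the kind of object over which "a Turing machine T in
state S" quantifies can be set down in a few lines; the toy machine
below is my own construction, and nothing in the argument hangs on its
details:

    # A minimal Turing machine sketch: "T" is the transition table
    # delta, "S" is just a member of its finite state set.
    def step(state, tape, head, delta):
        """One transition: delta maps (state, symbol) to
        (new_state, new_symbol, move)."""
        symbol = tape.get(head, '0')          # blank squares read '0'
        new_state, new_symbol, move = delta[(state, symbol)]
        tape[head] = new_symbol
        return new_state, tape, head + (1 if move == 'R' else -1)

    # A two-state machine that writes two 1s and halts.
    delta = {
        ('A', '0'): ('B', '1', 'R'),
        ('B', '0'): ('HALT', '1', 'R'),
    }

    state, tape, head = 'A', {}, 0
    while state != 'HALT':
        state, tape, head = step(state, tape, head, delta)
    print(tape)        # {0: '1', 1: '1'}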

DC:
>So we can paraphrase the above claim as something like: when a human
>is in a mental state M, then that human is a realization of a Turing
>Machine T in state S, such that any physical system that realizes T in
>state S will be in mental state M.
>
>(This is reducing supervenience to a determination claim and a dependency
>claim, in the common fashion.  The determination claim (the latter half)
>is straightforward.  The dependency claim (the first half) is not always
>part of supervenience, but in this case we can take it that this follows
>from Putnam's phrasing ("mental states are supervenient on *our*
>computational states", i.e. we actually realize certain computational
>states on which our mental states supervene).)

Objection: in the absence of a nomological connection you are not
justified in referring to state S; consider that the mental state M may
be realized by infinitely many computational states {S : P(S)}, with P
located arbitrarily high in the arithmetic (or even the analytic)
hierarchy.
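
To make the complaint explicit, here is the shape of the claim in
notation of my own choosing, with Real(x,T,S) abbreviating "x realizes
T in state S":

    \forall x \, (\mathrm{Real}(x,T,S) \rightarrow M(x))         % determination
    M(h) \rightarrow \exists T \, \exists S \; \mathrm{Real}(h,T,S)   % dependency

The dependency clause secures only *some* realizing pair (T,S); if the
realizing states form a set {S : P(S)} with P of unbounded complexity
(\Sigma^0_n for arbitrary n, say), then no one of them earns the
definite article without a nomological link to do the selecting.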

DC:
>So, given that there exists at least one human in a mental state (e.g.
>understanding): it follows that there exists a Turing machine such that
>any system that realizes that Turing machine (in the appropriate state)
>possesses that mental state.  This is precisely "strong AI" as
>characterized by Searle.

While your error could be glossed over in the initial discussion, owing
to the absence of the definite article in your talk of "state S", it is
quite apparent in the above phrasing.  Without a nomological brain-mind
connection, you are not justified in referring to "the appropriate state";
without reference to the appropriate state, your argument has no force.
Please note that this point has been made earlier; this time I would like
to get an answer.

As for your notion of what constitutes "strong AI" as characterized by
Searle, more on that anon.

DC:
>Note that epistemological points are entirely irrelevant.  Neither
>supervenience nor "strong AI" makes any epistemic claim.  Perhaps
>this is one source of the length of this discussion.  I have at no
>stage been trying to argue for any epistemic claim, e.g. to the effect
>that we could know which computational states our mental states emerge
>from; although as a matter of fact I believe this claim and haven't seen
>any good arguments against it.  However, it is certainly true that
>supervenience alone would not suffice to establish this claim.

I guess your saying that the thesis of supervenience makes no epistemic
claims constitutes a retraction of your earlier claim that "supervenience
without weak nomological connections is incoherent", or that "nomological
connections between weak brain-state and mental-state types follow from the
very meaning of the claim that mental states supervene on brain states",
and that "furthermore, this inference is essentially trivial".  Very well;
now to your second claim.  Does "strong AI", especially as characterized by
Searle, make any epistemic claims?  Well, Searle writes: "One could
summarize this view -- I call it `strong artificial intelligence', or
`strong AI' -- by saying that the mind is to the brain, as the program is
to the computer hardware." (The Second Reith Lecture.)  Now, to say that is
certainly not to say that we could determine the identity of the program;
however, it does imply that we can say certain things about its
structure.  

Indeed, as you would undoubtedly agree, to say that the brain has a
computational structure is not to say anything terribly meaningful: most
processes can be interpreted as possessed of a computational structure.
The game of Life can be used to model the Universal Turing Machine, and I
have no doubt that so can Earth's biosphere.  The interesting claim is that
the mind-brain relation is analogous to that of software to hardware; once
you consider certain common properties of programs (e.g. finitude), you
will surely agree that this is indeed an epistemic claim, albeit not one
implying that the program in question be discoverable by human means.  In
short, I hereby claim that you still stand in need of postulating some sort
of nomological brain-mind connection, i.e. that supervenience alone just
won't do the trick.
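
To see how cheaply a "computational structure" is had, observe that the
whole of Life's dynamics fits in a dozen lines; the universality result
then rests on nothing more than the standard glider-logic
constructions.  The rendering below is a sketch of my own:

    # Conway's Game of Life: live cells as a set of (x, y) pairs.
    from itertools import product

    def neighbours(cell):
        x, y = cell
        return {(x + dx, y + dy)
                for dx, dy in product((-1, 0, 1), repeat=2)
                if (dx, dy) != (0, 0)}

    def step(live):
        counts = {}
        for cell in live:
            for n in neighbours(cell):
                counts[n] = counts.get(n, 0) + 1
        # birth on exactly 3 neighbours; survival on 2 or 3
        return {c for c, k in counts.items()
                if k == 3 or (k == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)      # four steps translate the glider
    print(sorted(glider))          # diagonally by (1, 1)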

DC:
>Some loose ends:

MZ:
>>However, references to literature are always welcome; I would
>>particularly appreciate them in this case.

DC:
>The "strong/weak type" terminology was invented by me on the spot
>to capture an obvious distinction that usually seems to go nameless.

Good for you.

MZ:
>>Now for some references.  You will undoubtedly scoff once again at a second
>>reference to the 1989 "Mind" paper by McGinn, reprinted as the first
>>chapter of "The Problem of Consciousness".  More's the pity: the same kind
>>of argument, made *more geometrico*, can be found in a 1985 "Erkenntnis"
>>paper by Putnam, not surprisingly, referenced on pp. xv and 118 of
>>"Representation and Reality".  Read it and weep.

DC:
>As you know, I find McGinn's argument entirely unconvincing, but in any
>case he is making at most an epistemological point, and one that he concedes
>is compatible with the truth of strong AI.  I like Putnam's paper more, but
>it only applies to idealized "prescriptive inductive competences",
>which I don't believe in; and if I did, I'd probably be happy with the idea
>that they are non-recursive.  AI doesn't need to model this "competence"
>to succeed -- performance is quite enough.  Finally, this too is at most an
>epistemological point, so it doesn't count against the truth of strong AI.

If the software analogue of a mind is a program written in a Tarskian
infinitary language, I can't see the strong AI proponents being very happy.
Like it or not, some epistemological points are going to be relevant to
this issue.  Incidentally, would you care to explain how flawless
performance could be modeled without modeling prescriptive inductive
competence, assuming that you could suspend your disbelief in the latter?

DC:
>>>There are many different ways in which one can define implementation,
>>>but they are all relevantly similar in kind.

MZ:
>>Once again, I would appreciate references for all definitions.

DC:
>I don't have a whole lot of references on this handy, but Putnam
>gives this sort of definition in various papers (e.g. "The nature
>of mental states"), though he talks about "realization" or "description"
>rather than implementation.  See also Lycan's "Mental States and Putnam's
>Functionalist Hypothesis", Australasian Journal of Philosophy 52:48-62, 1974.

Thank you.

MZ:
>>Whence my earlier conclusion: your notion of implementation is
>>doing the work of stipulating the causal structure of the physical system;
>>the program has very little say in it.

DC:
>As I've said all along: without the notion of implementation, the program
>comes to nothing.

Very well.  Am I allowed to conclude that you are retracting your earlier
claims that "programs are a way of formally specifying causal structures",
and that "physical systems which implement a given program *have* that
causal structure, physically", given that the burden of determining the
referent of the demonstrative pronoun (`that') falls not on the programmer,
but on the engineer in charge of the program's implementation?
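
The point admits a concrete illustration (the toy construction below is
entirely my own): one and the same transition table is realized by two
causally dissimilar mechanisms, and nothing in the table itself
adjudicates between them.

    # One abstract "program" (a parity automaton), two realizations.
    TABLE = {('even', 1): 'odd',  ('even', 0): 'even',
             ('odd',  1): 'even', ('odd',  0): 'odd'}

    def run_lookup(bits):
        """Realization 1: explicit state transitions by table lookup."""
        state = 'even'
        for b in bits:
            state = TABLE[(state, b)]
        return state

    def run_arith(bits):
        """Realization 2: the 'state' is a running sum modulo 2; a
        different causal organization, the same abstract machine."""
        return 'odd' if sum(bits) % 2 else 'even'

    assert run_lookup([1, 0, 1, 1]) == run_arith([1, 0, 1, 1]) == 'odd'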

>-- 
>Dave Chalmers                            (dave@cogsci.indiana.edu)      
>Center for Research on Concepts and Cognition, Indiana University.
>"It is not the least charm of a theory that it is refutable."

`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: "Today, decent Jews themselves recognize          Edward G. Nilges :
: "that their nation has become a land of madmen."                   :
:                                                           Harvard  :
: Mikhail Zeleny                                            probably :
: 872 Massachusetts Ave., Apt. 707                          doesn't  :
: Cambridge, Massachusetts 02139           (617) 661-8151   think    :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET   so       :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`


