Article 2696 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2696 sci.philosophy.tech:1835
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spdcc!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons
Message-ID: <1992Jan14.004439.7502@husc3.harvard.edu>
Date: 14 Jan 92 05:44:36 GMT
References: <1991Dec28.221923.17443@bronze.ucs.indiana.edu> 
 <1992Jan6.001554.7136@husc3.harvard.edu> <1992Jan10.004011.23299@bronze.ucs.indiana.edu>
Organization: Dept. of Math, Harvard Univ.
Lines: 160
Nntp-Posting-Host: zariski.harvard.edu

In article <1992Jan10.004011.23299@bronze.ucs.indiana.edu> 
chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>In article <1992Jan6.001554.7136@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>>Objection: in the absence of a nomological connection you are not justified
>>in referring to state S -- consider that the mental state M may be
>>realized by infinitely many computational states {S: P(S)}, with P located
>>arbitrarily high in the arithmetic (or even the analytic) hierarchy.

DC:
>An interesting point, which shows up an ambiguity in talk of "supervening
>on computational state" -- the class of computational states, unlike that
>of physical states, is not closed under infinite conjunction.  So while
>there's no ambiguity in talk of supervenience on physical state (since
>"same physical state => same mental state" and "same physical states =>
>same mental states" come to the same thing), the same isn't true of
>computational supervenience.  I've adopted the "same computational state
>=> same mental state" reading.  If one adopted the second reading, one
>would have to allow that each of the infinitely many computational states
>that a given system realizes could be relevant to the determination
>relation.  I'm fairly confident that Putnam meant something closer to the
>first, but it's difficult to say for sure, given his very brief treatment.
>In any case, even under the second reading, the failure of strong AI is not
>implied, even in conjunction with the lack of type identities; that failure
>is simply not ruled out, as it is under the first.
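
For definiteness, the two readings may be put thus (my notation, not
Chalmers's), with x, y ranging over systems, S over computational state
types, and M over mental state types:

  Reading 1:  S(x) & S(y)  =>  (M(x) <=> M(y)),
              for a single computational state type S;
  Reading 2:  [for every S: S(x) <=> S(y)]  =>  (M(x) <=> M(y)),
              i.e. only total agreement need suffice.

Since physical states are closed under infinite conjunction, the total
physical condition is itself a physical state, and the two readings
coincide for physical supervenience; computational states are not so
closed, whence the ambiguity.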

If I wanted Putnam's opinion on the subject, I would have asked him.  My
understanding is that our disagreement concerns a substantive issue,
rather than a question of hermeneutics.  The issue in question is
whether the thesis of mental states' supervenience on computational states
is sufficient for ensuring the success of strong AI.  My counterexample
above demonstrates that it is not.  Note that, should the above situation
obtain, the possibility of AI is indeed ruled out, since in the general
case it would be impossible to control the mental states of a machine by
programming its computational states.
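
To make the difficulty concrete, here is a minimal runnable sketch (in
Python; the model and all names are mine, purely for illustration).  Take
the computational states S to be programs of a toy counter machine, and
let P(S) hold iff S halts when started from zero.  Already at the bottom
of the hierarchy, P is merely semi-decidable for the full machine model,
so no effective procedure sorts the realizers of M from the non-realizers:

  # Hypothetical toy model: computational states are counter-machine
  # programs; P(S) iff S halts.  For the unrestricted machine model P
  # is Sigma_1-complete, so {S : P(S)} admits no decision procedure.

  def run(program, fuel):
      """Semi-decide P: True if `program` halts within `fuel` steps.
      A False is no verdict at all -- merely exhausted fuel."""
      pc, counter = 0, 0
      for _ in range(fuel):
          op = program[pc]
          if op[0] == 'halt':
              return True
          elif op[0] == 'inc':
              counter, pc = counter + 1, pc + 1
          elif op[0] == 'dec':
              counter, pc = max(0, counter - 1), pc + 1
          elif op[0] == 'jnz':                # jump to op[1] if nonzero
              pc = op[1] if counter else pc + 1
      return False

  halter = [('inc',), ('dec',), ('halt',)]    # P holds: realizes M
  looper = [('inc',), ('jnz', 0)]             # loops forever: P fails

  print(run(halter, 100))    # True
  print(run(looper, 100))    # False, but only "no verdict yet"

A programmer can confirm P(S) when it happens to hold, but can never
effectively rule it out; for P located higher in the hierarchy, even the
confirmation lapses.  So much for controlling M by programming S.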

MZ:
>>I guess your saying that the thesis of supervenience makes no epistemic
>>claims constitutes a retraction of your earlier claim that "supervenience
>>without weak nomological connections is incoherent", or that "nomological
>>connections between weak brain-state and mental-state types follow from the
>>very meaning of the claim that mental states supervene on brain states",

DC:
>No, as the notion of nomological necessity that I use is not an
>epistemological one.  Nomological necessity simply requires a regularity
>that carries appropriate counterfactual force.  Perhaps this is a simple
>terminological difference; in any case, it's not relevant to the substantive
>point under discussion.

This is hardly a terminological difference, and you are altogether wrong.
The thesis of anomalous monism "denies that there can be strict laws
connecting the mental and the physical" (Davidson, p.212); in other words,
it's an even stronger claim than the one I presented above.  So the
regularity in question must be _lawlike_; and if this isn't an epistemic
criterion, I don't know what is.

MZ:
>>Does "strong AI", especially as characterized by
>>Searle, make any epistemic claims?  Well, Searle writes: "One could
>>summarize this view -- I call it `strong artificial intelligence', or
>>`strong AI' -- by saying that the mind is to the brain, as the program is
>>to the computer hardware."

DC:
>Searle defines "strong AI" in different ways at different times.  However,
>the definition he keeps coming back to is the claim that "an appropriately
>programmed computer would literally *have* a mind" (in virtue of
>implementing the appropriate program).  This is the only claim which I have
>any interest in defending; furthermore, it's the claim that almost all of
>Searle's arguments are concerned to refute.  The "program/hardware" claim
>quoted above is far too loose to defend, and I probably don't believe it in
>any case.  This is very clear in the article that started this discussion.

Very well.  So far my point is that you need nomological monism, or at
least nomological supervenient functionalism, in order to defend your
claim; and this sort of connection, as was noted earlier, falls victim to
Putnam's argument in "Representation and Reality".

MZ:
>>Incidentally, would you care to explain how flawless
>>performance could be modeled without modeling prescriptive inductive
>>competence, assuming that you could suspend your disbelief in the latter?

DC:
>I don't have any stake in modeling "flawless" performance.  I'm not at all
>sure that it's possible.  It's certainly not required for the success of AI.

By `flawless', I simply mean indistinguishable from human.

MZ:
>>Very well.  Am I allowed to conclude that you are retracting your earlier
>>claims that "programs are a way of formally specifying causal structures",
>>and that "physical systems which implement a given program *have* that
>>causal structure, physically", given that the burden of determining the
>>referent of the demonstrative pronoun (`that') falls not on the programmer,
>>but on the engineer in charge of the program's implementation?

DC:
>No.  As I've made clear a number of times, the role of the engineer is
>essentially trivial as long as the notion of implementation is determinate.

And, as I've made clear a number of times, the role of the engineer can't
be trivial as long as the notion of implementation is defined in terms of
an isomorphism between the logical structure of the program and the causal
structure of the machine implementing it.

In article <1991Dec25.042628.18737@bronze.ucs.indiana.edu>
chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

DC:
>There are many different ways in which one can define implementation,
>but they are all relevantly similar in kind.  Start with FSA's.  Take
>a simple FSA "program", e.g. "S1->S2, S2->S3, S3->S1" (I leave aside
>inputs and outputs for simplicity; they are treated in a similar
>fashion).  Then a physical system implements this FSA iff there is
>a partitioning of its states into 3 disjoint classes s1, s2, s3, such   
>that its being in s1 causes it to go into s2, and so on.  (Other
>restrictions may be added, but this part is the core.)

To reiterate (please try to address my point this time around): note that
you are defining an isomorphism between causes and reasons, i.e. between
the physical structure of the system and the logical structure of the FSA.
(Recall the distinction between causes and reasons made by Schopenhauer.)
Now, my earlier thesis of the intensionality of physical laws with respect
to the laws of logic is both incontrovertible and largely uncontroversial:
consider the failure of logicism; if mathematics is not reducible to logic,
then, a fortiori, neither is physics (see the discussion of subject
reduction in Popper & Eccles, "The Self and Its Brain", pp. 16--21).  Given
as much, all that you can get in practice is a homomorphism from the former
to the latter.  Whence my earlier conclusion: your notion of implementation
is doing the work of stipulating the causal structure of the physical
system; the program has very little say in it.  In other words, the program
itself is incapable of formalizing the causal structure of the machine
executing it, since the burden of determining this structure is borne by
whoever ensures the correctness of the implementation, and since equally
correct implementations may in practice result in different causal
structures.
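
To fix ideas, here is a minimal runnable sketch (in Python; the encoding
is mine, purely for illustration) of the implementation relation you
define above.  Two toy "physical systems" with visibly different causal
structures both implement your three-state FSA; the map from physical
states to FSA states is many-to-one -- a homomorphism, not an isomorphism:

  # The FSA "program" quoted above: S1->S2, S2->S3, S3->S1.
  FSA = {'S1': 'S2', 'S2': 'S3', 'S3': 'S1'}

  def implements(step, labelling):
      """step: the system's dynamics (physical state -> physical state);
      labelling: assignment of physical states to classes s1, s2, s3.
      Holds iff the physical evolution commutes with the FSA map."""
      return all(labelling[step(x)] == FSA[labelling[x]] for x in labelling)

  # System A: three physical states, matching the FSA one-to-one.
  step_a = {0: 1, 1: 2, 2: 0}.get
  part_a = {0: 'S1', 1: 'S2', 2: 'S3'}

  # System B: six physical states and a different causal structure,
  # coarse-grained two-to-one onto the very same FSA.
  step_b = {0: 2, 1: 3, 2: 4, 3: 5, 4: 0, 5: 1}.get
  part_b = {0: 'S1', 1: 'S1', 2: 'S2', 3: 'S2', 4: 'S3', 5: 'S3'}

  print(implements(step_a, part_a))    # True
  print(implements(step_b, part_b))    # True: same program, different causes

The FSA is satisfied equally by both systems; which causal structure
obtains is settled by the choice of dynamics and labelling -- that is, by
the engineer, not by the program.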


`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: What is good?  What is ugly?                             Harvard   :
: What is great, strong, weak...                           doesn't   :
: Don't know!  Don't know!                                  think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`


