From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Thu Jan 16 17:19:23 EST 1992
Article 2611 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2611 sci.philosophy.tech:1788
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Causes and Reasons
Message-ID: <1992Jan10.004011.23299@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1991Dec25.015221.6911@husc3.harvard.edu> <1991Dec28.221923.17443@bronze.ucs.indiana.edu> <1992Jan6.001554.7136@husc3.harvard.edu>
Date: Fri, 10 Jan 92 00:40:11 GMT
Lines: 71

In article <1992Jan6.001554.7136@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

>Objection: in the absence of a nomological connection you are not justified
>in referring to state S, -- consider that the mental state M may be
>realized by infinitely many computational states {S: P(S)}, with P located
>arbitrarily high in the arithmetic (or even the analytic) hierarchy.

An interesting point, which brings out an ambiguity in talk of "supervening
on computational state" -- the class of computational states, unlike that
of physical states, is not closed under infinite conjunction.  So while
there's no ambiguity in talk of supervenience on physical state -- "same
physical state => same mental state" and "same physical states =>
same mental states" come to the same thing -- the same isn't true of
computational supervenience.  I've adopted the "same computational state
=> same mental state" reading.  If one adopted the second reading, then
one would have to allow that each of the infinitely many computational
states that a given system realizes could be relevant to the determination
relation.  I'm fairly confident that Putnam meant something closer to the
first, but his treatment is too brief to say for sure.
In any case, even under the second reading, the failure of strong AI is not
implied, even in conjunction with the lack of type identities; it's simply
not ruled out, as it is under the first.
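For concreteness, the two readings can be written out schematically (my
notation, not Putnam's or Chalmers's; take R(x) to be the set of
computational states a system x realizes, and M(x) its mental state):

```latex
% Reading 1 ("same computational state => same mental state"):
% sharing a single relevant computational state S suffices to
% fix mental state.
\exists S \; [\, S \in R(x) \wedge S \in R(y) \,] \;\Rightarrow\; M(x) = M(y)

% Reading 2 ("same computational states => same mental states"):
% mental state is fixed only by the entire, possibly infinite,
% set of realized computational states.
R(x) = R(y) \;\Rightarrow\; M(x) = M(y)
```

Reading 1 is the stronger determination claim; Reading 2 is weaker, since
two systems can share many individual states while differing somewhere in
the infinite conjunction, which is why it leaves strong AI open rather
than entailing its failure.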

>I guess you're saying that the thesis of supervenience makes no epistemic
>claims constitutes a retraction of your earlier claim that "supervenience
>without weak nomological connections is incoherent", or that "nomological
>connections between weak brain-state and mental-state types follow from the
>very meaning of the claim that mental states supervene on brain states",

No, as the notion of nomological necessity that I use is not an
epistemological one.  Nomological necessity simply requires a regularity
that carries appropriate counterfactual force.  Perhaps this is a simple
terminological difference; in any case, it's not relevant to the substantive
point under discussion.

>Does "strong AI", especially as characterized by
>Searle, make any epistemic claims?  Well, Searle writes: "One could
>summarize this view -- I call it `strong artificial intelligence', or
>`strong AI' -- by saying that the mind is to the brain, as the program is
>to the computer hardware."

Searle defines "strong AI" in different ways at different times.  However,
the definition he keeps coming back to is the claim that "an appropriately
programmed computer would literally *have* a mind" (in virtue of
implementing the appropriate program).  This is the only claim which I have
any interest in defending; furthermore, it's the claim that almost all of
Searle's arguments are concerned to refute.  The "program/hardware" claim
quoted above is far too loose to defend, and I probably don't believe it in
any case.  This is very clear in the article that started this discussion.

>Incidentally, would you care to explain how flawless
>performance could be modeled without modeling prescriptive inductive
>competence, assuming that you could suspend your disbelief in the latter?

I don't have any stake in modeling "flawless" performance.  I'm not at all
sure that it's possible.  It's certainly not required for the success of AI.

>Very well.  Am I allowed to conclude that you are retracting your earlier
>claims that "programs are a way of formally specifying causal structures",
>and that "physical systems which implement a given program *have* that
>causal structure, physically", given that the burden of determining the
>referent of the demonstrative pronoun (`that') falls not on the programmer,
>but on the engineer in charge of the program's implementation?

No.  As I've made clear a number of times, the role of the engineer is
essentially trivial as long as the notion of implementation is determinate.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."