From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Tue Jan 21 09:26:26 EST 1992
Article 2805 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Searle Agrees with Strong AI?
Message-ID: <1992Jan16.220144.8148@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Jan16.054716.14332@oracorp.com>
Date: Thu, 16 Jan 92 22:01:44 GMT
Lines: 67

In article <1992Jan16.054716.14332@oracorp.com> daryl@oracorp.com writes:

>Strong AI is simply the claim that a machine with the right behavior
>must therefore understand, which is logically equivalent to the claim
>that "correct behavior is not possible without understanding". So if
>you believe that correct behavior is not possible without understanding,
>then that justifies concentration on behavior, and not inner processes,
>intentionality, or whatever, because all those things are implied by
>having the right behavior.

This (a) mischaracterizes strong AI (which is not committed to there
being behavioural criteria for understanding), and (b) conflates logical
impossibility with empirical impossibility.  One has to distinguish the
claim that the right behaviour logically entails understanding (which
is the view that you're attributing, falsely, to strong AI) from
the claim that if the right behaviour occurs, then it's empirically
necessary that it will always be accompanied by understanding (which
is the view being imputed to Searle, again falsely, I think).

To set out some possible positions, consider the following claims:

(1) The right behaviour logically implies mentality.
(2) The right behaviour empirically "implies" mentality.
(3) Implementing the right program logically implies mentality.
(4) Implementing the right program empirically implies mentality.
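
In a more formal idiom -- my notation, introduced only to keep the
distinctions straight, with $\Box_L$ for logical necessity, $\Box_N$
for empirical/nomological necessity, $B$ for producing the right
behaviour, $P$ for implementing the right program, and $M$ for
mentality -- these read roughly:

(1) $\Box_L(B \rightarrow M)$
(2) $\Box_N(B \rightarrow M)$
(3) $\Box_L(P \rightarrow M)$
(4) $\Box_N(P \rightarrow M)$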

Strong AI, I take it, consists in claim 4.  I am incidentally taking
mentality in its strongest form, to include consciousness and so
on.  The positions on more limited forms of mentality (e.g. belief,
which need not be conscious) might be different.

Some possible positions:
Searle denies 1, 2, 3, 4.
I deny 1, 2, and 3 but accept 4.
Many strong AI advocates (Minsky?) accept 3 and 4, but deny 1 and 2.
Some others accept 1, 2, 3, and 4.

There are various other combinations possible (presumably the logical
behaviourists accepted 1 and 2 without being committed to 3 or 4;
analytic functionalists (Lewis/Armstrong/Dennett style) accept 3 and 4
without necessarily accepting 1 and 2).  I could also add the
corresponding claims 5 and 6 about the implications between *brains*
and mentality, but four is enough for now.

I personally think that the case against 1 and 2 is made compellingly
by the example of the giant lookup table -- a ridiculous example,
impossible in practice but not in principle, but enough to make the
case.  I think it's likely, however, that any reasonable-in-practice
mechanism that has the right behaviour will have mentality.
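
For concreteness, here is a toy sketch of the sort of mechanism the
thought experiment imagines (a hypothetical Python fragment of my own,
not anyone's actual proposal): every possible conversation history is
paired in advance with a canned reply, so the behaviour can be flawless
while the mechanism itself is no smarter than a dictionary.

# Toy "giant lookup table" conversant (illustrative only).
# A real TABLE would need an entry for every possible conversation
# history -- astronomically large, hence impossible in practice,
# though nothing rules it out in principle.
TABLE = {
    ("Hello.",): "Nice to meet you.",
    ("Hello.", "Nice to meet you.", "Do you understand me?"):
        "Of course I do.",
    # ... one entry per finite conversation history ...
}

def reply(history):
    """Look up the canned response for this exact conversation so far."""
    return TABLE.get(tuple(history), "Hmm, tell me more.")

history = []
for utterance in ["Hello.", "Do you understand me?"]:
    history.append(utterance)
    response = reply(history)
    history.append(response)
    print(utterance, "->", response)

Whatever apparent intelligence this displays lives in whoever authored
the table, not in the mechanism consulting it -- which is just the
intuition doing the work against 1 and 2.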

As for 3, this just seems obviously false to me.  I can't even see
how having a *brain* physically identical to mine logically implies
consciousness (although it may imply more limited mental states, such
as beliefs).  It seems perfectly conceptually coherent that one could
have such a brain without any subjective experience at all.  But
having a brain physically identical to mine presumably empirically
implies the presence of consciousness.  The only tenable way
to accept 3, it seems to me, is to deny that "consciousness" or
"subjective experience" refer to anything real, and to argue that
they simply represent conceptual confusions.  (This is commonly
known as the argument from feigned anesthesia, though that label may
be a little unfair.)

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


