From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!psinntp!scylla!daryl Tue Jan 21 09:27:24 EST 1992
Article 2915 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Intelligence Testing
Message-ID: <1992Jan20.161508.11719@oracorp.com>
Organization: ORA Corporation
Date: Mon, 20 Jan 1992 16:15:08 GMT

David Chalmers writes:

> >> (1) The right behaviour logically implies mentality.
> >> (2) The right behaviour empirically "implies" mentality.
> >> (3) Implementing the right program logically implies mentality.
> >> (4) Implementing the right program empirically implies mentality.

> >I don't understand these four alternatives. First of all, my
> >understanding of "A logically implies B" is that such statements are
> >only true if (a) there is some generally-accepted theory from which
> >you can logically deduce B from A, or (b) it is true by definition,
> >such as "Being a bachelor logically implies being unmarried". There
> >are no such definitions or theories, so it seems to me that 1 & 3 are
> >impossible.

> What I'm talking about is truth by conceptual necessity -- i.e.
> propositions such that denial thereof would imply misuse of the
> concepts involved.

This is what I mean by saying "true by definition".

> Maybe you're invoking the Quinean denial of the analytic/synthetic
> distinction,

No, I agree that there is a distinction between analytic and synthetic
statements, and I think that "A logically implies B" means that "A
implies B" is analytic.

> If you really think that 1 and 3 are out of bounds immediately, you'll
> have a lot of arguing to do.  Any number of philosophers, from Ryle
> through Lewis and Armstrong to Dennett have thought that the concept
> of mentality is such that something like 3 (in Ryle's and maybe
> Dennett's case, 1 also) holds.  Maybe they were wrong, but they weren't
> stupid.

I would say that they are neither wrong nor stupid, but that they are
assuming a more precise meaning for "mentality" than we are able to
agree on in this newsgroup.

> I recommend Lewis's "Psychophysical and theoretical identifications",
> Australasian Journal of Philosophy 50:249-58, 1972, for such a
> conceptual analysis of mentality.  I also recommend Horgan's
> "Supervenience and cosmic hermeneutics", Southern Journal of Philosophy
> Supplement 22:19-38, 1984, for a nicely argued case that *all* facts
> follow from physical facts via conceptual necessity.

That *can't* be the case for statements involving words without
agreed-upon meanings.

> >Also, I don't see the difference between 2 & 4. A program, to me, is a
> >specification of behavior, so I don't see how "behavior implies
> >mentality" is any different from "program implies mentality".
> 
> Well, this is simply false, I think.  Consider the two programs:
> 
> 1. print "1"                    2. for i:=1 to 6 do
>    print "2"                          if 6 mod i = 0 then print (i);
>    print "3"
>    print "6"
> 
> These have the same behaviour, but different programs.  A program
> doesn't just specify behaviour, it puts strong constraints on how
> that behaviour comes about.

I agree that there can be more than one program with the same
behavior, but I don't see why you would consider one to be capable of
understanding, and the other not. The program is a way of telling the
computer what to do, and I don't see how "mentality" can be attributed
to the way I tell it. You yourself have said that mentality is not in
the program, it is in the implementation.

> >I agree that the giant lookup table is ridiculous as a way to
> >implement AI, but I don't understand why it is so obvious that such an
> >implementation would lack mentality. Your answer might be that it
> >would lack the internal states that real minds have, but I don't even
> >grant that: in the case of the lookup table, the internal state would
> >be coded as a location in the lookup table.
> 
> It's not that it has no internal states, the problem is more that it has
> trivial internal states, with utterly uninteresting causation going on
> between one statement and the next.

I think this is very strange. What makes a state trivial or not trivial?
And what makes one "causation" uninteresting and another interesting? The
goal of Turing's invention of "Turing machines" was to reduce the behavior
of complex machines to a repeated sequence of entirely trivial operations:
(1) look at the current symbol, X.
(2) examine the current state, S.
(3) look up, in the table of all possible transitions, the transition for
state S reading symbol X.
(4) as that transition directs, overwrite the current symbol, move left or
right, and go to the appropriate new state. Go to (1).

This is, off the top of my head, the operation of a Turing machine.
The transitions are utterly trivial, exactly comparable to the
transitions of the lookup-table machine. All the complexity is in the
table of state transitions, and in the initial tape. The table of transitions
corresponds to the table in my thought-experiment, and the tape corresponds
to the conversation.
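
To make this concrete, here is a minimal sketch in Python of exactly the
loop described above; the particular transition table and tape are made up
purely for illustration. The step itself is a bare table lookup; everything
that distinguishes one machine from another is data in the table and on the
tape.

    def run_turing_machine(table, tape, state="start", head=0, blank="_"):
        # table maps (state, symbol) -> (new state, symbol to write, move),
        # where move is -1 (left), +1 (right) or 0 (stay put).
        cells = dict(enumerate(tape))            # sparse tape: position -> symbol
        while state != "halt":
            symbol = cells.get(head, blank)      # (1) look at the current symbol X
            key = (state, symbol)                # (2) examine the current state S
            if key not in table:                 # no transition listed: just stop
                break
            state, write, move = table[key]      # (3) one trivial table lookup
            cells[head] = write                  # (4) overwrite, move, change state
            head += move
        return "".join(cells[i] for i in sorted(cells))

    # Toy table: flip every bit on the tape, then halt at the first blank.
    flip_bits = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt",  "_",  0),
    }
    print(run_turing_machine(flip_bits, "0110"))    # prints 1001_

Nothing in the loop itself distinguishes a machine that flips bits from one
that, given a suitable table and tape, carries on a conversation; the
difference is entirely in the data.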

So if you are going to deny that the table lookup machine is capable
of understanding because of its triviality, would you also deny that
a Turing machine is capable of understanding?
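
For comparison, here is the same kind of sketch for the lookup-table machine
itself; the entries are placeholders, since Block's table would contain one
reply for every possible conversation prefix up to some fixed length. Its only
"internal state" is the conversation so far, which serves as the index into
the table.

    REPLIES = {
        (): "Hello.",
        ("Hello.", "Hi, how are you?"): "Fine, thanks. And you?",
    }

    def lookup_machine(history):
        # The only "internal state" is the conversation so far, used as an index.
        return REPLIES.get(tuple(history), "I have nothing to say to that.")

    print(lookup_machine([]))                                  # Hello.
    print(lookup_machine(["Hello.", "Hi, how are you?"]))      # Fine, thanks. And you?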

> Furthermore, the content of each statement that the system utters
> could be arbitrarily changed without affecting the causal structure
> of the system at all.

I still don't know what you mean by "causal structure" here. Certainly
changing the database of statements will alter what statements are
"caused" by what inputs. If you mean at the level of machine
processing, I agree, the machine works the same, regardless of what
the database says.  However, the same thing is true of computers; the
CPU mindlessly performs fetch, store, branch and arithmetic
operations, and couldn't care less whether it is executing an AI
program or a random sequence of bytes. The intelligence (if there is
any) is not in the CPU, just as it is not in the lookup machinery.
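
Here, for instance, is a toy fetch-decode-execute loop in Python (the
instruction set is made up, and a real CPU is vastly more complicated); the
point is only that the loop does the same trivial thing whatever the
instruction stream happens to encode.

    def run_cpu(program, memory):
        # program: list of (op, arg) pairs; memory: dict with an accumulator.
        pc = 0
        while 0 <= pc < len(program):
            op, arg = program[pc]                # fetch
            if op == "load":                     # decode and execute, mindlessly
                memory["acc"] = arg
            elif op == "add":
                memory["acc"] += arg
            elif op == "store":
                memory[arg] = memory["acc"]
            elif op == "jump":
                pc = arg
                continue
            pc += 1
        return memory

    # The loop is equally happy with a "meaningful" program...
    print(run_cpu([("load", 2), ("add", 3), ("store", "x")], {"acc": 0}))
    # ...or with an arbitrary jumble of operations.
    print(run_cpu([("add", 7), ("load", 0), ("add", 7)], {"acc": 0}))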

> Of course, I can't *prove* that such a system lacks consciousness,
> any more than I could prove that a rock lacks consciousness.

I don't care about proof, I would like to see some plausibility
argument. Why is the table-lookup consciousness any less plausible
than computer program consciousness? (As I said, I don't think that
what a CPU is doing is any less trivial than a table lookup.)

> But it certainly seems deeply implausible. For an in-depth argument
> about why a look-up table would lack mentality, see Block, "Psychologism
> and behaviorism", Philosophical Review 90:5-43, 1981.

Okay.

Daryl McCullough
ORA Corp.
Ithaca, NY