From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!psinntp!scylla!daryl Tue Jan 21 09:26:56 EST 1992
Article 2864 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <1992Jan18.144220.11862@oracorp.com>
Date: 18 Jan 92 14:42:20 GMT
Organization: ORA Corporation
Lines: 45

David Chalmers writes:

> To set some possible positions out, consider the claims:

> (1) The right behaviour logically implies mentality.
> (2) The right behaviour empirically "implies" mentality.
> (3) Implementing the right program logically implies mentality.
> (4) Implementing the right program empirically implies mentality.

I don't understand these four alternatives. First of all, my
understanding of "A logically implies B" is that such a statement is
true only if (a) there is some generally-accepted theory from which
you can logically deduce B from A, or (b) it is true by definition,
such as "Being a bachelor logically implies being unmarried". No such
theory or definition connects behavior or programs to mentality, so it
seems to me that 1 & 3 are non-starters.

Also, I don't see the difference between 2 & 4. A program, to me, is a
specification of behavior, so I don't see how "behavior implies
mentality" is any different from "program implies mentality". Perhaps
the difference is that a program can specify "internal" states and
transitions that have no effect on behavior? That is certainly
possible, but I'm not sure I believe it. In my own introspection, I
can't imagine two noticeably different "internal states" that have no
possibility of affecting my behavior.
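
In a program, at least, such idle state is easy to arrange. Here is a
toy Python sketch (the machines and names are invented for
illustration, not anything from the thread) of two programs with
identical input/output behavior but different internal states:

    class MachineA:
        def step(self, inp):
            return inp.upper()        # echo the input in upper case

    class MachineB:
        def __init__(self):
            self.hidden = 0           # internal state, never observable

        def step(self, inp):
            self.hidden += 1          # a transition with no outward effect
            return inp.upper()        # identical behavior to MachineA

    # Both machines give the same outputs on every input sequence:
    a, b = MachineA(), MachineB()
    inputs = ["hello", "world"]
    assert [a.step(x) for x in inputs] == [b.step(x) for x in inputs]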

> I personally think that the case against 1 and 2 is made compellingly
> by the example of the giant lookup table -- a ridiculous example,
> impossible in practice but not in principle, but enough to make the
> case.  I think it's likely that any reasonable-in-practice
> mechanism that has the right behaviour will have mentality, however.

I agree that the giant lookup table is ridiculous as a way to
implement AI, but I don't understand why it is so obvious that such an
implementation would lack mentality. Your answer might be that it
would lack the internal states that real minds have, but I don't even
grant that: in the case of the lookup table, the internal state would
be coded as a location in the lookup table. It is certainly true that
this interpretation of internal state would not obey the same
transition rules as our own internal states, but what makes the one
"conscious processing" and the other not?

Daryl McCullough
ORA Corp.
Ithaca, NY
