From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!psinntp!scylla!daryl Tue Jan 28 12:15:11 EST 1992
Article 2970 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Intelligence Testing
Message-ID: <1992Jan21.210028.6756@oracorp.com>
Organization: ORA Corporation
Date: Tue, 21 Jan 1992 21:00:28 GMT

Jeff Dalton writes: (in response to Marvin Minsky)

> This whole point [whether a table-lookup program can be conscious]
> relies on "conscious processing" being something mysterious so that,
> for all we know, our coffee cups are conscious and just pretending
> not to be. Or some very simple program, nowhere near passing the
> Turing Test, is conscious, just not very bright.

That paragraph does not make a bit of sense to me. I take the
behaviorist Strong AI position as implying that there is *not*
anything intrinsically mysterious about consciousness, that there is
*not* any reason to suspect that coffee cups are conscious.

If you think it is ridiculous to assume that a coffee cup is
conscious, good for you. The Turing Test gives the same conclusion.

> If you want someone to prove it's not conscious, then maybe you're
> right: no one could do it (yet).  But suppose we eventually have
> a computational theory of mind that we think is adequate. How
> can you be so sure it will say table lookup is conscious?

Who said anything about being sure? If in the future someone thinks of
a convincing argument that the table lookup is not conscious, then
people will agree that the Turing Test is not sufficient to
demonstrate consciousness.
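For concreteness, the "table lookup" program under discussion is nothing more than a finite map from every possible conversation-so-far to a canned reply. A toy sketch (the entries here are hypothetical, of course; a table large enough to pass the Turing Test would need one entry per possible conversation prefix, an astronomical number):

```python
# Toy illustration of a "table lookup" conversational program:
# a finite map from the entire conversation history to a reply.
# The entries below are made up for illustration only.

TABLE = {
    (): "Hello.",
    ("Hello.", "Are you conscious?"): "I believe so. Are you?",
}

def reply(history):
    # history: sequence of all utterances so far, both speakers
    return TABLE.get(tuple(history), "I'd rather not say.")
```

The point at issue is whether a sufficiently large version of exactly this structure would count as conscious, not whether such a table could be built in practice.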

> If that's all that's required, then let's start dancing in the
> streets.  No need to do all that hard cog sci work - just imagine
> a very large table and wait for the hardware guys to catch up.

That also doesn't make a bit of sense to me. The proof that something
is, in principle, possible is not the same thing as achieving it (nor
is it the same as knowing *how* to achieve it). Many people believe
that controlled nuclear fusion, interstellar travel, and a cure for
AIDS are possible in principle, but that doesn't mean we are anywhere
close to achieving any of those things.

More to the point, there is a simple proof (by backward induction on
the finite game tree) that in the game of chess either there exists an
algorithm that can force at least a tie for white, or there exists an
algorithm that can force at least a tie for black. Does that mean that
the question of how to implement a good chess program has been solved?
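The existence argument can be made concrete on a game small enough to exhaust. A minimal Python sketch, using tic-tac-toe as a stand-in for chess (the argument is identical, the tree is just tiny), computes the game's value by the same backward induction:

```python
from functools import lru_cache

# Backward induction ("negamax") on the full tic-tac-toe game tree.
# Value is from the perspective of the player to move:
#   +1 = can force a win, 0 = can force at least a draw, -1 = loses.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    if winner(board) is not None:
        return -1              # opponent just completed a line: loss
    if '.' not in board:
        return 0               # board full with no line: draw
    other = 'O' if player == 'X' else 'X'
    best = -1
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + player + board[i + 1:]
            best = max(best, -value(child, other))
    return best

# Exhausting the tree proves neither side can force a win: the value
# of the opening position is 0, i.e. both sides can force a draw.
game_value = value('.' * 9, 'X')
```

The same induction proves the existence claim for chess; it just cannot be run, since the chess tree is far too large to exhaust. That is precisely the gap between an in-principle proof and a working program.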

Daryl McCullough
ORA Corp.
Ithaca, NY
