Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1896 comp.ai.philosophy:2881
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!news.cs.indiana.edu!arizona.edu!arizona!gudeman
From: gudeman@cs.arizona.edu (David Gudeman)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: The Turing Test Argument
Message-ID: <11629@optima.cs.arizona.edu>
Date: 18 Jan 92 22:07:41 GMT
Article-I.D.: optima.11629
Sender: news@cs.arizona.edu
Followup-To: sci.philosophy.tech
Lines: 153

Given that so many people find the Turing test intuitively plausible
and others find it intuitively implausible, I thought it might help to
illuminate the topic if someone were to spell out the actual argument
of the Turing test in a logical form.  Below, I will give what I think
the argument is, and it should be obvious why I think the whole idea
is illogical.  But perhaps there is some other, logical argument for
the validity of the Turing test that I don't know of.  Would anyone
care to try?

In what follows I am going to use the word "conscious" in the
following sense: first, a conscious entity is one who is aware of
existence in the sense that a human is.  This does not simply mean
that the entity is a _part_ of existence (and therefore reacts
according to natural laws), but that the entity can reflect on
existence.  I do not require that this reflection be one of free will
(it may be causally determined), but that the entity must reflect and
be aware of reflecting.  This implies that the entity has beliefs, for
you cannot reflect without at the very least believing that you are
reflecting.

Second, a conscious entity makes choices.  Thus, if the entity is
reflecting on a specific element of existence, then the entity has
chosen to reflect on that element.  I do not require that choices have
causal force, only that it seems to the entity that the choice has
causal force.

Third, a conscious entity is a moral agent in the sense that the
entity has moral responsibilities and that other moral agents have
responsibilities with respect to the entity.  Specifically, if you
truly believe that an entity is conscious like a human, then either
(1) you are a sociopath, or (2) you believe that it would be wrong for
you to destroy the entity.  And furthermore, you would hold the entity
personally responsible for any harm it deliberately did to you.  (Only
conscious entities can do things deliberately.)

If you deny any of the above attributes of consciousness, then you are
not talking about the same thing I am.  I'll talk about the
operational definition of consciousness below.

A person who passes the Turing test demonstrates communicative
behavior.  So I'm going to use "communicates" as an abbreviation for
"passes the Turing Test", "communication" as "passing the Turing
test", etc.  In the following, it means no more and no less than that.
Please do not take this word out of context.

The most basic argument, as I see it, is

(1a) Consciousness in an entity is a cause of communication in that entity.
(1b) Computer X communicates.
therefore
(1c) Computer X was caused to communicate by consciousness in computer X.

The fallaciousness of this argument should be apparent.  You cannot
reason from an effect to a cause without ruling out all other causes.
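
Schematically, if you read "is a cause of" loosely as a sufficient
condition (a simplification, but it exposes the form), the argument is

    \forall x\,(\mathrm{Conscious}(x) \to \mathrm{Communicates}(x))
    \mathrm{Communicates}(X)
    \therefore\ \mathrm{Conscious}(X)

which is the textbook fallacy of affirming the consequent: the
premises are jointly satisfiable with \neg\mathrm{Conscious}(X),
namely whenever some other sufficient cause produces the
communication.
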
But suppose we change (1a) to

(2a) Consciousness in an entity is the only known cause of
     communication in that entity.

and combine with (1b) to conclude

(2c) Computer X was probably caused to communicate by consciousness in
     computer X.
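
To put the abductive step in rough probabilistic dress (nothing hangs
on the exact numbers):

    P(\mathrm{Consc} \mid \mathrm{Comm}) =
      \frac{P(\mathrm{Comm} \mid \mathrm{Consc})\,P(\mathrm{Consc})}
           {P(\mathrm{Comm} \mid \mathrm{Consc})\,P(\mathrm{Consc})
            + P(\mathrm{Comm} \mid \neg\mathrm{Consc})\,P(\neg\mathrm{Consc})}

If consciousness is the only known cause of communication, the judge
has no grounds for assigning a substantial value to
P(\mathrm{Comm} \mid \neg\mathrm{Consc}), and the posterior comes out
near 1 -- which is all that (2c) claims.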

This conclusion is safe as far as it goes, and it is perfectly
reasonable for the person conducting the Turing test -- that is, for
someone who does not know that the communicator is a computer (and
does not know that computers can communicate).  But the outside
observer can add the proposition

(2d) Computer X is caused to communicate by the set of states it
     passes through in virtue of its construction and programming.

This statement immediately conflicts with (2a).  (I'll note in passing
that this is the difference between applying the Turing test to a
human and to a machine.  For a human, nothing like (2d) applies unless
you assume reductive materialism, and even then it does not have the
status of an observation.)
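
To make (2d) concrete, here is a toy communicator of the relevant kind
-- a deliberately trivial stand-in, in Python, for a real Turing test
contestant.  Its every reply is fixed by its program, its current
state, and its input, with no gap left for a further cause to fill:

    # A toy "communicator": a deterministic finite-state transducer.
    # Each reply is fully determined by (program, state, input), which
    # is all that proposition (2d) asserts of computer X.
    TRANSITIONS = {
        ("start", "hello"):  ("greeted",  "Hello!  How are you?"),
        ("greeted", "fine"): ("chatting", "Glad to hear it."),
    }

    def respond(state, message):
        # The same (state, message) pair always yields the same reply.
        return TRANSITIONS.get((state, message), (state, "Tell me more."))

    state = "start"
    for msg in ["hello", "fine", "what do you believe?"]:
        state, reply = respond(state, msg)
        print(reply)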

Now there are two solutions to this conflict.  The most obvious
solution is to drop proposition (2a).  This is the logical thing to do
when a belief (2a) conflicts with an observation (2d).  The result is
an immediate abandonment of the Turing test.

Alternatively, you can strengthen (2a) into:

(3a) Consciousness in an entity is the only possible cause of
     communication in that entity.

This is the only way to justify keeping the premise in the face of
(2d).  The result is something like an axiom.  It is based on
experience, to be sure, but it was not abandoned in the face of
counter-experience, so you cannot claim that it is inductively
generated.  The plausibility of strong AI seems to me to rest largely
on accepting this axiom as it stands (or (4a) later) -- a move that I
see no justification for.

In any case, it is now necessary to find some way to avoid the
conflict between (3a) and (2d).  One way is to say that consciousness
caused the computer states, which in turn caused the communication,
so that the communication was caused indirectly by consciousness.
This is trivially true in the sense that the computer was designed and
programmed by conscious entities (or semi-conscious ones if they
were true hackers).  But we are talking about consciousness _in the
computer_ as I made explicit in ([1-3]a), and this will not do for
that.

You cannot restrict this approach by saying that consciousness in the
computer caused the state, since the states of the computer were
caused by external influences.  There is no gap in the causal chain to
be explained, and so the insertion of another cause is indefensible.

The same thing happens if you claim that the states in the computer
caused consciousness, which in turn caused the communication.  You can
claim that consciousness is some sort of proximate cause that must be
inserted between observed events to connect things.  But then you will
have to explain why the proximate cause of communication should be any
different from the proximate cause of anything else.

Finally, you can resolve the conflict between (2d) and (3a) by saying
that the two causes are the same thing.  Then you must explain why the
two sorts of things have such different properties.  Why does
consciousness appear to the conscious entity to be so different from a
sequence of states?  Why can you not give even a hand-waving sketch of
a possible -- no matter how implausible -- model of how consciousness
might be related in any way to a sequence of states?  What is so
beguiling about axiom (3a) as to prompt anyone to keep it in the face
of such difficulties?

Alternatively, you can retreat from (3a) to

(4a) Consciousness in an entity always accompanies the cause of
     communication in the entity.

I suppose that this is intended to be more plausible than (3a);
perhaps someone can explain why.  In any case, it still involves you
in difficulties similar to those of (3a).  (4a) and (3a) also both
imply determinism (the idea that free will is an illusion), which has
its own problems.

Some people when faced with these difficulties try to justify their
belief in (3a) or (4a) by taking a behavioral definition of
consciousness.  In other words, an entity is conscious if it behaves
as though it were conscious.  The problem with this definition is that
it changes the definition of "conscious".  You have no way of knowing
(without assuming (3a) or (4a)) that this behavior is associated in
any way with what you experience as consciousness.
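
Formally, the move amounts to the stipulation

    \mathrm{Conscious}_{b}(x) \;\stackrel{\mathrm{def}}{=}\; \mathrm{Communicates}(x)

under which (3a) and (4a) come out true by definition rather than by
evidence -- but nothing licenses identifying \mathrm{Conscious}_{b}
with consciousness in the sense I gave at the top of this article.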

Would anyone care to construct a different argument for the validity
of the Turing test, or try to justify (3a) or (4a)?
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman