From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!uwm.edu!ogicse!emory!gwinnett!depsych!rc Tue Jan 21 09:27:12 EST 1992
Article 2893 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!uwm.edu!ogicse!emory!gwinnett!depsych!rc
From: rc@depsych.Gwinnett.COM (Richard Carlson)
Newsgroups: comp.ai.philosophy
Subject: Turing Test Finally Demystified
Message-ID: <2a3seB4w164w@depsych.Gwinnett.COM>
Date: 19 Jan 92 16:29:00 GMT
Article-I.D.: depsych.2a3seB4w164w
Lines: 200

David Gudeman writes:
>Given that so many people find the Turing test intuitively plausible
>and others find it intuitively implausible, I thought it might help to
>illuminate the topic if someone were to spell out the actual argument
>of the Turing test in a logical form.  Below, I will give what I think
>the argument is, and it should be obvious why I think the whole idea
>is illogical.  But perhaps there is some other, logical argument for
>the validity of the Turing test that I don't know.  Would anyone care
>to try?

Nope.  I think you got it all.  To relate your accomplishment to
another old thread, doesn't this reduction of the arguments of the
Turing Test to an Aristotelian syllogistic form illustrate a
comment Kant once made that there are very few problems of
philosophy which can't be completely cleared up by the simple
expedient of restating them in strict syllogistic form? Apparently
Aristotle's logic models the world so well that for most purposes,
even highly technical ones, it's all anyone needs. That seems to
relate to the old Nilges-Zeleny thread about plain, ordinary,
everyday (Aristotelian) logic vs. the high-toned, fancy schmancy,
superturbocharged logic with all the latest bells and whistles
which Zeleny claims is desirable.  Doesn't this analysis of yours
score a few points for Mr. Nilges' side?

>In the following I am going to use the word "conscious" in the
>following sense: first, a conscious entity is one who is aware of
>existence in the sense that a human is.  This does not simply mean
>that the entity is a _part_ of existence (and therefore reacts
>according to natural laws), but that the entity can reflect on
>existence.  I do not require that this reflection be one of free will
>(it may be causally determined), but that the entity must reflect and
>be aware of reflecting.  This implies that the entity has beliefs, for
>you cannot reflect without at the very least believing that you are
>reflecting.
>
>Second, a conscious entity makes choices.  Thus, if the entity is
>reflecting on a specific element of existence, then the entity has
>chosen to reflect on that element.  I do not require that choices have
>causal force, only that it seems to the entity that the choice has
>causal force.

I think this is all pretty non-controversial.  And clearly stated.

>Third, a conscious entity is a moral agent in the sense that the
>entity has moral responsibilities and that other moral agents have
>responsibilities with respect to the entity.  Specifically, if you
>truly believe that an entity is conscious like a human, then either
>(1) you are a sociopath, or (2) you believe that it would be wrong for
>you to destroy the entity.  And furthermore, you would hold the entity
>personally responsible for any harm it deliberately did to you.  (Only
>conscious entities can do things deliberately).
>
>If you deny any of the above attributes of consciousness, then you are
>not talking about the same thing I am.  I'll talk about the
>operational definition of consciousness below.

I don't want to start another thread, but you've restated Kant's
argument (Kant again, maybe you're a Kantian 8^>) for one form of
the categorical imperative, the notion that you are in some sense
"logically" required to value another sentient entity.  I've never
understood the logical force of that argument.  Just because you
happen to be an entity similar to me I don't see why that
"logically" precludes my killing you, cooking you, and eating you
if I happen to be hungry -- or even if I just happen to be tired
of eating beef, chicken and veal and want a bit of a change.  This
allegedly logical necessity seems to reduce to a strong form of
Humean "empathy" or "sympathy," which is a non-logical mechanism
that accomplishes pretty much the same end.  It would certainly be
odd if from the perspective of the late Twentieth Century we came
to see a near identity between the Kantian and the Humean
varieties of Enlightened thought, the one usually called
"rational" and the other "empirical," and considered by their
contemporaries to be radically different.  However, that was
really just an aside.  I think most sentient people (or other
beings) will accept your definitions more or less as you've laid
them out.

>A person who passes the Turing test demonstrates communicative
>behavior.  So I'm going to use "communicates" as an abbreviation for
>"passes the Turing Test", "communication" as "passing the Turing
>test", etc.  In the following, it means no more and no less than that.
>Please do not take this word out of context.
>
>The most basic argument, as I see it is
>
>(1a) Consciousness of an entity is a cause of communication in that entity.
>(1b) Computer X communicates.
>therefore
>(1c) Computer X was caused to communicate by consciousness in computer X.
>
>The fallaciousness of this argument should be apparent.  You cannot
>reason from an effect to a cause without ruling out all other causes.
>But suppose we change (1a) to

Aristotle's logic certainly seems to mirror reality very well
here.   A lot of people have been moving toward this formulation,
but looking at the skeleton of logic without the flesh of rhetoric
makes it so much clearer.
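
To make the fallacy completely mechanical, here is a tiny
brute-force check -- a sketch in Python, and only a sketch: it
reads "consciousness of an entity is a cause of communication"
loosely as the material conditional C -> M, which flattens the
causal talk considerably.  The check confirms that (1a) and (1b)
can both hold while (1c) fails, so the inference is invalid:
plain old affirming the consequent.

    # C = "computer X is conscious", M = "computer X communicates"
    from itertools import product

    def implies(p, q):
        return (not p) or q

    def entails(premises, conclusion):
        """True iff every truth assignment satisfying all the
        premises also satisfies the conclusion."""
        return all(conclusion(C, M)
                   for C, M in product([True, False], repeat=2)
                   if all(p(C, M) for p in premises))

    # (1a) C -> M, (1b) M, therefore (1c) C?
    print(entails([lambda C, M: implies(C, M),   # (1a)
                   lambda C, M: M],              # (1b)
                  lambda C, M: C))               # prints False

    # The countermodel is C = False, M = True: something other
    # than consciousness produced the communication.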

>(2a) Consciousness of an entity is the only known cause of
>     communication in that entity.
>
>and combine with (1b) to conclude
>
>(2c) Computer X was probably caused to communicate by consciousness in
>     computer X.
>
>This conclusion is safe as far as it goes, and is perfectly reasonable
>for the person taking the Turing test.  That is, the person who does
>not know the communicator is a computer (and does not know that
>computers can communicate).  But the outside observer can add the
>proposition
>
>(2d) Computer X is caused to communicate by the set of states it
>     passes through in virtue of its construction and programming.
>
>This statement immediately conflicts with (2a).  (I'll note in passing
>that this is the difference between applying the Turing test to a
>human and to a machine.  For a human nothing like (2d) applies unless
>you assume reductive materialism, and then it does not have the status
>of an observation).

That of course is what sets a real computer undergoing a Turing
Test apart from a sentient being in a Turing-Test-like situation.
But it is hard to think of the "computer" in quite this way.
Penrose, in describing the Turing Test, pictures a computer
breaking out into a sweat when confronted with a statement
that "it" "suspects" may be nonsensical to a human being
familiar with everyday reality.  Clearly he can't quite help
seeing the
computer as more like a sentient non-human, say ALF, the furry
little alien that lived with the social worker and his family, or
Mr. Ed, Alan Young's talking horse, communicating over something
like a telephone line and trying to convince his interlocutor that
he is human.  Or even like a gay college boy surrounded by drunk,
homophobic jocks who will beat him up if they suspect he is not
exactly like them.

So we have to _force_ ourselves to take that logical point of
view.

>Now there are two solutions to this conflict.  The most obvious
>solution is to drop proposition (2a).  This is the logical thing to do
>when a belief (2a) conflicts with an observation (2d).  The result is
>an immediate abandonment of the Turing test.

Not necessarily.  You could say that the computer simulates
(mimics) consciousness.

>Alternatively, you can strengthen (2a) into:
>
>(3a) Consciousness in an entity is the only possible cause of
>     communication in that entity.
>
>This is the only way to justify keeping it around in the face of (2d).
>The result is something like an axiom.  It is based on experience, to
>be sure, but it was not abandoned in the face of counter-experience so
>you can not claim that it is inductively generated.  The plausibility
>of strong AI seems to me to rest largely on accepting this axiom as it
>stands (or (4a) later) -- a move that I see no justification for.
>
>(4a) Consciousness in an entity always accompanies the cause of
>     communication in the entity.
>
>I suppose that this is intended to be more plausible than (3a);
>perhaps someone can explain why.  In any case, it still involves you
>in similar difficulties to (3a).  (4a) and (3a) also both imply
>determinism (the idea that free will is an illusion) which has its own
>problems.
>
>Some people when faced with these difficulties try to justify their
>belief in (3a) or (4a) by taking a behavioral definition of
>consciousness.  In other words, an entity is conscious if it behaves
>as though it were conscious.  The problem with this definition is that
>it changes the definition of conscious.  You have no way of knowing
>(without assuming (3a) or (4a)) that this behavior is associated in
>any way with what you experience as consciousness.

Psychologists, at one time the avatars par excellence of
"behaviorism," discussed similar issues some seventy years ago
in the case of supposed intelligence in animals, and "resolved"
the question in a manner that was more or less the opposite of
Turing's (and more like the Houdini test for spiritualists,
though various Turing-Test-like paradigms spring readily to
mind): they always looked for a specifiable stimulus that they
could presume to be responsible for the observed behavior, thus
returning to the Cartesian notion of animals as "machines."
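
Incidentally, the same sort of brute-force check (with the same
caveat that "only possible cause" is being flattened into a bare
material conditional, here M -> C) shows what strengthening (2a)
into (3a) buys you: the argument becomes modus ponens, which is
valid -- but only because the disputed conclusion has in effect
been written into the premise.

    # Same helpers as in the earlier sketch.
    from itertools import product

    def implies(p, q):
        return (not p) or q

    def entails(premises, conclusion):
        return all(conclusion(C, M)
                   for C, M in product([True, False], repeat=2)
                   if all(p(C, M) for p in premises))

    # (3a) read as M -> C, plus (1b) M, therefore (1c) C.
    print(entails([lambda C, M: implies(M, C),   # (3a)
                   lambda C, M: M],              # (1b)
                  lambda C, M: C))               # prints True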

>Would anyone care to construct a different argument for the validity
>of the Turing test, or try to justify (3a) or (4a)?

No.  But the Turing Test could still be a practical (as opposed
to theoretical) criterion.  It would guide research by giving a
reasonable (although not "logical") goal for different models
(e.g., algorithmic vs. pattern recognition) to shoot at.

--
Richard Carlson        |    rc@depsych.gwinnett.COM
Midtown Medical Center |    {rutgers,ogicse,gatech}!emory!gwinnett!depsych!rc
Atlanta, Georgia       |
(404) 881-6877         |


