From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!uakari.primate.wisc.edu!ames!ncar!noao!arizona!gudeman Sun Dec  1 13:06:04 EST 1991
Article 1691 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1189 comp.ai.philosophy:1691
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!uakari.primate.wisc.edu!ames!ncar!noao!arizona!gudeman
From: gudeman@cs.arizona.edu (David Gudeman)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Awareness
Message-ID: <10028@optima.cs.arizona.edu>
Date: 27 Nov 91 20:18:25 GMT
Sender: news@cs.arizona.edu
Followup-To: sci.philosophy.tech
Lines: 75

After I posted my question about whether a table-lookup program could
be considered "aware" because it passed the Turing test, I got a
couple of replies by mail by people who answered that it _is_ aware.
Although I admire their wish to avoid consuming more bandwidth on a
topic that has already generated a lot, I think that this answer
deserves detailed consideration in a posting, since others may have
answered the same way.
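
To be concrete about what is at issue, here is a minimal sketch (in
Python, purely for illustration -- the table entries and names below
are my own placeholders, not anything from the original question) of
what such a table-lookup program amounts to: every possible
conversation prefix is mapped, in advance, to a canned reply.

```python
# A sketch of the table-lookup conversant under discussion.  The table
# here holds only two placeholder entries; a "real" one would need an
# entry for every possible exchange up to some bounded length.

TABLE = {
    ("Hello.",): "Hello yourself.",
    ("Hello.", "Hello yourself.", "How do you feel?"): "A bit tired, frankly.",
}

def reply(transcript):
    """Look up the whole conversation so far; no state, no inference."""
    return TABLE.get(tuple(transcript), "I don't follow you.")

history = ["Hello."]
history.append(reply(history))       # "Hello yourself."
history.append("How do you feel?")
print(reply(history))                # "A bit tired, frankly."
```

The point of the sketch is that the program's entire behavior is a
single lookup; whatever "awareness" one attributes to it must be
attributed to a static table.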

Let me ask this: if the machine is aware, then does it have rights?
If I took an axe and smashed the machine and the disk containing the
table, would I be committing murder?  Should the machine be allowed to
vote?  (If so, what's to prevent me from creating a hundred million
such machines that just happen to want me as president?)  On a less
extreme note: if the machine responded to one of your statements in
such a way that you seemed to have hurt its feelings, would you feel
bad about it (knowing that you were talking to a table-lookup
program)?

Unless you answered "yes" to each of the above questions, you
either do not really believe the machine is aware, or you have a
callous --even brutal-- disregard for the rights and sensibilities of
another conscious being.

If you claim that such a thing is "aware", then you are almost
certainly using the term differently from the way I intend it (unless
you are a mystic who imputes awareness to all objects).  When I say
"aware", I am not using the word in the sense that one might say that
a rock is aware.  That sense of the word is an analogy of sorts, for
people in general don't really believe that the rock actually feels,
wants, or introspects --rather, you say that the rock is aware simply
because it "responds" to its environment, because it is affected.

But this is a very poor thing compared to what humans experience, as a
moment of introspection will inform you.  When you see an object, you
don't merely react to the object as a rock would "react" to gravity by
falling.  Rather, you are aware that you are aware.  You feel, want,
and introspect.  You experience something in a personal sense that,
presumably, merely physical objects do not experience.  No fair
claiming that the rock is different from the computer unless you can
explain what the difference is _and_ how that difference can lead to
human-like awareness.

Harley Davis asks the question of whether we should treat artificial
creatures as moral agents.  A good question.  My answer is that as
long as I can explain its actions as a purely physical chain of causes
and effects, I have no reason to suppose that there is anything more
going on.

I know with absolute certainty that there is "more going on" in my own
mind.  I believe that there is "more going on" in other humans, but
not just because they pass the Turing test --for there are many people
who could not pass the Turing test due to mental disorders (or perhaps
even due just to social differences).

Suppose that I designed a computer that could pass the Turing test as
an idiot.  Or suppose that it passed the Turing test as a person with
a very poor understanding of the language being used.  Or suppose it
passed the test as an extremely overbearing person who completely
ignored any statement that strayed from the topic of medical
diagnosis.  Suppose the machine can fool children but not adults.
Suppose it can fool adults but not trained psychologists.  Suppose it
fools some percentage x of the testers and fails to fool the rest.
Just what is considered "passing the Turing test"?

And whatever level you put success at, why should I believe that
something unusual happened between that level and the next lower level
(or rather, that level minus epsilon)?  You see, the Turing test is
not a yes/no test, nor even a discrete one; it has a continuum of
possible results over several dimensions.  How can you pick an
arbitrary point in the result space and say that at this point the
machine is aware in the sense that a human is aware?
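
The arbitrariness of the cutoff can be made concrete (again in Python,
with a pass mark and tester counts that are entirely my own invented
illustration): whatever fraction of fooled testers you declare to be
"passing", two machines one tester apart land on opposite sides of the
line, though nothing about awareness plausibly changed between them.

```python
# The test yields a fraction of testers fooled -- a point in a
# continuum -- and "passing" requires drawing a line somewhere in it.

PASS_MARK = 0.70   # arbitrary -- nothing in the test itself fixes it

def passes_turing_test(testers_fooled, testers_total):
    """Declare a pass if at least PASS_MARK of the testers were fooled."""
    return testers_fooled / testers_total >= PASS_MARK

# Fooling 70 of 100 testers "passes"; fooling 69 of 100 does not,
# i.e. the pass/fail boundary sits at threshold minus epsilon.
print(passes_turing_test(70, 100))  # True
print(passes_turing_test(69, 100))  # False
```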
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman
