From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!zaphod.mps.ohio-state.edu!think.com!snorkelwacker.mit.edu!news.media.mit.edu!nlc Mon May 25 14:05:19 EDT 1992
Article 5643 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!zaphod.mps.ohio-state.edu!think.com!snorkelwacker.mit.edu!news.media.mit.edu!nlc
From: nlc@media.mit.edu (Nick Cassimatis)
Subject: Re: AI and morality
Message-ID: <1992May14.053930.22599@news.media.mit.edu>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
References: <1992May12.091534.22317@norton.com> <1992May13.160622.13958@mp.cs.niu.edu> <1992May13.174643.17539@organpipe.uug.arizona.edu>
Date: Thu, 14 May 1992 05:39:30 GMT
Lines: 71

In article <1992May13.174643.17539@organpipe.uug.arizona.edu> bill@NSMA.AriZonA.EdU (Bill Skaggs) writes:
>In article <1992May13.160622.13958@mp.cs.niu.edu> 
>rickert@mp.cs.niu.edu (Neil Rickert) writes:
>>
>>  The RIGHT place to start thinking about morality is in some newsgroup
>>OTHER THAN comp.ai.philosophy.
>>
>  I disagree.  The fundamental problem of philosophy is "What
>should I do now?".  It applies to AI in numerous ways.  One
>is, "Assuming that we *can* build AIs, should we?"; another,
>if the answer is yes, is, "What kind should we build?".
>
>  More technically, it is important to think about what
>morality is, in order to be able to build it into the
>AIs that we (eventually) create.
>
>	-- Bill

One of the questions I like to ask when I hear discussions about
giving rights to an animal, fetus, robot or something else is: Why
should we give rights to humans?  It is taken as a given that
rights exist absolutely for humans, and the arguments are over whether
they should be observed for other entities.  But this assumption (even
if we find it to be true without giving ourselves a verbal hernia) is
keeping many of us from asking the really hard but crucial (and
fascinating!) questions: How do moral attitudes develop in children?
What are the cultural and familial forces that cause changes in moral
attitudes across generations and cultures?  How do we incorporate
morality into AI?  (Bill's question.)  As Bill also pointed out, part
of morality's function is to deal with the question "What should I do
now?" which is one that any robot designer has to eventually come to
terms with.

That glory, guilt, shame, reward, punishment, moral attitudes,
indignation, prudishness, and righteousness exist; that they are shot
through our thought, language, behavior, culture and history -- all these
facts and many, many more suggest a special structuring of human
beings and of their interaction that we will completely miss if we
insist on falling into the quagmire formed by much of the present
discussion.  As long as this continues, we are missing the immense
fruits to be gained (for AI and for our own lives) by answering the
really juicy question: "What is it about the structure of human beings
that makes morality possible?"

How can we devote so much attention to trivialities when in the
process we ignore such a rich field of phenomena?

There have been some good first steps on this group towards an
understanding of morality: discussion of memes and of the instincts
that drive moral formation and evolution.  But they practically form a
subset of measure zero within the whole discussion -- instead, they
should be dominating the discussion: Which particular memes do we need
to posit to begin to understand the pervasiveness and diversity of
moral attitudes?  How can we understand the origins of morality as a
sublimation of the attitudes formed out of early bartering (as
Nietzsche suggested)?  Or how do verbal patterns find their way into our
behavior?  (A question that Foucault discussed, but with so much
devotion to his political agenda that the true value of his
contribution is obscured, not only by himself, but by creatures such
as literary critics.)

If we're at all serious about wanting to understand, duplicate, and
ultimately surpass natural intelligence, then arguments about a
robot's claim to rights resting on its capacity to suffer, or anything
else like that, are not a serious step.  What really belongs on
something like comp.ai.phil are discussions concerning the sub-personal,
personal and social structures and mechanisms behind the
phenomenon of morality.  The other stuff is better saved for something
like comp.ai.politics.

-Nick


