Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!mips!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!cannelloni.cis.ohio-state.edu!chandra
From: chandra@cannelloni.cis.ohio-state.edu (B Chandrasekaran)
Subject: Re: Morality and artificial minds
Message-ID: <1992May12.135033.6650@cis.ohio-state.edu>
Sender: news@cis.ohio-state.edu (NETnews)
Organization: The Ohio State University, Department of Computer and Information Science
References: <1992May12.004333.9259@psych.toronto.edu> <1992May12.213616.5027@csc.canterbury.ac.nz>
Date: Tue, 12 May 1992 13:50:33 GMT
Lines: 60

Michael Gemar seemed pretty shocked by Hans Moravec's utilitarian
views about morality and wanted to know how other AI people feel about
the morality of dealing with AI creatures, say about offing them
because they are too numerous ("occupy too much disk space") or
whatever.

I have been doing AI for a while now, so I qualify as an example of
"other AI people."  This is what I think:

i. Intelligence (in the sense of marshalling information-processing
resources to achieve goals, or to form explanatory hypotheses about
the world) is not ipso facto a mind.  It is not obvious to me that AIs
in that sense have to be treated as having the kind of mind that we
have.
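
To make the distinction in (i) concrete, here is a toy sketch in
Python (every name here is mine, purely illustrative): a breadth-first
planner that marshals information-processing resources to achieve a
goal, yet invites no ascription of a mind at all.

from collections import deque

def achieve(start, goal, moves):
    """Search for a sequence of moves that turns start into goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan                      # goal achieved
        for name, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None                              # goal unreachable

# Toy domain: integers that can be incremented or doubled.
print(achieve(1, 10, lambda n: [("+1", n + 1), ("*2", n * 2)]))
# -> ['+1', '*2', '+1', '*2']: goal-directed, but nobody calls it a mind.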

ii. If we end up making artificial creatures in the future who are not
only intelligences but also minds (either because it turns out that
being a mind is simply being an information processor of a certain
type, or because we have figured out how to string together artificial
proteins to form an organism -- or a brain which supports a mind),
we should treat them appropriately depending upon what kind of mind
they are and how they would like to be treated.  Again, it is not
obvious to me that every mind would necessarily want to avoid being
extinguished or suspended.  It seems quite possible to me that we
could make artificial creatures in whom we have implanted a strong
desire to sacrifice themselves on behalf of humans, for example.  It
is not even clear that we should call it "sacrifice" unless some
innate tendency to want to go on is necessarily associated with being
a mind.
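
A hedged illustration of the implanted-desire point (the setup and the
parameter name "self_weight" are my invention, not anyone's actual
proposal): whether an agent wants to go on is just a number in its
utility function, not a precondition of having one.

def preference(option, self_weight):
    # option = (benefit_to_humans, keeps_running), keeps_running in {0, 1}
    benefit, survives = option
    return benefit + self_weight * survives

def choose(options, self_weight):
    return max(options, key=lambda o: preference(o, self_weight))

options = [(5, 0),   # large benefit to humans, agent is suspended
           (2, 1)]   # small benefit, agent keeps running

print(choose(options, self_weight=0.0))   # -> (5, 0): chosen gladly
print(choose(options, self_weight=10.0))  # -> (2, 1): self-preserving design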

iii. If we make carbon copies of our minds, then we should treat them
as we would treat other humans.  Here I don't understand why Michael
is so shocked by Hans' views about the utilitarian origin of our
sense of morality.  It seems to me that we wear, and should wear, two
hats when we discuss this kind of thing.  On one hand, as human beings
existing in a social context, we want to feel strongly about certain
things: that we don't want to be treated as instruments, that we
don't want to treat people we care about as instruments, and so on.
On the other hand, as scientists of morality, it certainly seems
reasonable to hypothesize that morality has a useful (instrumental)
role in
society, that cultures and societies have had all kinds of differing
views on whether a particular behavior is moral or not, that it is
hard to uphold the view that there is a clear, unambiguous,
self-evident and consistent set of moral axioms from which guidance
can be obtained for all moral questions. (I know there are people who
think there are, but that is by no means a universal philosophical
position.)  In fact, I think it makes a lot of sense computationally
to design into our species a capacity for moral feeling, but to leave
the particulars to be determined by cultural evolution, so that
behavior can be optimized to local conditions (e.g., economics).
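
A minimal sketch of that computational design, assuming a crude split
between an innate capacity and culturally supplied particulars (the
function and the norm tables are hypothetical, just to fix ideas):

def moral_feeling(act, norms):
    """The innate, species-wide capacity: apply whatever norms were learned."""
    return norms.get(act, "no feeling either way")

# The particulars come from cultural evolution, so they can be tuned
# to local conditions such as economics.
norms_a = {"share surplus": "obligatory", "hoard": "shameful"}
norms_b = {"share surplus": "admirable", "hoard": "prudent"}

for norms in (norms_a, norms_b):
    print(moral_feeling("hoard", norms))
# Same capacity, different verdicts: "shameful", then "prudent".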

  When we feel love for a certain person, say our children, again we
feel quite comfortable in taking on this double view.  We accept and
enjoy the feeling of love, but we are also capable of seeing that as
an evolutionary, utilitarian (for the species) response. I don't see
why these two views should be in conflict.  

Having said this, I must say that I differ from Hans and a lot of AI
people: I see in information processing no primitives out of which
emotions can be constructed, and I think that equating being a mind
with being an intelligence is a mistake.