From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!waikato.ac.nz!canterbury.ac.nz!cosc.canterbury.ac.nz!chisnall Tue May 12 15:50:34 EDT 1992
Article 5575 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!waikato.ac.nz!canterbury.ac.nz!cosc.canterbury.ac.nz!chisnall
Newsgroups: comp.ai.philosophy
Subject: Re: Morality and artificial minds
Message-ID: <1992May12.213616.5027@csc.canterbury.ac.nz>
From: chisnall@cosc.canterbury.ac.nz (The Technicolour Throw-up)
Date: 12 May 92 21:36:13 +1200
References: <1992May12.004333.9259@psych.toronto.edu>
Distribution: world
Organization: Computer Science,University of Canterbury,New Zealand
Nntp-Posting-Host: kahu.cosc.canterbury.ac.nz
Lines: 47

From article <1992May12.004333.9259@psych.toronto.edu>, by michael@psych.toronto.edu (Michael Gemar):
> While this may seem like a frivolous suggestion, it is, I think, a likely-
> to-be-used solution (why create entities that *want* to be independent?).
> However, since I like to worry about these things, I wonder if there isn't
> a "second-order" wrong being committed here.  Would it be OK for us to
> genetically engineer humans who *wanted* to be slaves?  Hmmm....

I was going to post something about this second order aspect too, but
you've beaten me to it.  Let me instead point out a "third order" aspect
that seems to have been overlooked here.

In the sort of situation that Hans has written about, it won't just be
humans and AI's interacting but AI's and AI's interacting as well.  If
we end up being able to produce AI's, who's to say that AI's won't be
able to write their own in turn?  In fact, if we ever do learn how to
construct AI's, the chances are fairly good that most AI's will end up
being written by other AI's with only a little guidance (e.g. command
line options :-) from humans, in view of the amount of drudgery that
would probably be involved.  Many of the AI's that Hans writes about
won't have been created by him directly but will have an ancestry tree
a few levels deep, with him at the top and AI's everywhere below.

I mentioned above some third order effects.  So far we've had moral
dilemmas involving what we can and can't do to AI's, and second order
worries about what happens if we construct special purpose
"sacrificial" AI's that *want* to be killed when finished (and in fact
consider it heinously immoral for them to be kept around afterwards).
But what happens when the AI's themselves start to contemplate some of
these moral issues?  What happens, say, if the ANSI AI committee (all
of whose members are themselves AI's) meets, discusses this situation,
and decides that they'll write some sacrificial AI's for us humans to
use?  Note that Michael Gemar's second order worries need not
necessarily apply here, since we may assume for the purposes of this
thought experiment that the ANSI AI committee are able to examine their
own code and determine whether particular moral choices have been built
into them.

Would it be moral for us to accept a gift of sacrificial AI's from AI's
who had, without any imposition of moral stricture from us, freely
chosen to construct such things?  (This dilemma is related, I think, to
the one I raised in my previous message about working with duplicates
of yourself.  If the AI's willingly provide us with sacrificial
duplicates of themselves, do we have any grounds for moral concern?)
--
Just my two rubber ningis worth.
Name: Michael Chisnall  (chisnall@cosc.canterbury.ac.nz)
I'm not a .signature virus and nor do I play one on tv.