From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!waikato.ac.nz!comp.vuw.ac.nz!canterbury.ac.nz!cosc.canterbury.ac.nz!chisnall Tue May 12 15:50:33 EDT 1992
Article 5573 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!waikato.ac.nz!comp.vuw.ac.nz!canterbury.ac.nz!cosc.canterbury.ac.nz!chisnall
Newsgroups: comp.ai.philosophy
Subject: Re: AI failures
Message-ID: <1992May12.213451.5026@csc.canterbury.ac.nz>
From: chisnall@cosc.canterbury.ac.nz (The Technicolour Throw-up)
Date: 12 May 92 21:34:49 +1200
References: <umpm0INNpv8@early-bird.think.com>
Distribution: world
Organization: Computer Science,University of Canterbury,New Zealand
Nntp-Posting-Host: kahu.cosc.canterbury.ac.nz
Lines: 67

From article <umpm0INNpv8@early-bird.think.com>, by moravec@Think.COM (Hans Moravec):
> [...]        If they were, for instance, downloads of my own mind
> (those cyborg implants make it easy), I would hack them to be
> obsessive/autistic/idiot-savant sorts, interested in doing nothing but
> using their (my) full brain power to work on the problems I wanted them
> to (I know it's possible: for brief, glorious periods I sometimes get
> that way now.  With the AIs I'd lock in that state of mind).
>
> When I make an AI, I will be sure to construct it so it wants,
> passionately, to do only and exactly what I want it to do.
> So if you *ask it*, it will say just that.

This threatens to bring us back to a discussion that occupied this group
at the beginning of the year, but Hans' comments made me think of the
following scenario.  Imagine that you're in some isolated environment
(such as the South Pole or another planet) where nobody can be expected
to come to your rescue, and that you have a small crew with you.  Suppose
further that there is some toxic substance (e.g. a radioisotope) near
your base which will kill you all in, say, 24 hours unless you can move
it all into some containment device.  You also have only the flimsiest
of protective clothing, so anyone who comes into direct contact with the
substance will certainly die within an hour.

What to do?  Ordering your crew, who are all subordinate to you, to
remove this substance will kill them, and you'll be up on murder charges.
Even if they all sign contracts stating that they're doing this willingly,
thereby letting you off any legal difficulties, you'll still have your
own personal qualms and will probably be haunted by this for some time.
You could sacrifice yourself, saving your crew, but you then have an hour
to face the fact that you are about to die by your own hand (so to
speak).

Now it so happens that your base has a neat whizzo Star Trek-like
duplicator that can be used to duplicate, almost instantly, and with no
deleterious side effects to the base (or its energy supply), anything,
living or dead, that is placed within it.  Amongst other things it can
be used to duplicate people.

It seems to me, then, that the solution to this problem is to run off a
few dozen copies of yourself and have these copies clean up the toxic
spill.  Although most people would face inner mental turmoil at the
thought of either committing suicide or of knowingly sending other people
to their deaths, I suspect that there would be very little turmoil
(perhaps even none at all) in sending copies of yourself to certain
death.

Part of this turmoil arises because we are in effect destroying
individuals, and we hold it as a right that individuals should not be
deliberately destroyed.  But duplication removes this worry: as long as
either the original or one of the copies remains around afterwards, there
is still continuity of that individual.  Can we also solve this scenario
by duplicating the crew and sending the crew duplicates outside?  Yes,
but only because the crew agree to it.  The point I'm trying to get
across here is that although we would face inner turmoil at giving lethal
orders to someone else, or at having someone else give lethal orders to
us, few of us would, I suspect, face any turmoil at giving lethal orders
to ourselves, or at having ourselves give lethal orders to us.  If you
can't trust yourself, who can you trust?

Getting back to Hans' post, then, the question arises as to whether it
is immoral to kill AIs who happen to be immediate copies of yourself.
How many people in this group haven't, at some point in the face of
encroaching deadlines, wished that they could fork() themselves in order
to get everything done?
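
For the non-Unix readers: fork() makes a running copy of the calling
process.  Here's a toy C sketch (purely illustrative; the chore count
and the messages are made up) of what forking yourself to beat a
deadline might look like:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int i;

        for (i = 0; i < 3; i++) {       /* run off a few copies of yourself */
            pid_t pid = fork();
            if (pid == -1) {            /* duplication failed */
                perror("fork");
                exit(1);
            }
            if (pid == 0) {             /* child: a copy handling one chore */
                printf("copy %ld takes chore %d\n", (long) getpid(), i);
                exit(0);                /* the copy then ceases to exist */
            }
        }
        while (wait(NULL) > 0)          /* the original outlives its copies */
            ;
        return 0;
    }

Each copy is a full duplicate of the original process, yet only the
original survives to collect the results.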
--
Just my two rubber ningis worth.
Name: Michael Chisnall  (chisnall@cosc.canterbury.ac.nz)
I'm not a .signature virus, nor do I play one on tv.


