From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!elroy.jpl.nasa.gov!usc!noiro.acs.uci.edu!unogate!stgprao Tue May 12 15:49:27 EDT 1992
Article 5456 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!elroy.jpl.nasa.gov!usc!noiro.acs.uci.edu!unogate!stgprao
From: stgprao@xing.unocal.com (Richard Ottolini)
Subject: Re: A.I. failures
Message-ID: <1992May7.151001.9768@unocal.com>
Originator: stgprao@xing
Sender: news@unocal.com (Unocal USENET News)
Organization: Unocal Corporation, Anaheim, California
References: <6MAY199218170584@amarna.gsfc.nasa.gov>
Date: Thu, 7 May 1992 15:10:01 GMT
Lines: 10

In article <6MAY199218170584@amarna.gsfc.nasa.gov> jones@amarna.gsfc.nasa.gov (Jones) writes:
>Mr. Collins suggested that we may consider it OK to kill an artificially
>intelligent person because he/she is of a different species from us.  I
>suggest that this may not fly in a real case.  If we kill an artificial
>three year old boy, his parents will suffer agonies of grief, assuming they
>love him.  (If they don't, the whole project is in *real* trouble.)

There was an Outer Limits episode about killing a clone/duplicate who was
"identical" to the original but was assigned only a short task.
Also consider the movie Blade Runner.
