From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!nsisrv!amarna.gsfc.nasa.gov!jones Tue May 12 15:48:30 EDT 1992
Article 5350 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!nsisrv!amarna.gsfc.nasa.gov!jones
From: jones@amarna.gsfc.nasa.gov (Jones)
Subject: A.I. failures
Message-ID: <30APR199222333608@amarna.gsfc.nasa.gov>
News-Software: VAX/VMS VNEWS 1.4-b1  
Sender: usenet@nsisrv.gsfc.nasa.gov (Usenet)
Nntp-Posting-Host: amarna.gsfc.nasa.gov
Organization: NASA Goddard Space Flight Center, Greenbelt, Md. USA
Date: Fri, 1 May 1992 02:33:00 GMT
Lines: 13

Several authors have compared the killing of an artificial intelligence
with the killing of an animal for food.  I don't think the two are
really comparable.  An artificial person with the intelligence of a
three-year-old would be very much like a three-year-old child: it would
play a similar role in its family and might well occupy a similar status
in our ethical and legal systems.  It's easy to say, "Yes, kill him."
But could you look into his pretty blue eyes and do it?  Remember, the
"child's" mother and father love him.

The alternative is to keep the defective intelligence running.  This ties up
expensive computers real fast.

Tom
