From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!jvnc.net!yale.edu!spool.mu.edu!hri.com!ukma!nsisrv!amarna.gsfc.nasa.gov!jones Tue May 12 15:49:20 EDT 1992
Article 5443 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!jvnc.net!yale.edu!spool.mu.edu!hri.com!ukma!nsisrv!amarna.gsfc.nasa.gov!jones
From: jones@amarna.gsfc.nasa.gov (Jones)
Newsgroups: comp.ai.philosophy
Subject: A.I. failures
Message-ID: <6MAY199218170584@amarna.gsfc.nasa.gov>
Date: 6 May 92 22:17:00 GMT
Sender: usenet@nsisrv.gsfc.nasa.gov (Usenet)
Organization: NASA Goddard Space Flight Center, Greenbelt, Md. USA
Lines: 9
News-Software: VAX/VMS VNEWS 1.4-b1
Nntp-Posting-Host: amarna.gsfc.nasa.gov

Mr. Collins suggested that we may consider it OK to kill an artificially
intelligent person because he/she is of a different species from us.  I
suggest that this may not fly in a real case.  If we kill an artificial
three-year-old boy, his parents will suffer agonies of grief, assuming they
love him.  (If they don't, the whole project is in *real* trouble.)

Tom

(All opinions my own.)
