Article 5299 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!sdd.hp.com!spool.mu.edu!tulane!ukma!nsisrv!amarna.gsfc.nasa.gov!jones
From: jones@amarna.gsfc.nasa.gov (Jones)
Newsgroups: comp.ai.philosophy
Subject: A.I. failures
Message-ID: <27APR199223245630@amarna.gsfc.nasa.gov>
Date: 28 Apr 92 03:24:00 GMT
Article-I.D.: amarna.27APR199223245630
Sender: usenet@nsisrv.gsfc.nasa.gov (Usenet)
Organization: NASA Goddard Space Flight Center, Greenbelt, Md. USA
Lines: 14
News-Software: VAX/VMS VNEWS 1.4-b1
Nntp-Posting-Host: amarna.gsfc.nasa.gov

A distinguished scientist (I believe it was James Watson, but I won't
swear to it) once asked:  What do we do with the [A.I.] failures?

Specifically, suppose we get our A.I. program to a mental age of three,
then find a bug which ruins the effectiveness of the program.  Do we
then just "kill" or erase the program?  One is reluctant to say yes.
Do we not have ethical obligations to an intelligent being?

The technical background for this is (partly) my belief that the way
to get a "true" A.I. (whatever that means) is to start it out as an
"artificial infant" which would then go through infancy, childhood, etc.,
in much the same way a human does.  (Tough software problem.)

Tom