Article 5302 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!att!pacbell.com!decwrl!waikato.ac.nz!rmarsh
From: rmarsh@waikato.ac.nz
Newsgroups: comp.ai.philosophy
Subject: Re: A.I. failures
Message-ID: <1992Apr28.180730.7675@waikato.ac.nz>
Date: 28 Apr 92 18:07:30 +1200
References: <27APR199223245630@amarna.gsfc.nasa.gov>
Organization: University of Waikato, Hamilton, New Zealand
Lines: 42

In article <27APR199223245630@amarna.gsfc.nasa.gov>, jones@amarna.gsfc.nasa.gov (Jones) writes:
> A distinguished scientist (I believe it was James Watson, but I won't
> swear to it) has asked:  What do we do with the [A.I.] failures?
> 
> Specifically, suppose we get our A.I. program to a mental age of three,
> then find a bug which ruins the effectiveness of the program.  Do we
> then just "kill" or erase the program?  One is reluctant to say yes.
> Do we not have ethical obligations to an intelligent being?
> 
Sticky. We are assuming that the AI is _actually_ intelligent, right? But
that it will be cognitively impaired in some way? Okay, it's intelligent, but
is it alive? Does that matter? Does it have a right to continue to live (or,
if not alive, to function)? Is the intelligence self-supporting? Could we
allow it to go out into the world and make its own way, or is it so stunted
(or dangerous) that that is not an option?

If we decide that the AI (need it be a program? Perhaps it is the structure
that is more important) does not have rights while we do, where do we draw
the line? Suppose we also become capable of adding AI technology to enhance
our own abilities: do people lose their rights if they augment themselves
with mind technology?

Okay, okay, so enough of answering a question with even more questions...

> The technical background for this is (partly) my belief that the way
> to get a "true" A.I. (whatever that means) is to start it out as an
> "artificial infant" which would then go through infancy, childhood, etc., 
> in much the traditional way.  (Tough software problem.)
> 
I agree that this is the most likely way we are going to get a 'true' AI,
though I doubt that much of the original software will remain by the time
it 'matures'. (I believe in neural nets; can you tell?)
Unfortunately, funding for this kind of research would probably be even
harder to get than for other, shorter-term, more predictably successful
projects.

-- 
Robert 'Stumpy' Marsh | Brought to you from the bottom of the world
rmarsh@waikato.ac.nz  | both topographically and socio-politically.
+64 7 855 4406        | Whatever happened to Godzone?
    I can't reply to E-Mail but don't let that stop you sending!
    SnailMail: 95 Fairfield Rd, Hamilton, Aotearoa (New Zealand)
