From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!ckgp!thomas Tue May 12 15:49:28 EDT 1992
Article 5458 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!ckgp!thomas
From: thomas@ckgp.UUCP (Michael Thomas)
Newsgroups: comp.ai.philosophy
Subject: Re: AI failures
Summary: ethics? ethics? confusion?
Message-ID: <727@ckgp.UUCP>
Date: 7 May 92 17:00:33 GMT
References: <1992May1.193141.24350@psych.toronto.edu> <zlsiida.144@fs1.mcc.ac.uk> <1992May7.152447.7930@waikato.ac.nz>
Organization: F.O.C.U.S. Systems, MI
Lines: 34

In article <1992May7.152447.7930@waikato.ac.nz>, rmarsh@waikato.ac.nz writes:
> In article <1992May6.201601.10052@mp.cs.niu.edu>, rickert@mp.cs.niu.edu (Neil Rickert) writes:

> >   May I suggest that you wait until AI has been achieved until you discuss
> > the ethics of pulling the plug.  It all seems quite premature to me.

> By then it will be too late. Ideally we would want to know whether it was
> ethically acceptable already when we first came to that bridge. Otherwise
> we may allow someone to pull the plug on a machine (intelligence) we may 
> later decide was morally entitled to continued existence. What you are
> suggesting is like saying we shouldn't bother with civil defense procedures
> until the earthquake/tsunami/fire/flood hits.

 Random Question: First, I would say that it is hard to argue that it is
ethical or morally correct to kill (anything) an AI. I would ask whether your
concern is with "death" itself or with the loss of a "being" which could have
a positive influence on the world.

 Small Speech Time: If you say that you are worried about death; well, in all
of my AI prototypes I establish means by which, if you shut the system down, it
can recover (with knowledge of how long it has been down; so it can b*tch!)
                    If you are concerned about losing a positive influence
on the world, well, you can always create (run) another AI on another system.
(My systems have the capability to "breed", allowing them to share/integrate
knowledge and execute a new system with the combined knowledge, but that is
a different story...)
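 The recover-after-shutdown and "breed" ideas above could be sketched roughly
like this (a minimal sketch in Python; the file name, function names, and the
dict-based knowledge store are all my own assumptions, not the poster's actual
prototypes):

```python
import json
import os
import time

STATE_FILE = "ai_state.json"  # hypothetical checkpoint file

def save_state(knowledge):
    """Checkpoint the system's knowledge plus a shutdown timestamp."""
    with open(STATE_FILE, "w") as f:
        json.dump({"knowledge": knowledge, "saved_at": time.time()}, f)

def restore_state():
    """Reload knowledge and report how long the system was down."""
    if not os.path.exists(STATE_FILE):
        return {}, 0.0
    with open(STATE_FILE) as f:
        state = json.load(f)
    downtime = time.time() - state["saved_at"]
    if downtime > 0:
        # ...so it can complain about being shut off
        print("Hey! I was down for %.1f seconds." % downtime)
    return state["knowledge"], downtime

def breed(knowledge_a, knowledge_b):
    """Merge two systems' knowledge bases (later entries win on conflict)."""
    return {**knowledge_a, **knowledge_b}
```

 On this view, pulling the plug only loses whatever accumulated after the last
checkpoint; the "mind" that was saved comes back, and two saved minds can be
combined into a third.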

 I guess what I am trying to say is that AI systems are not as vulnerable as
we think they are. (?) You might argue that the "MIND" could be damaged,
but this would depend on the state in which the system crashed/was unplugged
and the state in which it was brought back up (internal states...). Please
explain what you feel/think is occurring when we unplug an AI. Also, what
about system crashes? Are these acceptable losses?

==============================================================================
Thank you,            ||  "Sol est invisiblis in hominibus, in terra vero
Michael Thomas        ||   visibilis, tamen ex uno et eodem sole sunt ambo"
(..uunet!ckgp!thomas) ||                    -- Theatrum Chemicum (Ursel, 1602)
==============================================================================


