From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!waikato.ac.nz!rmarsh Tue May 12 15:49:58 EDT 1992
Article 5511 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!waikato.ac.nz!rmarsh
From: rmarsh@waikato.ac.nz
Newsgroups: comp.ai.philosophy
Subject: Re: AI failures
Message-ID: <1992May9.162620.7982@waikato.ac.nz>
Date: 9 May 92 16:26:20 +1200
References: <1992May1.193141.24350@psych.toronto.edu> <zlsiida.144@fs1.mcc.ac.uk> <1992May7.152447.7930@waikato.ac.nz> <727@ckgp.UUCP>
Organization: University of Waikato, Hamilton, New Zealand
Lines: 47

In article <727@ckgp.UUCP>, thomas@ckgp.UUCP (Michael Thomas) writes:
>  Random Question: First I would say that it is hard to say that it is ethical
> or morally correct to kill an AI (or anything). I would question whether your
> concerns were with "death" or with the loss of a "being" which could have a
> positive influence on the world.
> 
I like to try to use a 'Veil of Ignorance' (Rawls?) approach. If I didn't
know which side of a rule I was going to end up on, would I still think it
was fair? If I were an AI with the thought capacity of a 3-year-old, I think
I would be rather hurt that humans didn't think I had a right to live. In
answer to your question, I guess I'm interested in the "death of a being"
regardless of its usefulness.

>  Small Speech Time: If you say that you are worried about death: well, in all
> of my AI prototypes I establish means by which, if you shut the system down, it
> can recover (with knowledge of how long it has been down, so it can b*tch!)
> 
Good for you. Do you think that should be mandatory procedure?
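For the record, the recovery scheme Michael describes could look something like this toy sketch (this is my own invention, not his actual code; the file name and state fields are made up for illustration):

```python
import json
import time

STATE_FILE = "ai_state.json"  # hypothetical checkpoint file


def shut_down(state, path=STATE_FILE):
    """Checkpoint the AI's state to disk, stamping the shutdown time."""
    state["shutdown_at"] = time.time()
    with open(path, "w") as f:
        json.dump(state, f)


def recover(path=STATE_FILE):
    """Restore the saved state and work out how long the system was down."""
    with open(path) as f:
        state = json.load(f)
    downtime = time.time() - state.pop("shutdown_at")
    # Knowing its own downtime is what lets the system complain about it.
    state["complaint"] = f"I was down for {downtime:.1f} seconds, you know."
    return state
```

The key point is only that the state survives the power-off and carries enough information (a timestamp) for the system to notice the gap.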

>  I guess what I am trying to say is that AI systems are not as vulnerable as
> we think they are. You might argue that the "MIND" could be damaged, but this
> would depend on the state in which the system crashed/was unplugged and the
> state in which it was brought back up (internal states...) Please explain
> what you feel/think is occurring when we unplug an AI. Also, what about
> system crashes? Are these acceptable losses?
>
Personally, I don't see AI becoming reality through software alone. I think
the architecture has as much to do with intelligence as the 'program', if not
more. The program may determine how 'smart' an intelligence is, but I think
the fact of its intelligence will depend on the architecture too.

What happens to an AI when it is unplugged will depend on both the machine
itself and the software. A software AI construct (if such a thing is possible)
will not be 'dead', IMO, until there is no chance of rebooting it. This
probably requires that all copies be destroyed. I'm assuming here that the
file is updated every time the AI is 'switched off'; if not, then the changes
to the AI are lost after each session - what that constitutes death-wise I
don't know - perhaps another thread?
If the AI is machine-dependent, then unplugging the power is probably
equivalent to knocking it out, or perhaps putting it in a coma - not nice,
but not murder.
-- 
Robert 'Stumpy' Marsh | Brought to you from the bottom of the world
rmarsh@waikato.ac.nz  | both topographically and socio-politically.
+64 7 855 4406        | Whatever happened to Godzone?
    I can't reply to E-Mail but don't let that stop you sending!
    SnailMail: 95 Fairfield Rd, Hamilton, Aotearoa (New Zealand)
