From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!ckgp!thomas Tue May 12 15:50:31 EDT 1992
Article 5569 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!ckgp!thomas
From: thomas@ckgp.UUCP (Michael Thomas)
Newsgroups: comp.ai.philosophy
Subject: AI DEATH (was: Re: AI failures)
Message-ID: <728@ckgp.UUCP>
Date: 11 May 92 23:23:57 GMT
References: <1992May1.193141.24350@psych.toronto.edu> <zlsiida.144@fs1.mcc.ac.uk> <1992May9.162620.7982@waikato.ac.nz>
Organization: F.O.C.U.S. Systems, MI
Lines: 86

In article <1992May9.162620.7982@waikato.ac.nz>, rmarsh@waikato.ac.nz writes:
> I like to try to use a 'Veil of Ignorance' (Rawls?) approach. If I didn't
> know which side of a rule I was going to end up on, would I still think it
> was fair? If I were an AI with the thought capacity of a 3 yr old, I think
> I would be rather hurt that humans didn't think I had a right to live. In
> answer to your question, I guess I'm interested in the "death of a being"
> regardless of its usefulness.

  I am very interested in how everyone is considering an AI system to be
equal to a human. I guess what I mean is that an AI will not have the
same perceptions that WE do about life and death (or time, or
practically anything else). So, my point is that WE are all assuming
that AI's are going to think that death is bad (or be scared by it?). I
would just like to offer that an AI also has the potential to live
FOREVER, and that this is why this question is so important.

  Considering that an AI could in fact live FOREVER (or till the sun
explodes), perhaps it might be a good idea to program/hardwire in
instructions so that when the system is not "a positive influence" on
the world it will "self-destruct" (for lack of a better term). So,
before anyone asks: how can you tell if you are a "positive influence"?
Perhaps something to do with how much an AI is used, how much an AI is
"working", how much an AI is developing/learning/etc... The point being
to give the system a limit. I guess I just see the same problem as with
abortion: who is to say how many AI's should be created? Are we going
to limit the procreation of AI's? Is this equally as bad as killing
them?
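  (The "limit" idea could be sketched as a simple watchdog. Everything
here is hypothetical: the metrics — queries served, new facts learned —
and the thresholds are made-up stand-ins for whatever "positive
influence" would really mean, and "self-destruct" is modeled as a
returned decision rather than an actual shutdown.)

```python
from dataclasses import dataclass

@dataclass
class InfluenceReport:
    """Hypothetical usage metrics for one review period."""
    queries_served: int   # how much the AI is "used"
    facts_learned: int    # how much it is developing/learning

def is_positive_influence(report: InfluenceReport,
                          min_queries: int = 10,
                          min_facts: int = 1) -> bool:
    """An invented test: the system counts as a positive influence
    if it is either being used or still learning."""
    return (report.queries_served >= min_queries
            or report.facts_learned >= min_facts)

def watchdog(report: InfluenceReport) -> str:
    # The post's hardwired limit, as a decision rather than an action.
    return "keep running" if is_positive_influence(report) else "self-destruct"
```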

>> My systems are capable of recovering from a crash...
> Good for you. Do you think that should be mandatory procedure?

  I think that something will have to be done, if only for the sake of
  a system crashing due to a storm... drive failure... viruses... etc.
  These things happen to everyone; why should we let them affect an AI
  if we can avoid it?

> Personally I dont see AI becoming reality through software. I think the
> architecture has as much, if not more to do with intelligence than the
> 'program'. The program may determine how 'smart' an intelligence is, but I
> think the fact of its intelligence will depend on architecture too.

  We don't have to debate this, but having a pretty good (great!)
  understanding of "human hardware", I am not convinced that an AI can
  be produced without software... Actually, the ultimate system for me
  will be a combination of both that cannot exist without either. Of
  course, a person's definition of intelligence will affect what they
  mean by AI; memory, perception, etc. are important parts of the Mind
  (or important in that the mind does use these qualities from the
  brain), but I seem to continually find that the Mind is not
  information (stimulus), so hardware alone will have a hard time with
  other aspects of intelligence which do not fall within the realm of
  brain functions (e.g. awareness, understanding, etc...)
  [in other words I agree with you... 8^) ]

> What happens to an AI when it is unplugged will depend on both the machine
> itself and the software. A software AI construct (if such is possible) will
> not be 'dead' IMO until there is no chance of rebooting it. This probably
> requires that all copies be destroyed. I'm assuming here that the file is
> updated every time the AI is 'switched off', if not then the changes to the
> AI are lost after each session - what this constitutes death-wise I don't
> know - perhaps another thread?

  My current prototype updates the needed information at regular
  intervals (every 5-15 minutes) or whenever the system is not getting
  heavy usage.

  I would agree that the system is not dead if it can recover at the
  point where it left off, and be aware of the time difference. This is
  why a concern for AI-death should not exist... systems will not die;
  they may just be down for anywhere from a little while to forever...

  I would also agree that the system is "dead" only when it cannot be
  restored via software/backup/etc... but as I have mentioned before,
  my prototype can procreate in the sense that its "offspring" maintain
  its knowledge, with the possibility of integrating knowledge from
  other AI's (mating)... So even if an AI were to be totally destroyed,
  it will most likely have offspring with the sum of its knowledge (up
  to a given point...), so all is not lost.

> If the AI is machine dependent, then unplugging the power is probably
> equivalent to knocking it out, or perhaps putting it in a coma - not nice,
> but not murder.

  agreed.

------------------------
Thank you,            ||  "The mind is not located in the body,
Michael Thomas        ||   The body is located in the mind." 
(..uunet!ckgp!thomas) ||---------------------------------------- 
