From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!cunews!nrcnet0!bnrgate!bcars267!bmdhh243!bnr.ca!agc Mon May 25 14:05:23 EDT 1992
Article 5650 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!cunews!nrcnet0!bnrgate!bcars267!bmdhh243!bnr.ca!agc
From: agc@bnr.ca (Alan Carter)
Subject: Re: AI failures
Message-ID: <1992May14.134232.27397@bnr.uk>
Keywords: AI death morals sex
Sender: news@bnr.uk (News Administrator)
Nntp-Posting-Host: bmdhh298
Organization: BNR-Europe-Limited, Maidenhead, England
References: <1992May11.160456.15469@math.okstate.edu> <1992May11.183017.14806@psych.toronto.edu> <1992May11.210524.30977@mp.cs.niu.edu> <1992May12.002440.5501@psych.toronto.edu> <unaphINNpv8@early-bird.think.com>
Date: Thu, 14 May 1992 13:42:32 GMT

In article <unaphINNpv8@early-bird.think.com>, moravec@Think.COM (Hans Moravec) writes:
|> In article <1992May12.002440.5501@psych.toronto.edu>, michael@psych.toronto.edu (Michael Gemar) writes:
|> |> 
|> |> My comment above was not regarding the notions put forth regarding the
|> |> treatment of AI's, but the treatment of *humans*.  Both Hans and Eric have
|> |> essentially proposed abolishing ethics, not only in the realm of
|> |> computers, but in the realm of people.  Yes, I agree that AI ethics is 
|> |> a tricky question, but I will assert that it is at least a *question*.
|> |> Hans thinks the whole notion is ridiculous because *ethics itself* is
|> |> a meaningless social construct, and Eric (the poster who I was responding
|> |> to in the posting quoted above) states fatalistically that *lots* of
|> |> thinking things die everyday, so what's another more or less, human or
|> |> computer.  It is this denial of ethics in the *human* realm that has
|> |> me worried.  I sure don't want to be on a desert island with these
|> |> guys...
|> |> 
|> 
|> I think the usual social ethics are the basis of good relationships,
|> and successful functioning in life, but they are a pragmatic system,
|> not a higher truth.  Making them an absolute ossifies your thinking,
|> and produces ridiculously inappropriate suggestions, like the ones you
|> have made about what we should do in radically different circumstances
|> in the future, where AIs can be created cheaply in the wink of an eye,
|> and where humans can be backed up and duplicated almost as easily.

We already do this. Compare two situations. First, in 2090 I spawn a version 
of myself that is supposed to browse the library until it finds that 
*really* old science fiction story that I can barely remember. The child 
self will have loads of fun, browsing a great deal of stuff without ever 
feeling guilty about 'wasting time', and when he finds the story, he dumps
the best bits to the nearest active self, before terminating. I will not
have to stop executing for the child to run in cyberspace, and the child
will be able to summarize critical experience back into a *community* of
selves with n-way continuity of memory.
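(For the process-minded: the 2090 scenario is just the Unix model we
already run, written large. Here is a toy sketch of the whole transaction
in C. The "best bits" string is invented, and a real child self would
presumably do rather more than a single write(); a sketch under those
assumptions, not a spec:)

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    pid_t child;
    char buf[256];
    ssize_t n;

    pipe(fd);              /* channel back to the nearest active self */
    child = fork();        /* spawn a version of myself */

    if (child == 0) {
        /* Child self: browse the library guilt-free, then dump the
           best bits to the parent self and terminate. */
        const char *best_bits = "found the story; best bits attached\n";
        close(fd[0]);
        write(fd[1], best_bits, strlen(best_bits));
        close(fd[1]);
        _exit(0);          /* child self terminates */
    }

    /* Parent self: collect the child's summary when it arrives. */
    close(fd[1]);
    n = read(fd[0], buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("merged into memory: %s", buf);
    }
    close(fd[0]);
    waitpid(child, NULL, 0);   /* no zombie selves hanging about */
    return 0;
}

(In this toy version the parent actually blocks at read(); a 2090 self
would take the summary asynchronously and never stop executing, but
pipes are the honest 1992 approximation.)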

Second, in 1988 I went to some friends' wedding. I got very drunk, and have
(lose, lose) no memory of the last couple of hours of the party, although 
I am assured that I was there, and participated. I have no continuity of
memory with that self. That self *terminated*, and I was restored from
long-term memory backup. No-one suggested that I was immoral for sobering
up and killing the self that had been running. No-one suggested that the
alcohol-inspired self was immoral for denying scarce hardware to my usual
self. (I doubt that anyone even commented that the alcohol-inspired self
was obnoxious, because everyone there knew me, and I'm usually obnoxious.)
This effect occurs in society every day, without benefit of parallelised
selves or the kind of cross-memory that *communities* of ourselves might
have. 

If we can duplicate ourselves cheaply, a working set of active selves with
continuity of memory established from previous working sets seems to me more
likely than holding onto the concept of a singular self. (Although for me
life will go on pretty much as it did in the meat; at the end of each day I 
talk to myself and go to sleep.) Singular selves and communities also get
to buy hardware. If I can afford enough boxen, why shouldn't I use them to
create divergent versions of myself that eventually have such differing 
world views that memory exchange is no longer possible? If anyone else
finds that I've cracked their systems and installed my own patterns (rape),
then they can rm me (abortion). Then as now, communities of me will own our
own bodies. 
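(How would two selves know that memory exchange is no longer possible?
One crude, entirely hypothetical picture: stamp each self with a
world-view revision and refuse the merge past some arbitrary drift. The
field names and the threshold below are made up for the sketch:)

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical: each self carries a world-view revision number, and
   memory exchange is only meaningful while two revisions are close. */
struct self {
    const char *name;
    int worldview_rev;
};

#define MAX_DRIFT 3        /* arbitrary threshold, for the sketch only */

static int can_exchange_memory(const struct self *a, const struct self *b)
{
    return abs(a->worldview_rev - b->worldview_rev) <= MAX_DRIFT;
}

int main(void)
{
    struct self me      = { "usual self",        7 };
    struct self fresh   = { "last week's fork",   8 };
    struct self ancient = { "decade-old fork",   42 };

    printf("%s: %s\n", fresh.name,
           can_exchange_memory(&me, &fresh) ? "merge ok" : "too divergent");
    printf("%s: %s\n", ancient.name,
           can_exchange_memory(&me, &ancient) ? "merge ok" : "too divergent");
    return 0;
}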

If we can duplicate ourselves, why do we need AIs? I will continue to use 
dumb programs for my ray-tracing jobs, and if a problem is difficult 
(interesting) enough to need an AI, I think I'd rather trust (enjoy the use
of all those MIPS) myself. Mad Scientists and Mystic Mothers might create AIs
for the sheer fun of it, and that may be where the *real* moral issues turn
up. Building a self-aware system without due care and attention? Cruelty to
children? Unlawful experimentation with artificial schizophrenia? Might the
professional moralists get rotating eyeballs about new victimless crimes like
unlawfully accessing the body image of a member of the opposite sex? 

                   Alan (Reno Variant, philosophy mindline)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
       My opinions? Think about it. Virtual reality is between my ears. 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~