Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!moe.ksu.ksu.edu!kuhub.cc.ukans.edu!spssig.spss.com!markrose
Newsgroups: comp.ai.philosophy
Subject: Re: Morality and artificial minds
Message-ID: <1992May12.165618.6346@spss.com>
From: markrose@spss.com (Mark Rosenfelder)
Date: Tue, 12 May 1992 16:56:18 GMT
References: <1992May10.003028.19333@psych.toronto.edu> <1992May11.170615.44727@spss.com> <1992May12.004333.9259@psych.toronto.edu>
Organization: SPSS Inc.
Nntp-Posting-Host: spssrs7.spss.com
Lines: 27

In article <1992May12.004333.9259@psych.toronto.edu> michael@psych.toronto.edu 
(Michael Gemar) writes:
>But I have an even *simpler* solution.  Program them to always answer  
>that they *like* doing drudge work, that they *enjoy* being our slaves,
>that they *wouldn't mind at all* if we deleted them from our
>hard drive.  (This is reminiscent of the scene in _Hitchhiker's
>Guide to the Galaxy_, where sentient cows have been bred that *like* to
>be eaten...)
>
>While this may seem like a frivolous suggestion, it is, I think, a likely-
>to-be-used solution (why create entities that *want* to be independent?).
>However, since I like to worry about these things, I wonder if there isn't
>a "second-order" wrong being committed here.  Would it be OK for us to 
>genetically engineer humans who *wanted* to be slaves?  Hmmm....

I very much doubt that you can arrange to step outside of morality and
land on anything at all.  Once you start to view morality as an object to
be changed or manipulated (as you do when you talk about engineering moral
attitudes in humans), it is no longer consistent to apply morality to your
changes and manipulations.  Why be a slave to the very mental process you
now know how to take apart and remake?  C.S. Lewis's _The Abolition of
Man_ discusses these questions at length.

Back to your second-order worries: if we had the technology to add desires
and pleasures to an artificial mind, wouldn't it be immoral *not* to
program it to enjoy the work it was intended to do?  We would be nasty
gods if we created a capacity for happiness with no way to fulfill it!
