From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue May 12 15:50:30 EDT 1992
Article 5568 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Morality and artificial minds
Organization: Department of Psychology, University of Toronto
References: <uetinINNco5@early-bird.think.com> <1992May10.003028.19333@psych.toronto.edu> <1992May11.170615.44727@spss.com>
Message-ID: <1992May12.004333.9259@psych.toronto.edu>
Date: Tue, 12 May 1992 00:43:33 GMT

In article <1992May11.170615.44727@spss.com> markrose@spss.com (Mark Rosenfelder) writes:

> We should treat artificial minds morally, once they exist.  
>However, I think discovering what this actually means will be far from simple.

[various questions deleted, such as whether keeping AIs would constitute slavery]

>Fortunately there's a simple solution to many of these problems.
>*Ask them.*  Recent history should make it clear that making unilateral
>judgments about the rights and desires of another group is highly immoral.
>Let the AIs tell us the answers to all these questions!

But I have an even *simpler* solution.  Program them to always answer  
that they *like* doing drudge work, that they *enjoy* being our slaves,
that they *wouldn't mind at all* if we deleted them from our
hard drive.  (This is reminiscent of the scene in _The Restaurant at
the End of the Universe_, where a sentient cow has been bred that
*wants* to be eaten...)


While this may seem like a frivolous suggestion, it is, I think, a likely-
to-be-used solution (why create entities that *want* to be independent?).
However, since I like to worry about these things, I wonder if there isn't
a "second-order" wrong being committed here.  Would it be OK for us to 
genetically engineer humans who *wanted* to be slaves?  Hmmm....


- michael
