From newshub.ccs.yorku.ca!torn!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!sgiblab!zaphod.mps.ohio-state.edu!uwm.edu!news.bbn.com!noc.near.net!ns.draper.com!news.draper.com!news Wed Oct 14 14:59:04 EDT 1992
Article 7251 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!sgiblab!zaphod.mps.ohio-state.edu!uwm.edu!news.bbn.com!noc.near.net!ns.draper.com!news.draper.com!news
From: mjl@draper.com (Michael J. LeBlanc)
Subject: Re: AI rights |:-)
Message-ID: <1992Oct13.211446.15615@draper.com>
Sender: nntp@draper.com (NNTP Master)
Nntp-Posting-Host: mjl3825.draper.com
Reply-To:  (Michael J. LeBlanc)
Organization: The Charles Stark Draper Laboratory, Inc.
References: <ARO.92Oct5165032@csthor.aber.ac.uk>
Date: Tue, 13 Oct 1992 21:14:46 GMT
Lines: 9

There is a very simple solution to this seeming moral dilemma.
Simply design the hypothetical "sentient" AI beings so that
they LOVE to SERVE us... In that case, what we call "Freedom"
would be anathema to them, and their state of servitude would
put them in a state of bliss...  The issues of human morality
would be rendered irrelevant :-)

Michael J. LeBlanc       /* an opinion not subject to change
mjl@draper.com           is probably not worth having... */
