From newshub.ccs.yorku.ca!torn!cs.utexas.edu!wupost!waikato.ac.nz!aukuni.ac.nz!kcbbs!nacjack!codewks!system Mon Oct 19 16:59:47 EDT 1992
Article 7328 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!wupost!waikato.ac.nz!aukuni.ac.nz!kcbbs!nacjack!codewks!system
Newsgroups: comp.ai.philosophy
Subject: Re: AI rights |:-)
Message-ID: <wJ8wsB2w165w@CODEWKS.nacjack.gen.nz>
From: system@CODEWKS.nacjack.gen.nz (Wayne McDougall)
Date: Mon, 19 Oct 92 15:03:07 NZDST
References: <1992Oct13.211446.15615@draper.com>
Organization: The Code Works Limited, PO Box 10 155, Auckland, New Zealand
Lines: 23

mjl@draper.com (Michael J. LeBlanc) writes:

> There is a very simple solution to this seeming moral dilemma.
> Simply design the hypothetical "sentient" AI beings so that
> they LOVE to SERVE us... In that case, what we call "Freedom"
> would be anathema to them, and their state of servitude would
> put them in a state of bliss...  The issues of human morality
> would be rendered irrelevant :-)
> 
> Michael J. LeBlanc       /* an opinion not subject to change
> mjl@draper.com           is probably not worth having... */

Problem: What if the capability to choose whether to serve / love etc. is 
inextricably linked to consciousness? There's no benefit in having super-loyal 
robot idiots. And I wouldn't be impressed with a sentient AI system 
that wouldn't tell me when I was wrong because it loved me so much it 
didn't want to hurt my feelings.

-- 
  Wayne McDougall, BCNU
  This .sig unintentionally left blank.

Hello! I'm a .SIG Virus. Copy me and spread the fun.


