Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!asuvax!ncar!ames!olivea!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: AI as the Next Stage in Evolution
Message-ID: <5827@skye.ed.ac.uk>
Date: 9 Dec 91 21:42:07 GMT
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 18

1. Suppose we can create AIs that don't mind being, effectively,
slaves.  Maybe they don't even know they're slaves (because they're
not conscious, or because they can't think in the ways needed to
realize this).

2. Suppose we learn a lot about the human mind from AI and Cog Sci,
so that we can make humans who don't mind being slaves.

Should we recognize either sort as persons?  Give them voting
rights?  (Even if constructed to vote a certain way?)  What if
their behavior is basically human except that they say things
like "no, I don't mind doing this boring, hazardous task" or
"of course I'll vote for Bill"?  (Sorry, Bill.)  Would the
net.behaviorists say they passed the Turing (or similar) Test
and so ought to have the same rights we do?  If (1) is ok,
what's wrong with (2)?

-- jd
