From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!rutgers!uwvax!meteor!tobis Thu Oct  8 10:10:56 EDT 1992
Article 7088 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!rutgers!uwvax!meteor!tobis
From: tobis@meteor.wisc.edu (Michael Tobis)
Newsgroups: comp.ai.philosophy
Subject: Re: AI rights :-)
Message-ID: <1992Oct2.021220.18977@meteor.wisc.edu>
Date: 2 Oct 92 02:12:20 GMT
References: <1992Oct1.232114.1593@murdoch.acc.Virginia.EDU>
Organization: University of Wisconsin, Meteorology and Space Science
Lines: 25

In article <1992Oct1.232114.1593@murdoch.acc.Virginia.EDU> lfoard@Turing.ORG (Lawrence C. Foard) writes:

>One major problem I foresee with the arrival of "true AI" is that
>they may end up being slaves. Many people still believe that the
>universe was created only 10,000 years ago; I think many religious
>people will refuse to accept AIs as being equally sentient, and
>instead decide that they are soulless and thus OK to keep as slaves.

>Has anyone thought about trying to head off this disaster by ensuring
>the rights of all sentient beings before "true AI" comes to pass?

I am not a person who believes that the universe was created only 10,000
years ago, or any such nonsense, but it is still my working hypothesis that
information processing is neither necessary nor sufficient for sentience.

Before I will willingly grant rights to your constructs, you will have
to convince me otherwise, or provide some other evidence of their
sentience. What evidence do you have that artificial sentience is
equivalent to artificial intelligence? What evidence do you have that
artificial sentience is even possible? How do you propose to distinguish
between sentient "true AI" constructs and mere non-sentient, though
complex, tools?

See the "Brain and Mind" thread for more.

mt
