Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!uunet!paladin.american.edu!darwin.sura.net!gatech!mailer.cc.fsu.edu!sun13!vsserv.scri.fsu.edu!dekorte
From: dekorte@vsserv.scri.fsu.edu (Stephen L. DeKorte)
Newsgroups: comp.ai.philosophy
Subject: Re: AI-next evolutionary stage
Keywords: evolution
Message-ID: <5733@sun13.scri.fsu.edu>
Date: 28 Nov 91 07:51:08 GMT
Article-I.D.: sun13.5733
Sender: news@sun13.scri.fsu.edu
Followup-To: comp.ai.philosophy
Organization: SCRI, Florida State University
Lines: 10

What reason would a 'superior' intelligence (machine or otherwise)
have to exterminate a 'lesser' one?
In the case of AI, why would we give these machines (at least
intentionally) the instincts that would support that sort of behavior?
(Well, at least ignoring the possibility that such machines might
be developed or produced under defense contracts, or that we might
try to create them too much in our own image.)


Steve D. (dekorte@ibm4.scri.fsu.edu)


