From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!uunet!richsun!jerry Sun Dec  1 13:06:42 EST 1991
Article 1758 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!uunet!richsun!jerry
From: jerry@cpg.trs.reuter.com (Jerry Marco)
Newsgroups: comp.ai.philosophy
Subject: Re: AI as the Next Stage in Evolution
Message-ID: <2621@richsun.cpg.trs.reuter.com>
Date: 29 Nov 91 23:09:03 GMT
Article-I.D.: richsun.2621
References: <YAMAUCHI.91Nov27024148@indigo.cs.rochester.edu>
Sender: news@richsun.cpg.trs.reuter.com
Organization: Reuters Client Site Systems, Oakbrook,IL
Lines: 53

In article <YAMAUCHI.91Nov27024148@indigo.cs.rochester.edu> yamauchi@cs.rochester.edu (Brian Yamauchi) writes:
>For those who are starting to tire of the "Can machines *really*
>think?" thread, here's a new topic:

THANKS!!!

>
>What do you think of the idea of intelligent machines as the next
>stage in evolution?  It seems that if we ever succeed in building
>machines with human-level intelligence, it will only be a matter of
>time before their capabilities exceed those of humans -- in speed,
>accuracy, and memory capacity at least, and possibly in other ways.
>
>Does the idea of replacing the human species make you uncomfortable?
>Moravec and Jastrow suggest that this is both inevitable and
>desirable, while Weizenbaum reacts to this idea with what might be
>considered unmitigated horror.
>

At first I failed to see "replacement" of human beings by mentally
superior machines as a necessary consequence of the development of
those machines.  After all, humans are (presumably) mentally
superior to a wide variety of other animals, some quite similar
in form and/or habits to ourselves, and yet have "replaced"
very few of them, in any meaningful sense.  However...

If one considers history, one will see that it has long been the
practice of human beings to attempt to replace not only other
species (wolves, for instance) but also each other (consider
Europeans vs. Native Americans).  It has been the view of those
humans attempting the replacement that they are superior to their
rivals, and that this superiority gives them the right, even the
responsibility, to do so.

I submit that the true danger is not in machine superiority per se,
but in machines' belief in their own superiority.  If machines are
truly superior, and therefore wield power over us, and if they
believe themselves to be superior, would they not attempt to replace
us, as we have attempted to replace other biological creatures?
And even if this attempt were unsuccessful, would it not still
be as disastrous for us as some of our attempts have been upon
other individuals and species?

We have seen in this newsgroup the question of whether humans would
be ethically justified in destroying machines that were, or might be,
consciously intelligent.  Perhaps the question all along should have
been, if machines correctly believe that they are superior to human
beings, would they be ethically justified in destroying us?
--
Jerry Marco, Manager			jerry@cpg.trs.reuter.com
Reuters Client Site Systems
1400 Kensington Rd
Oak Brook, IL  60521  USA
