From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!zaphod.mps.ohio-state.edu!think.com!Think.COM!moravec Tue May 12 15:49:31 EDT 1992
Article 5464 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!zaphod.mps.ohio-state.edu!think.com!Think.COM!moravec
From: moravec@Think.COM (Hans Moravec)
Newsgroups: comp.ai.philosophy
Subject: Re: AI failures
Message-ID: <uc2m8INNn5d@early-bird.think.com>
Date: 7 May 92 20:07:36 GMT
References: <1992May1.193141.24350@psych.toronto.edu> <zlsiida.144@fs1.mcc.ac.uk> <1992May7.152447.7930@waikato.ac.nz> <727@ckgp.UUCP>
Organization: Thinking Machines Corporation, Cambridge MA, USA
Lines: 43
NNTP-Posting-Host: turing.think.com

Soon after they're possible at all, AIs will be so cheap and plentiful
(after all, they can be reproduced by a file-copy command, and all
operating copies will soon be unique individuals, because they are
modified by their experiences) that it will be absolutely necessary
to throw them away when they're no longer needed.  Easy come, easy
go.  Otherwise the world will be up to its armpits in self-aware and
intelligent but (because of quirks of their makeup) useless
individuals who claim a right to exist at the expense of more useful
processes.

A few years ago the question became the theme for a script in the
new Star Trek.  The ship's computer (which is intelligent, but not
accorded human rights because it is not "sentient," unlike the
android Data (a totally bogus distinction, in my opinion)) was
asked to make a holodeck simulation of a Sherlock Holmes story.
It did such a good job that its simulation of the character Dr.
Moriarty was so fleshed out that Dr. M acquired self-awareness and
free will, and started exploring the ship's control systems instead
of playing in the story.  Its existence was incompatible with
the operation of the ship, but (by the maudlin sentimentality
of the series) it had graduated to personhood, and so could not
simply be "killed".  The dilemma was resolved by putting Dr. M
in the ship's memory, inactive (perhaps to be revived for a future
script).  Out of sight, out of mind.

I can see the same thing happening in real life.
Putting an AI program into inactive "suspended animation" is
surely OK.  But then there will come a time when storage space is
low, and someone notices that the file Moriarty.ai is taking
up 10 terabytes and hasn't been accessed in five years.  So,
after broadcasting "does anyone need Moriarty.ai?" and receiving
no positive responses, the system manager "rm"s the file.  Maybe
some unique good parts are scavenged first.  Just good housekeeping.
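That housekeeping step is nothing exotic even today; a sketch of how a
system manager might hunt down such a file on a Unix box (the directory,
size, and age thresholds here are purely illustrative assumptions, not
anything from the scenario above):

```shell
# Hypothetical sketch: list files under /ai/archive larger than 1 GB
# whose last access was more than five years (~1825 days) ago.
find /ai/archive -type f -size +1G -atime +1825 -print

# After broadcasting "does anyone need these?" and hearing nothing,
# the same predicate could drive the removal (commented out here):
# find /ai/archive -type f -size +1G -atime +1825 -exec rm -i {} +
```

The `-atime +1825` test matches "hasn't been accessed in five years,"
and `-size +1G` the "taking up 10 terabytes" complaint, scaled down.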

Some day human minds may be copied as easily as AIs, a process
that would have many benefits.  The same economics of existence
that regulates AIs would then apply to human minds.  When we can
grow new minds as easily as our bodies grow new cells, we must
also be prepared to destroy old minds as our bodies destroy old cells.
The alternative is suffocation.

	Hans Moravec, Carnegie Mellon University (moravec@cmu.edu)
		on sabbatical at Thinking Machines Co. (moravec@think.com)
