From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue May 12 15:49:41 EDT 1992
Article 5482 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: AI failures
Organization: Department of Psychology, University of Toronto
References: <1992May7.152447.7930@waikato.ac.nz> <727@ckgp.UUCP> <uc2m8INNn5d@early-bird.think.com>
Message-ID: <1992May8.155052.13848@psych.toronto.edu>
Date: Fri, 8 May 1992 15:50:52 GMT

In article <uc2m8INNn5d@early-bird.think.com> moravec@Think.COM (Hans Moravec) writes:
>Soon after they're possible at all, AIs will be so cheap and plentiful
>(after all, they can be reproduced by file copy command, and all
>operating copies will soon be unique individuals, because they
>are modified by their experiences),  that it will be absolutely 
>necessary to throw them away when they're no longer needed.  Easy
>come, easy go.   Otherwise the world will be up to its armpits in
>self-aware and intelligent but (because of quirks of their makeup)
>useless individuals who claim a right to exist, at the expense of more
>useful processes.

Presumably similar arguments could have been made when slavery existed.
"There are just too damn many of them - it is absolutely necessary to
kill them when they're no longer needed.  Otherwise..."  For that
matter, I see no reason why the same argument would not apply to 
overpopulation in Third World countries.

If you are going to adopt such a position for the sake of expediency, 
you should realize just how radical the ethical implications are.
I personally think that such a position is indefensible on *any* 
ground other than sheer expedience, which is of course no *moral*
reason at all.

>A few years ago the question became the theme for a script in the
>new Star Trek.  The ship computer (which is intelligent, but not
>accorded human rights because it is not "sentient" unlike the
>android Data (a totally bogus distinction, in my opinion)), was
>asked to make a holodeck simulation of a Sherlock Holmes story.
>It did such a good job that its simulation of the character Dr.
>Moriarity was so fleshed out, that Dr. M acquired self-awareness and
>free will, and started exploring the ship's control system instead
>of playing in the story.  Its existence was incompatible with
>the operation of the ship, but (by the maudlin sentimentality
>of the series) it had graduated to personhood, and so could not
>be simply "killed".  The dilemma was resolved by putting Dr. M.
>in ship's memory, inactive (perhaps to be revived for a future
>script).   Out of sight, out of mind.
>
>I can see the same thing happening in real life. 
>Putting an AI program into inactive "suspended animation" is
>surely ok.

Even if it doesn't want to go?  How would *you* feel if your
employer said, "Well, Hans, we don't need you now, so we're going
to put you to sleep for an indefinite period."? 


> But then there will come a time when storage space is
>low, and someone notices that the file Moriarity.ai is taking
>up 10 terabytes, and hasn't been accessed in five years.

...or that Hans Moravec's cryogenic sleeper is taking up space, and
he hasn't been needed in five years... 

>  So,
>after broadcasting "does anyone need Moriarity.ai?" and receiving
>no positive responses, the system manager "rm"s the file.  Maybe
>some unique good parts are scavenged.  Just good housekeeping.

So, after broadcasting "does anyone need Moravec?" and receiving no
positive responses, the sleeper manager dumps out the corpse.  Maybe
some useful organs are scavenged.  Just good housekeeping...


>Some day human minds may be copied as easily as AIs, a process
>that would have many benefits.  The same economics of existence
>that regulates AIs would then apply to human minds.  When we
>grow new minds as easily as our bodies grow new cells, then we must
>also be prepared to destroy old minds as our bodies destroy old cells.
>The alternative is suffocation.

And when we can grow bodies as easily as we grow new cells, the same
would also apply, I suppose.


I find such speculation yet another indication that AI folks don't
*really* think that what they're doing is creating *REAL* minds,
entities that are mentally equivalent to humans.  If they did, I don't
see how they could possibly suggest such things as the above...

- michael
