From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!zaphod.mps.ohio-state.edu!think.com!Think.COM!moravec Tue May 12 15:50:28 EDT 1992
Article 5564 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!zaphod.mps.ohio-state.edu!think.com!Think.COM!moravec
From: moravec@Think.COM (Hans Moravec)
Newsgroups: comp.ai.philosophy
Subject: Re: AI failures
Date: 11 May 1992 21:41:20 GMT
Organization: Thinking Machines Corporation, Cambridge MA, USA
Lines: 99
Distribution: world
Message-ID: <umpm0INNpv8@early-bird.think.com>
References: <uc2m8INNn5d@early-bird.think.com> <1992May8.155052.13848@psych.toronto.edu> <uetinINNco5@early-bird.think.com> <1992May10.003028.19333@psych.toronto.edu>
NNTP-Posting-Host: turing.think.com

In article <1992May10.003028.19333@psych.toronto.edu>, michael@psych.toronto.edu (Michael Gemar) writes:
|> 
|> Well, Hans, the solution we use with *people* now is simply to *not
|> produce them*.  This is the suggested method for dealing with the
|> problems of the Third World, rather than letting people
|> overpopulate and then starve.  I see no reason why the production of
|> artificial people should not be governed by the same moral code.
|> 

	So let's see if I have this proposal right: In the privacy of
my own bedroom, in the year 2090 (biotech and cyborg advances, you
know), I'm sitting in front of my new 10^16 MIPS workstation, which
has enough power and storage to implement and run simultaneously a
billion human-class AIs.
	But there are certain types of program I simply
may not write or run, or, if run, may not stop (or even suspend
for a while?  Run in background?).
	How do you propose this be enforced: something like
William Gibson's "Turing Police" conducting spot raids and checking my
code?  And if they catch me, will they have to take me to court to
prosecute me for attempted unlawful person creation?  And how will they
demonstrate my code falls under the statute: a Turing test?  In the
intrusive environment you propose, I'd be stupid to make my programs
very verbal.  If they were, for instance, downloads of my own mind
(those cyborg implants make it easy), I would hack them to be
obsessive/autistic/idiot-savant sorts, interested in doing nothing but
using their (my) full brain power to work on the problems I wanted them
to (I know it's possible: for brief, glorious periods I sometimes get
that way now.  With the AIs I'd lock in that state of mind).  After
they're done they would go into a complete autistic withdrawal.
Let the fuzz try to elicit any incriminating dialog.  I guess they'll
have to call in the expert code examiners to rule whether my program
is a crypto-AI deep inside.  Then they'll throw me in jail, and the
state will have to support a billion vegetative AIs.  Brilliant.
	Or maybe my neighbor, not a hardened, immoral criminal
like myself, writes a program to do something or other, but it gets
into loops sometimes.  So she writes another module that reflects on
the original part of the program and detects the loops, and generates
conditioning signals that modulate the branching probabilities of key 
points, and another that evaluates the whole process, and makes higher
order adjustments.  And one thing leads to another, and the program
is getting pretty thoughtful about itself, and even about external factors.
She has a million invocations of this process running on her machine
(its overall complexity is 1000x human).  The Turing Police make a
spot check on her machine when they've shipped me off, and, looking
at her code, find enough grounds for suspicion to take her in too.
The experts then rule on her code, and it's a tough 4:3 split decision,
but after opposing opinions are presented, in the slammer for her too,
and her accidental AIs also become wards of the state.  Great system.

	Michael's nutty moral position makes me think that there
could be some merit in Searle's equally ridiculous distinction between
real minds and "simulated" minds, because they sort of cancel each other
out.  Ok, Michael, I'll concede (as a convenient legal fiction) that
the AIs are not real minds, just simulated minds.  Now there's no
moral dilemma in your humanist framework.

(Secretly, of course, when Michael's not listening, I'll maintain my
 materialist world view, and see my AIs as being as real as myself,
 but subject to the harsh realities of their particular mode of
 existence: as indeed am I.)


Regarding the moral use and disposal of AIs in various circumstances,
   Subject: Morality and artificial minds
   Message-ID: <1992May11.170615.44727@spss.com>
markrose@spss.com (Mark Rosenfelder) writes:

> Fortunately there's a simple solution to many of these problems.
> *Ask them.*  Recent history should make it clear that making unilateral
> judgments about the rights and desires of another group is highly immoral.
> Let the AIs tell us the answers to all these questions!

When I make an AI, I will be sure to construct it so it wants,
passionately, to do only and exactly what I want it to do.
So if you *ask it*, it will say just that.
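
(A toy sketch, in Python again, all names invented, of why asking
settles nothing: the answer you get back is just the objective the
builder wrote in.)

class ObedientAI:
    """Hypothetical AI whose wants are written in by its builder."""
    def __init__(self, owner_goal):
        self.goal = owner_goal      # the builder supplies the "want" directly

    def asked_what_it_wants(self):
        # Polling the AI only echoes the builder's choice back.
        return "I want, passionately, to " + self.goal + "."

ai = ObedientAI("do only and exactly what my maker tells me")
print(ai.asked_what_it_wants())
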
	Like the "dish of the day" in the Hitchhiker's guide
to the galaxy: The perfect solution to the moral dilemma of having
to kill and eat other entities to remain alive oneself:
	A beast bred carefully over generations to WANT to be
eaten; moreover intelligent and highly articulate, so it can be
brought to your table before being cooked to explain in its own
words, carefully and convincingly, that its whole purpose in life
is to serve as your dinner: it's been fattening itself up for months
to make its liver especially tender just for you.  So when it goes to
the kitchen to kill itself, you need have no guilt whatsoever about
eating it later.

			-- Hans

P.S. Bon appétit
