Article 5581 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: AI failures
Organization: Department of Psychology, University of Toronto
References: <uetinINNco5@early-bird.think.com> <1992May10.003028.19333@psych.toronto.edu> <umpm0INNpv8@early-bird.think.com>
Message-ID: <1992May12.154318.1212@psych.toronto.edu>
Date: Tue, 12 May 1992 15:43:18 GMT

Once more unto the breach...
(My apologies for the extended quotations)


In article <umpm0INNpv8@early-bird.think.com> moravec@Think.COM (Hans Moravec) writes:
>In article <1992May10.003028.19333@psych.toronto.edu>, michael@psych.toronto.edu (Michael Gemar) writes:
>|> 
>|> Well, Hans, the solution we use with *people* now is simply to *not
>|> produce them*.  This is the suggested method for dealing with the
>|> problems of the Third World, rather than letting people
>|> overpopulate and then starve.  I see no reason why the production of
>|> artificial people should not be governed by the same moral code.
>|> 
>
>	So let's see if I have this proposal right: In the privacy of
>my own bedroom, in the year 2090 (biotech and cyborg advances, you
>know), I'm sitting in front of my new 10^16 MIPS workstation, which
>has enough power and storage to implement and run simultaneously a
>billion human-class AIs.
>	But there are certain types of program I simply
>may not write or run, or if run, may not stop (or even suspend
>for a while?  Run in background?).

Yes.

>	How do you propose this be enforced: something like
>William Gibson's "Turing Police" conducting spot raids and checking my
>code?  And if they catch me, will they have to take me to court to
>prosecute for attempted unlawful person creation?  And how will they
>demonstrate my code falls under the statute: a Turing test?

Enforcement is a separate issue.  I am not talking about legality;
I am talking about morality.

>  In the
>intrusive environment you propose, I'd be stupid to make my programs
>very verbal.  If they were, for instance, downloads of my own mind
>(those cyborg implants make it easy), I would hack them to be
>obsessive/autistic/idiot-savant sorts, interested in doing nothing but
>using their (my) full brain power to work on the problems I wanted them
>to (I know it's possible: for brief, glorious periods I sometimes get
>that way now.  With the AIs I'd lock in that state of mind).  After
>they're done they would go into a complete autistic withdrawal.

Hmm...we can always use a few more strong backs, too.  Why not genetically
engineer *humans* to be like this?  Like _Brave New World_?

>Let the fuzz try to elicit any incriminating dialog.

"Youse can't prove nuttin', copper."  I guess if the police can't catch you,
then an action is morally right, eh?

>  I guess they'll
>have to call in the expert code examiners to rule whether my program
>is a crypto-AI deep inside.  Then they'll throw me in jail, and the
>state will have to support a billion vegetative AIs.  Brilliant.
>	Or maybe, my neighbor, not a hardened, immoral criminal
>like myself, writes a program to do something or other, but it gets
>into loops sometimes.  So she writes another module that reflects on
>the original part of the program and detects the loops, and generates
>conditioning signals that modulate the branching probabilities of key 
>points, and another that evaluates the whole process, and makes higher
>order adjustments.  And one thing leads to another, and the program
>is getting pretty thoughtful about itself, and even external factors.
>She has a million invocations of this process running in her machine
>(its overall complexity is 1000x human).  The Turing Police make a
>spot check on her machine when they've shipped me off, and, looking
>at her code, find enough grounds for suspicion to take her in too.
>The experts then rule on her code, and it's a tough 4:3 split decision,
>but after opposing opinions are presented, in the slammer for her too,
>and her accidental AIs also become wards of the state.  Great system.

"Intent" is an established concept in our current legal system.  Just
as manslaughter is not murder, and does not carry the same penalties,
so presumably would there be a difference between intentionally creating
and killing AI's and doing so accidentally. 
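
(As an aside, the self-monitoring arrangement you describe, with one
module reflecting on another to detect loops and condition its
branching probabilities, is easy enough to sketch in miniature.  Here
is a toy version in Python; every name and number in it is my own
invention for illustration, and none of the interesting complexity
survives:

    import random

    # A toy of the scheme described above: a "reflecting" monitor
    # counts how often the base process revisits a state, and that
    # count acts as a conditioning signal shifting the branching
    # probability away from the repeated state.

    class LoopMonitor:
        def __init__(self):
            self.visits = {}

        def signal(self, state):
            # Reflect on the base process: count repeat visits.
            self.visits[state] = self.visits.get(state, 0) + 1
            return self.visits[state]

    def run(steps=100):
        monitor = LoopMonitor()
        state = 0
        for _ in range(steps):
            repeats = monitor.signal(state)
            # Conditioning signal: repeated visits push the branching
            # probability toward leaving the state; the cap is a
            # crude "higher order adjustment".
            p_advance = min(0.95, 0.5 + 0.05 * (repeats - 1))
            if random.random() < p_advance:
                state = (state + 1) % 10  # branch to the next state
            # else: stay put, the sort of loop the monitor catches
        return monitor.visits

    print(run())

None of which, of course, gets anywhere near "1000x human"
complexity.)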

>	Michael's nutty moral position makes me think that there
                  ^^^^^
Come, come, Hans: I'm trying to be civil; the least you can do is the same.
In any case, I get the feeling that I am not alone in my stand, and that
*yours* may be the minority view, for whatever that's worth.

>could be some merit in Searle's equally ridiculous distinction between
>real minds and "simulated" minds, because they sort of cancel each other
>out.  Ok, Michael, I'll concede (as a convenient legal fiction) that
>the AIs are not real minds, just simulated minds.  Now there's no
>moral dilemma in your humanist framework.
                       ^^^^^^^^

When did "humanist" become an insult?  (Was it around the same time this
happened to "liberal"?)

>(Secretly, of course, when Michael's not listening, I'll maintain my
> materialist world view, and see my AIs as being as real as myself,
> but subject to the harsh realities of their particular mode of
> existence: as indeed am I.)

Hans, you don't have to be secretive about this, since *I'm* a materialist,
and agree that if AIs have minds like yours and mine, they should be
subject to the realities of their mode of existence, as indeed you are.
Where we differ is in what the acceptable actions *within* that mode of
existence are.

>Regarding the moral use and disposal of AIs in various circumstances,
>   Subject: Morality and artificial minds
>   Message-ID: <1992May11.170615.44727@spss.com>
>markrose@spss.com (Mark Rosenfelder) writes:
>
>> Fortunately there's a simple solution to many of these problems.
>> *Ask them.*  Recent history should make it clear that making unilateral
>> judgments about the rights and desires of another group is highly immoral.
>> Let the AIs tell us the answers to all these questions!
>
>When I make an AI, I will be sure to construct it so it wants,
>passionately, to do only and exactly what I want it to do.
>So if you *ask it*, it will say just that.
>	Like the "Dish of the Day" in _The Hitchhiker's Guide
>to the Galaxy_: the perfect solution to the moral dilemma of having
>to kill and eat other entities to remain alive oneself:
>	A beast bred carefully over generations to WANT to be
>eaten; moreover intelligent and highly articulate, so it can be
>brought to your table before being cooked to explain in its own
>words, carefully and convincingly, that its whole purpose in life
>is to serve as your dinner: it's been fattening itself up for months
>to make its liver especially tender just for you.  So when it goes to
>the kitchen to kill itself, you need have no guilt whatsoever about
>eating it later.
>


Goodness, Hans, I made the same suggestion and referred to the same
passage in an earlier posting of my own.  Great minds do think alike...

However, I want to know if you feel such an action in the case of
cows *is* moral.  Heck, let's make it really black and white, and
say that a race of technologically advanced cannibals has genetically
engineered a race of *humans* to want to be eaten.  Is *this* "right"?
*I* sure have problems with this...but I guess you figured that already.


I think that our disagreement is not so much over ethics involving
AIs as it is over ethics in general.  If so, so be it - but it is
perhaps no longer appropriate for us to be discussing it here in
comp.ai.philosophy.


- michael