From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue May 12 15:50:05 EDT 1992
Article 5522 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: AI failures
Organization: Department of Psychology, University of Toronto
References: <uc2m8INNn5d@early-bird.think.com> <1992May8.155052.13848@psych.toronto.edu> <uetinINNco5@early-bird.think.com>
Message-ID: <1992May10.003028.19333@psych.toronto.edu>
Date: Sun, 10 May 1992 00:30:28 GMT

In article <uetinINNco5@early-bird.think.com> moravec@Think.COM (Hans Moravec) writes:
>In article <1992May8.155052.13848@psych.toronto.edu>, michael@psych.toronto.edu (Michael Gemar) writes:
>|> moravec@Think.COM (Hans Moravec) writes:
> h> Soon after (AIs) are possible at all, they will be super plentiful
> h> ... it will be necessary to throw them away ...
> h> Otherwise the world will be up to its armpits in
> h> useless individuals
>
>|> Presumably similar arguments could have been made when slavery existed.
>|> "There are just too damn many of them - it is absolutely necessary to
>|> kill them when they're no longer needed.  Otherwise..."  For that
>|> matter, I see no reason why the same argument would not apply to 
>|> overpopulation in Third World countries.
>|> If you are going to adopt such a position for the sake of expediency, 
>|> you should realize just how radical the ethical implications are.
>|> I personally think that such a position is indefensible on *any* 
>|> ground other than sheer expedience, which is of course no *moral*
>|> reason at all.
>
>     Expediency and morality both arise from necessity, and are much less
>     different than you imagine.

Not according to most moral philosophers that *I've* read.  Which ones
are *you* referring to?

>     Morality is a name for a system of social conventions that modulates 
>     behavioral predispositions of individuals (and groups) in a way that 
>     (ideally) improves the group's well being.  I don't kill or steal from
>     my neighbor (even when I may be able to get away with it), and my
>     neighbor returns the courtesy, and we both benefit from the mutual
>     consideration, and so does the neighborhood.  When the physical ground
>     rules of existence change, though, so will the behaviors that
>     produce the maximum collective benefit.

True perhaps for some naive Utilitarian positions, but hardly the case for
Kantian ethics.

>     You imply that killing is somehow prohibited today.  There are many
>     situations (self defense, war, capital punishment, abortion) where
>     present society condones elimination of individuals that are judged
>     to have a negative net worth. 

You imply that societal decisions will always be moral ones.  What about
South Africa?  What about slavery in the U.S.?

> There are many other situations
>     (termination of medical or rescue efforts, limits on resources
>     expended in safety precautions, hazardous employment, limiting
>     immigration from third-world countries) where an increased chance
>     of death is condoned.  It would be absolutely necessary to create
>     many more such situations if individuals didn't have the good grace
>     to die of their own accord of old age.
>     In fact, it will be.
>
> h> I can see the same thing happening in real life. 
> h> Putting an AI program into inactive "suspended animation" is
> h> surely ok.
>
>|> Even if it doesn't want to go?  How would *you* feel if your
>|> employer said, "Well, Hans, we don't need you now, so we're going
>|> to put you to sleep for an indefinite period."? 
>
>     If my employer were also my creator, and my sole means of support
>     (I resided in my employer's body, as it were), then I am a component
>     of my employer, and it is my employer's business how much, if any, of
>     its limited resources it should grant me.
>     If I were a self-supporting entity, then it would be a matter for
>     negotiation.  As it is in today's world.

Well, your parents were *your* creator, and presumably until you were
about 18 your sole means of support.  Does this mean that infanticide
is "moral"?

>
> h> But then there will come a time when storage space is
> h> low, and someone notices that the file Moriarity.ai is taking
> h> up 10 terabytes, and hasn't been accessed in five years.
>
>|> ...or that Hans Moravec's cryogenic sleeper is taking up space, and
>|> he hasn't been needed in five years... 
>|> So, after broadcasting "does anyone need Moravec?" and receiving no
>|> positive responses, the sleeper manager dumps out the corpse.  Maybe
>|> some useful organs are scavenged.  Just good housekeeping...
>
>     If my estate hasn't been paying the rent, then this is exactly the
>     right answer.

And so presumably those who are using heart-lung machines or dialysis
machines who go into arrears can *morally* be unplugged...


>  In fact, it's exactly what happens today, to people
>     in cryonic suspension, and also to people who need impossibly expensive
>     medical procedures to continue to live. 
>
> h>  Some day human minds may be copied as easily as AIs.
> h>  When we grow new minds as easily as our bodies grow new cells, we must
> h>  also be prepared to destroy old minds as our bodies destroy old cells.
> h>  The alternative is suffocation.
>
>|> And when we can grow bodies as easily as we grow new cells, the same
>|> would also apply, I suppose.
>|> I find such speculation yet another indication that AI folks don't
>|> *really* think that what they're doing is creating *REAL* minds, entities
>|> that are equivalent to humans mentally.  If they did, I don't see how
>|> they could possibly suggest such things as the above...
>|>                    - michael
>
>     And I find your comments childishly naive.  Placing an effectively
>     infinite value on a self-aware entity's existence is a convenient
>     counterbalancing fiction in a world where tribally-forged instincts
>     are to value a stranger's life very little. 

"Tribally-forged instincts"?!  You've been reading too much sociobiology.  
And whatever our *instincts* may (or may not) be, this tells me *nothing*
about what our morality should be - unless you believe that it should
simply follow our instincts, which, in other words, denies a place for
morality at all.

>   It is a fiction that
>     can be maintained most of the time because people die anyway, so
>     it's usually easiest to just wait, and because maintaining even an
>     unproductive person is relatively cheap, and there are not too many
>     of them.  This fiction will break down when individuals become
>     potentially immortal, or when they can be reproduced (reproduce
>     themselves) cheaply in quantity. 
>       Biological evolution solved the problem of providing room for new
>     (sometimes improved) individuals by giving us a prearranged death
>     by old age.  If we change the rules, we will have to provide a
>     substitute solution, because the problem will remain. 

Well, Hans, the solution we use with *people* now is simply to *not
produce them*.  This is the suggested method for dealing with the
problems of the Third World, rather than letting people
overpopulate and then starve.  I see no reason why the production of
artificial people should not be governed by the same moral code.

I am still interested in hearing from other AI supporters.  Do you
all agree with Hans that morality is simply not a problem for
artificial minds?  


- michael
