From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sol.ctr.columbia.edu!src.honeywell.com!saifr00.cfsat.honeywell.com!petersow Mon May 25 14:05:25 EDT 1992
Article 5654 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sol.ctr.columbia.edu!src.honeywell.com!saifr00.cfsat.honeywell.com!petersow
From: petersow@saifr00.cfsat.honeywell.com (Wayne Peterson)
Subject: Re: AI failures
Message-ID: <1992May14.162548.13415@saifr00.cfsat.honeywell.com>
Organization: Honeywell Air Transport Systems Division
References: <1992May13.044532.3389@ccu.umanitoba.ca> <1992May13.164932.9954@saifr00.cfsat.honeywell.com> <1992May14.030934.22659@ccu.umanitoba.ca>
Date: Thu, 14 May 92 16:25:48 GMT
Lines: 71

WP = Wayne Peterson
AZ = Anton Zirnum

WP
>>Death seems to be a sure thing.

AZ
>Seems to be, BUT is not! Statistics are on my side:
>a great percentage of the people that were ever alive
>are alive today! (I think it's even greater than 50%.)
>The world pop. is about 5 billion, and climbing; death
>has yet to catch up.

WP
I had heard it was more like 5% of all people who have ever lived
who are alive today.  Of course this comes from the usual wild
extrapolation of scientists.  How do they know how many people
have ever lived on earth?  But this is beside the point.
Anton, I hate to be the bearer of bad news, but you will die, and
it could happen any day.

WP
>>What do we compromise?  What do we give up? For what ends?
>>Should we think ahead to death?

AZ
>Since we don't know the moment of death, or the possibilities
>if we prolong it, it is pretty meaningless to look ahead to
>death.

WP
Quite the contrary: since I will die (perhaps having cancer makes this
event seem more real), and since everything here will have no
meaning to me at that time (not even my wife), it seems much more
meaningful for me to prepare for that day.  I don't pretend
to know what happens.  There is either an afterlife or not.  If not,
then I will cease to exist and nothing matters.  If so, maybe I will
get to be in the company of great sages like Socrates and Kabir.
Meanwhile I will continue to try to develop my consciousness, for
if anything continues it would have to be that.  The grimmest idea
to me would be coming back to this world again; now that is a
depressing thought.

I have a simple morality.  It is that I am responsible for my own
actions, not society, not my parents, not government, not the law.
That does not mean that I can control my own mind; that is a
constant struggle with the passions and with ignorance.  But I
believe that I have put myself here, and with God's help I
must get myself out.

What does all of this have to do with AI?  Everything.  Where does
the responsibility of the intelligent machine lie?  If a machine
decides that 5 billion people on earth is too many, that it should be
more like 5 million, is the machine responsible for the effect of
that decision?  Could machines become a convenient way for us to
escape responsibility, just as government and religion have been in
the past?  How could you make a machine responsible?  Will it value
humans?  Will it value all humans equally?  Will it have power?  If
you think this is premature, look at the authorizer and credit assistants
at American Express.  They make decisions on approving card members and
authorizations.  Can these programs discriminate against blacks,
women, AIDS patients?  How about decisions by neural nets?  Can we
hide behind them and escape responsibility?  If an expert system fails,
does the responsibility lie with the expert or the programmer (excuse me,
knowledge engineer)?  Does an intelligent machine need to compromise?

Regards,
Wayne Peterson
"Be here, now" ... Baba Ram Das




