From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!ccu.umanitoba.ca!zirdum Mon May 25 14:05:57 EDT 1992
Article 5713 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!ccu.umanitoba.ca!zirdum
From: zirdum@ccu.umanitoba.ca (Antun Zirdum)
Newsgroups: comp.ai.philosophy
Subject: Re: AI failures
Message-ID: <1992May18.054633.14656@ccu.umanitoba.ca>
Date: 18 May 92 05:46:33 GMT
References: <1992May13.164932.9954@saifr00.cfsat.honeywell.com> <1992May14.030934.22659@ccu.umanitoba.ca> <1992May14.162548.13415@saifr00.cfsat.honeywell.com>
Organization: University of Manitoba, Winnipeg, Manitoba, Canada
Lines: 113

In article <1992May14.162548.13415@saifr00.cfsat.honeywell.com> petersow@saifr00.cfsat.honeywell.com (Wayne Peterson) writes:
>WP = Wayne Peterson
>AZ = Antun Zirdum
>
>WP
>>>Death seems to be a sure thing.
>
>Anton I hate to be the bearer of bad news, but you will die, and
>it could happen any day.
>
(for PHIL-AI followers, you may skip the next paragraph of
mad ramblings)
We are getting slightly off topic - but - I KNOW! I will
die, but I have no problem with that. As a matter of fact,
every minute that you live, another little part of you dies.
Are we the same person from day to day, or do we change, and
if we do change, *where* does the person from before go?
Do you ever forget things? Do you ever hurt yourself physically?
	I have no problem with MY death, as I view myself
as part of a world community; frankly, it's just a
drop in the bucket. The world continues without me, and everything
that I could have experienced or discovered will be experienced
and discovered in due time (without my help).

>>Since we don't know the moment of death, or the possibilities
>>if we prolong it, it is pretty meaningless to look ahead to
>>death.
>
>WP
>Quite the contrary, since I will die (perhaps having cancer makes this
>event seem more real), and since  everything here will have no
>meaning to me at that time (not even my wife), it seems much more

I am sad to hear that you have cancer. (I have been
close to death myself, and it certainly does seem a
whole lot more real to me than to other people I have met!)

>meaningful for me to prepare for that day.  I don't pretend
>to know what happens.  There is either an afterlife or not.  It not

What I meant was that, since you cannot know the moment
of your own death (unless you choose it, and not even that
is a sure thing!), you are much better off hedging your bets
that you will be around a little while longer. For example,
if I had good reason to believe that I would die tomorrow
(and if I had no family), morality goes out the window -
I would want everything now, no matter what stands in the way.
But if I did not die, then I would suffer the consequences of
my actions. (Thus humans, being the funny creatures they are,
fall on the optimistic side, and even when faced with evidence
of their impending death, tend not to believe it. This of
course is a built-in survival mechanism of evolution;
society could not exist for long without this mechanism!)
>
>
>What does all of this have to do with AI?  Everything.  Where does
>the responsibility of the intelligent machine lie?  If a machine
>decides that 5 billion people on earth is too much, that it should be
>more like 5 million, is the machine responsible for the effect of
>that decision?  Could machines become a convenient way for us to
>escape responsibility, just as government and religion have been in
>the past?  How could you make a machine responsible?  Will it value

Machines are held responsible all the time. If your
car veers off the road and kills 10 people walking
on the sidewalk, and it is determined that the
driver was not at fault (but the car's steering column
was worn out and broke), then it is obvious that the
car and not the person driving is at fault! The car
is then either repaired (rehabilitated) or scrapped
(executed).
	This is just one of many cases where machines
are held responsible by humanity. It is a fairly
simple matter to extend this to AI programs!

>humans? Will it value all humans equally? Will it have power?  If
>this seems premature, look at the authorizer and credit assistants at
>American Express.  They make decisions on approving card members and
>authorizations.  Can these programs discriminate against blacks,
>women, AIDS patients?  How about decisions by neural nets?  Can we
>hide behind them and escape responsibility?  If an expert system fails,
>is the responsibility on the expert or the programmer (excuse me, knowledge
>engineer)?  Does an intelligent machine need to compromise?

To the first two questions - YES. Programs do not
discriminate (the way people do), but AI programs
can and should be given full responsibility for
their decisions.
	Look at it this way: up to a point,
parents are responsible for their children's actions.
After the children have proven themselves capable of
handling responsibility consistently, the children
have earned the right (curse?) of responsibility!
Then who should be responsible for an action - the
parent (provided the parent attempted to provide
a good environment and teach responsibility) or
the child? My vote goes to the CHILD.
	If a program has been tested to a reasonable
degree (depending on the situation) by its
designers, then IT should be given responsibility!
>
>Regards,
>Wayne Peterson
>"Be here, now" ... Baba Ram Das
>
>


-- 
*****************************************************************
*   AZ    -- zirdum@ccu.umanitoba.ca                            *
*     " The first hundred years are the hardest! " - W. Mizner  *
*****************************************************************


