Newsgroups: comp.ai
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!gatech!howland.reston.ans.net!news.nic.surfnet.nl!sun4nl!cs.vu.nl!embronne
From: embronne@cs.vu.nl (Bronneberg EM)
Subject: Re: Opinions about A.I. in society
Nntp-Posting-Host: galjoen.cs.vu.nl
References: <3oaor7$kaj@host.di.fct.unl.pt>
Sender: news@cs.vu.nl
Organization: Fac. Wiskunde & Informatica, VU, Amsterdam
Date: Thu, 4 May 1995 18:15:29 GMT
X-Newsreader: TIN [version 1.2 PL2]
Message-ID: <D82G31.D0G.0.-s@cs.vu.nl>
Lines: 64

Paulo JML Pinto - Aluno Eng. Informatica (pjmlp@students.fct.unl.pt) wrote:
:     I would like to get some opinions about using A.I. in society, for
: example :

:     a) If we succed in creating a machine, a robot for example, and if he is
: capable of thinking and acting like a human, what are his civil rights,
: could we give them to him ?

Of course we could; the better question is whether we should. If this machine
really acted like a human, we would have to assume it had feelings just like
us, since there is no way to prove the opposite, and the only reason we think
other humans have feelings is that we know we have them ourselves. For this
reason I think we would have to give such machines the same rights as humans.

:     b) If such a robot is created and has civil rights and for some obscure
: reason goes nuts and kills someone, who should we blame ? The programmer,
: the hardware maker, the person who made him kill someone or the society ?

For some reason or another you left out one possibility: the machine itself.
A machine like this would most probably have to be educated, even `brought
up', by people or other machines. This could go wrong and turn the machine
into a psychopath, but we don't usually punish the parents of a murderer; we
punish the murderer. So perhaps we should punish the machine.
Another option is that there is an error in the hardware or software. If this
is a fundamental error (caused by the production process or a mistake in the
design), the `builders' would have to be punished. If the error occurred
because of some accident, we would have the same problem as with mentally ill
people; mostly we punish them, so in this case we would have to punish the
machine.
If some person or machine made the machine kill someone, the same options as
above apply to the killer, and the person or machine that made it kill would
be an accomplice.

:     c) What about using A.I. systems (neural networks, expert systems, etc)
: to replace humans in their work, such as doctors?

Perhaps this is even more difficult. A `simple' expert system will not be
considered to have feelings, or to be intelligent. Therefore you can never
blame the system if it makes a mistake, and it becomes a real problem who is
to blame. You can't punish the programmer for failing to deliver a system
that never makes mistakes: human doctors make mistakes, so expert systems
(or neural nets or whatever) will make mistakes, too. It would also not be
very helpful to punish the system for carelessness. So all we have left to
punish is the hospital. But a hospital doesn't want to be punished for having
a system that isn't 100% perfect, so it will probably never let one
completely take over from a doctor. Doctors are human and therefore allowed
to make a few mistakes, and the hospital won't be punished for having a
doctor who isn't 100% perfect.
Concerning employment: perhaps machines can do a lot of work that humans
don't want to do, so let them do it. You just have to make sure that
everybody who wants to work is able to (which isn't the case right now, I
guess, but anyway... :-) and that everybody who doesn't have a job still has
a decent income. But that sounds too much like a utopia to really happen, so
this is a pretty complicated subject too, since machines are much cheaper
employees than human beings.

:     Portugal,
:     Paulo Pinto

:     pjmlp@students.fct.unl.pt

Greetings from Miel Bronneberg,
embronne@cs.vu.nl
