Newsgroups: comp.ai
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!demon!news
From: angus@aegypt.demon.co.uk (Angus McIntyre)
Subject: Re: I kill therefore I am
Message-ID: <AAE703CA966891EB@aegypt.demon.co.uk>
Sender: news@demon.co.uk (Usenet Administration)
Nntp-Posting-Host: aegypt.demon.co.uk
Organization: Rev'd Jack's Roamin' Cadillac Church
X-Newsreader: NewsHopper (Chocolate 1.0b17)
References: <39jme8$72p@beta.qmw.ac.uk> <39lg0b$k3s@newsbf01.news.aol.com>
Date: Wed, 9 Nov 1994 22:55:06 GMT
Lines: 66

In article <39lg0b$k3s@newsbf01.news.aol.com>,
rahill@aol.com (RAHill) wrote:

>In article <39jme8$72p@beta.qmw.ac.uk>, spencer2@vaxb.mdx.ac.uk writes:
>
>>And what I wanted to know was your idea on how you would incorporate a
>>safety system so that it wasn't able to commit murder.
>
>The inability to commit murder implies an ability to determine if an
>object is 'human'.  If you think about it, that's a really tough problem
>for a computer.

I would have thought that an *inability* to determine if an object was
human would be more likely to lead to murder, or at least what is
known over here as 'manslaughter' ("Oh, I'm sorry - I didn't realise
s/he was sentient ..."). If you assume that 'murder' specifically
must involve intentionality (you don't often read, for example, "The
runaway truck rushed down the hill and murdered the occupants of the
houses at the bottom"), you have a nice basis for a particularly
macabre version of the Turing Test: if the machine believes that it
has committed murder, then it may well be intelligent; if humans
believe that the machine has committed murder, then it has been
*accepted* as being intelligent.

As to the question of how to prevent machines from committing murder,
as a programmer and a computer user I'd feel very wary about a
machine that was simply 'programmed' not to commit murder:

    (not (shalt (kill thou ?X)))
    
Even leaving aside the likelihood of software or hardware glitches, I
would have thought that preventing a particular behaviour 'by design'
in a system which operates in a complex environment would be next to
impossible. There will always be cases you haven't considered.
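
To make that concrete, here's a toy sketch (in Lisp, since that's the
idiom above; every name in it is invented on the spot, and it's no more
than hand-waving in parentheses):

    ;; A 'fiat' prohibition: refuse any action the guard classes as killing.
    (defun human-p (target)
      "Stand-in for the genuinely hard problem of recognising a human."
      (eq target 'human))

    (defun looks-like-killing-p (action target)
      "Naive guard: catches only the cases the programmer thought of."
      (and (member action '(shoot stab crush))   ; finite, hand-written list
           (human-p target)))

    (defun permitted-p (action target)
      "Roughly (not (shalt (kill thou ?X)))."
      (not (looks-like-killing-p action target)))

    ;; (permitted-p 'shoot 'human)              => NIL  blocked, as intended
    ;; (permitted-p 'vent-coolant-onto 'human)  => T    oops

The unconsidered case sails straight through the guard, and extending
the list just moves the gap somewhere else.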

I suspect that the best way is to give them something to lose if they
go against the norms of acceptable behaviour. You might present the
threat of a grand sanction such as the death penalty (the difference
between machines and humans is that to execute a machine you turn the
electricity OFF!) or, more practically, engineer things in such a way
that deviant behaviour produces feelings of 'tension', reduces the
rewards received by the system, and so on.
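
In the same hand-wavy spirit (numbers and names invented), the sanction
might be no more than a shaped reward - deviance raising a lingering
'tension' that depresses everything the system earns afterwards:

    (defparameter *tension* 0.0
      "Accumulated discomfort; deviant acts raise it, time lowers it slowly.")

    (defun shaped-reward (raw-reward deviance)
      "Apply the sanction.  DEVIANCE runs from 0.0 (acceptable) to 1.0."
      (setf *tension* (+ (* 0.95 *tension*) deviance))  ; decays only slowly
      (- raw-reward (* 10.0 *tension*)))                ; ...and eats the payoff

A deviant act might still pay off once, but the hangover means the
machine really does have something to lose.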
    
I think your best defence against artificial homicide would be a
prohibition against killing that forms part of a consistent and
learned set of beliefs which the machine acquires through social
interaction with its human and non-human peers. Granted, this isn't
perfect in humans, but I think that a layered set of interconnected
behaviours is likely to be more robust than some absolute 'fiat'
imposed by the programmer. Morality is evolutionary.
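
One last sketch, with the heaviest hand-waving of the lot: the
prohibition as a set of learned weights, nudged by how the machine's
peers react, rather than a single rule. Again, every name below is
imaginary:

    (defparameter *norms* (make-hash-table :test #'eq)
      "Strength of each prohibition, acquired through interaction.")

    (defun observe-reaction (norm approval)
      "Nudge a norm's strength by peer reaction; negative means censure."
      (incf (gethash norm *norms* 0.0) (* 0.1 approval)))

    (defun relevant-norms (action)
      "Toy mapping from an action to the norms that bear on it."
      (case action
        ((shoot stab crush) '(no-killing no-harming))
        ((deceive)          '(no-deceiving))
        (t                  '())))

    (defun inhibition-against (action)
      "Overall reluctance: the sum of every learned norm the action touches."
      (reduce #'+ (mapcar (lambda (n) (gethash n *norms* 0.0))
                          (relevant-norms action))))

Knock any single norm out and the others still bear on the action, which
is roughly what I mean by a layered set of behaviours being more robust
than a fiat.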

Maybe this belongs more on comp.ai.philosophy ... or perhaps on
comp.ai.handwaving ...

                            A

                                                


x--------------------------------------------------------------------x
|   angus@aegypt.demon.co.uk    http://www.tardis.ed.ac.uk/~angus/   |
|--------------------------------------------------------------------|
|   "I am here by the will of the people ... and I will not leave    |
|    until I get my raincoat back." ['Metrophage', Richard Kadrey]   |
x--------------------------------------------------------------------x

