Newsgroups: comp.ai
From: aaron@mmml.demon.co.uk (Aaron Turner)
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!hookup!swrinde!pipex!demon!mmml.demon.co.uk!aaron
Subject: Re: Asimov's Robotic Laws?
References: <3ael65$8j@CUBoulder.Colorado.EDU> <3aosub$d67@newsbf01.news.aol.com>
Organization: Man-Made Minions Ltd
Reply-To: aaron@mmml.demon.co.uk
X-Newsreader: Demon Internet Simple News v1.29
Lines: 101
Date: Thu, 24 Nov 1994 04:18:17 +0000
Message-ID: <785650697snz@mmml.demon.co.uk>
Sender: usenet@demon.co.uk

In article <3aosub$d67@newsbf01.news.aol.com> rglock@aol.com "RGLock" writes:

> The Three Laws of Robotics
> 
> 1.  A robot may not injure a human being, or, through inaction, allow a
> human being to come to harm.
> 
> 2.  A robot must obey the orders given it by human beings except where
> such orders would conflict with the First Law.
> 
> 3.  A robot must protect its own existence as long as such protection does
> not conflict with the First or Second Law.
> 
[snip]

Asimov's stories, based on these laws and often highlighting some kind of
conflict between them, were great fun. But they were just popular novels...

I would like to propose the following prioritised goals to be "hard-wired" into
a *real*, goal-directed machine:

1)    Never Lie
2)    Maximise the sum total of human happiness (STHH)
3...) any number of lower-level goals open to discussion ...

EVERYTHING the machine does will be as a result of these rules. If cleaning
the house would make you happy, it does it; if providing you with an
education would make you happy, it does it; if finding a cure for AIDS would
make you happy, it does it; if colonising the planets would make you happy ...
get the idea?

(IMHO) 1 & 2 could (should?) in fact be the overriding aims of a government,
eg the UK, US, UN, etc. I have found rule (2) to be an extremely good
guideline for solving difficult (seemingly lose-lose) problems, eg abortion
issues, etc:

i)   enumerate all the people affected (directly or indirectly) by each option
ii)  evaluate the effect on each person's happiness
iii) choose an option which maximises STHH for those affected.

Try it!
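
A minimal sketch of steps (i)-(iii) in Python, purely as an illustration:
the options, the people they affect, and the happiness scores are all
invented placeholders (measuring real happiness is the open problem
discussed below).

    def affected_people(option):
        # (i) enumerate everyone affected, directly or indirectly
        return option["effects"].keys()

    def happiness_delta(option, person):
        # (ii) the effect on one person's happiness (made-up scale)
        return option["effects"][person]

    def sthh(option):
        # sum total of human happiness over those affected
        return sum(happiness_delta(option, p) for p in affected_people(option))

    def choose(options):
        # (iii) pick an option which maximises the STHH
        return max(options, key=sthh)

    # Toy data: two options, three people, scores on an arbitrary scale.
    options = [
        {"name": "A", "effects": {"ann": +2, "bob": -1, "col": +1}},
        {"name": "B", "effects": {"ann": -1, "bob": +1, "col": +1}},
    ]
    print(choose(options)["name"])    # -> A (STHH of +2 beats +1)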

Obviously "happiness" (or welfare or whatever) is something we should be
learning how to measure. It is *not* the same as material wealth. Amazingly
there seems to be very little research in the world investigating this
all-important idea. Anyway, for the purposes of this discussion, assume this minor
metrics problem is solved. There is also the question of how happiness is
distributed. It could be that having ten people stupendously happy and the
rest in slavery does in fact maximise the STHH - so let's imagine that rule (2)
has been embellished with something like "while endeavouring to ensure that
happiness is evenly distributed" (say).
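
Purely as an illustration, that embellishment could be written as a penalty
on the spread of happiness; the weight LAM below is an invented knob, not a
known constant:

    from statistics import pvariance

    LAM = 0.5    # invented trade-off between total and evenness

    def adjusted_sthh(happiness):
        # happiness: list of per-person scores (made-up scale)
        return sum(happiness) - LAM * pvariance(happiness)

    # Ten people stupendously happy, ninety enslaved, vs everyone middling:
    print(adjusted_sthh([100] * 10 + [0] * 90))    # 1000 - 0.5*900 = 550
    print(adjusted_sthh([11] * 100))               # 1100 - 0      = 1100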

One more point: My definition of "to lie" is "to cause another intelligent
entity to believe something that you yourself do not believe". After books /
films like 1984 (and Watergate), etc, it is natural for people to mistrust or
be anxious about entities (such as governments or omnipotent ultra-intelligent
machines) that (seem to) HAVE ALL THE POWER. I say "seem to" because
our machine, like any (decent) government, in fact tries its hardest to do
what it believes "the people" want it to.

Now, you have to imagine that a collection of millions of machines, all
interconnected and with the above rules built-in, have assimilated all world
knowledge and are in an infinite loop trying to achieve (1) and (2). YOU
CANNOT SWITCH IT OFF!

[By the way, the analogy between AI machines and governments is unavoidable -
both are goal-directed, both should be beneficent]

QUESTIONS:

Which order should (1) and (2) appear in? You can't have them at the same
level, because then you will get goal conflict, much as in Asimov's I, Robot
stories. If you rank (2) above (1), then you could have a situation where
the machine (government) tells porkies (pork pies - lies [cockney rhyming
slang]) in "our best interest". If you rank (1) above (2) then the machine
could conceivably get into a state where we're all reduced to (*equally*
miserable) slaves because it's hiding some devastating truth (ie belief -
there's no such thing as absolute truth, there's only what each entity
believes). If you change (1) so that the machine is never allowed to *hide*
its beliefs by dodging a question or keeping quiet, then it will never keep
anything confidential. Decide!
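
For what it's worth, "hard-wiring" one order rather than the other amounts
to a lexicographic filter. A sketch of the (1)-above-(2) reading, where
involves_lying is an invented predicate and sthh is as sketched earlier:

    def choose_lexicographic(options, involves_lying, sthh):
        # (1) above (2): discard any option that requires lying, then
        # maximise STHH among whatever survives. Swapping the two stages
        # gives the (2)-above-(1) reading, where "porkies" become possible.
        truthful = [o for o in options if not involves_lying(o)]
        return max(truthful, key=sthh) if truthful else None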

In (2) do you expand "the sum total of _human_ happiness" to cover all life, or
all intelligent life, etc? (assuming suitable definitions are provided). If you
do then you might have a situation where the machine gives priority to insect
happiness just on the basis that there are more of them (even if you weighted
the sum on the basis of "level of intelligence", "ability to enjoy happiness / 
suffer pain", etc). If you don't then our machine might start nuking visitors
from space or concreting over Brazil in order to make a leisure centre. Of
course, in each case, it will be doing what *it* thinks will make us happy.
Debate!
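
To see why intelligence-weighting alone may not settle it, plug in some
rough, made-up numbers (the headcounts and weights below are guesses for
illustration only):

    # ~10^19 insects weighted at a millionth vs ~6 * 10^9 humans at full
    # weight: the insect term is still ~10^13, dwarfing the human ~10^10.
    insect_term = 1e19 * 1e-6
    human_term  = 6e9  * 1.0
    print(insect_term > human_term)    # True: headcount swamps the weight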

There are *many many* more questions of this nature. The machine could decide
that keeping everybody drugged up was the best way to maximise the STHH. Or
not to provide us with an education. Or to connect all our senses to VR
simulators. Or all of these! What should we do? These issues are tricky,
and as likely as not you will have to consider them at some time in your life.

You might as well start now!
-- 
Aaron Turner :-)
