Newsgroups: comp.ai
Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.edu!news2.near.net!MathWorks.Com!europa.eng.gtefsd.com!howland.reston.ans.net!usc!nic-nac.CSU.net!charnel.ecst.csuchico.edu!olivea!trib.apple.com!amd!amdahl!netcomsv!netcom.com!marcels
From: marcels@netcom.com (Marcel Schoppers)
Subject: Re: Asimov's Laws of Robotics
Message-ID: <marcelsCw4tCL.HGq@netcom.com>
Sender: Marcel Schoppers
Organization: Netcom Online Communications Services (408-241-9760 login: guest)
References: <1994Sep9.172050.15435@news.uta.edu> <1994Sep10.222618.29202@galileo.cc.rochester.edu> <1994Sep12.025753.27811@news.uta.edu>
Date: Wed, 14 Sep 1994 18:14:45 GMT
Lines: 46

Piotr Gmytrasiewicz wrote:
>
> Greg Stevens wrote:
>
> >>People simply don't optimize based on a probabilistically weighted
> >>sum of expected outcomes [...]
>
> This does not mean, however, that we cannot formulate a theory of how
> rational decisions should be made.  By robots or humans alike...

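(For reference, the "probabilistically weighted sum of expected outcomes"
under discussion is just expected utility.  A minimal sketch in Python, with
the actions, probabilities, and utilities invented purely for illustration:)

    # Expected utility: weight each outcome's utility by its probability.
    def expected_utility(outcomes):
        """outcomes: list of (probability, utility) pairs"""
        return sum(p * u for p, u in outcomes)

    # Invented numbers: two evasive actions with uncertain outcomes.
    options = {"swerve": [(0.9, -1.0), (0.1, -100.0)],   # EU = -10.9
               "brake":  [(0.5,  0.0), (0.5,  -5.0)]}    # EU =  -2.5
    best = max(options, key=lambda name: expected_utility(options[name]))
    # best == "brake": the weighted sum favors the lesser expected loss.
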
In an offline discussion I've been having with Philippe Morignot, Jan-Eric
Larsson, and Dan Shapiro, it was observed that Asimov's Laws were not meant
to make robots autonomously intelligent; on the contrary, they were meant
for human reassurance, to make robots obedient servants (Law #2) with some
built-in limits on what could be commanded (Law #1).  Under such a robot-wary
interpretation of the Laws, it is reasonable to distinguish what a robot may
do from the value of the end result: if a robot is caught harming humans,
human fear sets in, robots might be outlawed, and the future might be set
back by decades.

Even for the more modest goal of robotic servants, I suspect that Asimov's Laws
of Robotics can't be made reasonable if robots are restricted to reasoning only
with black-and-white features/symbols: not enough subtlety, e.g. in evaluating
expected harm.  (Yes, I realize that with this suspicion I am criticising my
own work to date.)

And even for the modest goal, I certainly don't believe that utility theory is
a solution, but for a different reason, as exemplified by the following sorts
of questions:
   1. Exactly how many robot lives equal one human bruise of what severity?
   2. Are your friends worth more to society as friends, or as food, or as
      saleable raw material, and what is the dollar value of each?
   3. What is the value to you of having lunch at noon instead of 1pm today?
Please answer the above not only with human short-sightedness, but with an eye
to the implications for the eternal future of all humanity.

It seems to me that to produce reasonable behavior there must be a combination
of incomparable priorities (you may not kill your friends no matter how much
their components are worth) and numerical weightings.  Everything we know now
(including the so-called "normative rational" utility theory) is a poor try.
Hence, I still think it's worth examining a specific example in some detail,
just to see if we can cast some light on what an ideal robot should do, and
maybe from there on what that robot should think.
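
A zeroth-order sketch of the combination I have in mind, in Python; the
constraint, the weights, and the candidate actions are all invented for
illustration:

    # Two-tier decision rule: hard constraints first, numbers second.
    def permissible(action):
        # Incomparable priority: no utility can buy this back.
        return not action["harms_human"]

    def choose(actions):
        candidates = [a for a in actions if permissible(a)]
        if not candidates:
            return None                    # refuse rather than trade off
        # Numerical weightings apply only among permissible actions.
        return max(candidates, key=lambda a: a["utility"])

    actions = [
        {"name": "sell friend as raw material", "harms_human": True,
         "utility": 1e6},
        {"name": "lunch at 1pm",  "harms_human": False, "utility": 2.0},
        {"name": "lunch at noon", "harms_human": False, "utility": 3.0},
    ]
    print(choose(actions)["name"])         # -> lunch at noon
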
Marcel
