Newsgroups: comp.ai
Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.edu!news2.near.net!yale!zip.eecs.umich.edu!newsxfer.itd.umich.edu!gatech!howland.reston.ans.net!cs.utexas.edu!news.uta.edu!cse.uta.edu!piotr
From: piotr@cse.uta.edu (Piotr Gmytrasiewicz)
Subject: Re: Asimov's Laws of Robotics
Message-ID: <1994Sep17.002739.24860@news.uta.edu>
Sender:  Piotr Gmytrasiewicz 
Nntp-Posting-Host: cse.uta.edu
Organization: Computer Science Engineering at the University of Texas at Arlington
References: <1994Sep10.222618.29202@galileo.cc.rochester.edu> <1994Sep12.025753.27811@news.uta.edu> <marcelsCw4tCL.HGq@netcom.com>
Date: Sat, 17 Sep 1994 00:27:39 GMT
Lines: 47

Marcel Schoppers wrote:

>And even for the modest goal, I certainly don't believe that utility theory is
>a solution, but for a different reason, as exemplified by the following sorts
>of questions:
>   1. Exactly how many robot lives equals one human bruise of what severity?
>   2. Are your friends worth more to society as friends, or as food, or as
>      saleable raw material, and what is the dollar value of each?
>   3. What is the value to you of having lunch at noon instead of 1pm today?
>Please answer the above not only with human short-sightedness, but consider the
>implications for the eternal future of all humanity.


This is an important issue, if only because people may start feeling
uncomfortable when they realize that a theory of rational decision-making
may have to rely, in some cases, on trading off human life against
material goods (say, money), and reject such theories out of hand.  What
may help alleviate this discomfort is the realization that each of us
makes these kinds of trade-offs almost every day.  Consider shopping for
a car and deciding to get one that is not equipped with anti-lock brakes;
another car, with ABS, is more expensive.  What you are, in essence,
doing when deciding not to get the latter car is probabilistically
trading off the value of the $$ (= the price difference) against the
value of human life.  You, your family, as well as total strangers, will
now be more likely to suffer.  This example can be repeated with any
safety feature and price differential, of course.
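
To make the arithmetic concrete, here is a tiny sketch of the ABS
decision as an expected-cost comparison.  All the numbers (the price
difference, the accident probabilities, the dollar figure attached to
the harm) are pure inventions for illustration, not data:

    # All numbers here are hypothetical, chosen only to illustrate the
    # structure of the trade-off, not to estimate it.

    PRICE_DIFFERENCE  = 800.0     # extra cost of the ABS-equipped car, $
    P_ACCIDENT_NO_ABS = 0.010     # assumed chance of a serious accident
    P_ACCIDENT_ABS    = 0.008     # assumed chance with ABS
    HARM_IN_DOLLARS   = 500000.0  # assumed $$ equivalent of the harm

    def expected_cost(premium, p_accident):
        """Up-front premium plus probability-weighted accident loss."""
        return premium + p_accident * HARM_IN_DOLLARS

    cost_with_abs    = expected_cost(PRICE_DIFFERENCE, P_ACCIDENT_ABS)
    cost_without_abs = expected_cost(0.0, P_ACCIDENT_NO_ABS)

    # Declining the ABS car is the "rational" choice only if
    # (P_ACCIDENT_NO_ABS - P_ACCIDENT_ABS) * HARM_IN_DOLLARS is smaller
    # than PRICE_DIFFERENCE.
    print("expected cost with ABS:   ", cost_with_abs)
    print("expected cost without ABS:", cost_without_abs)

Whoever declines the ABS car is, in effect, asserting that the reduction
in expected harm is worth less than the price difference.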

Given that these trade-offs under uncertainty are happening anyway, the
role of utility theory may be seen, roughly speaking, as an attempt to
formally codify the values, or objectives, of a decision maker.  Decision
theory then specifies the rational behavior, given that system of values.
Of course, some decision makers may lack consistency in their decisions
and, say, trade off value A for value B, then trade B for C, and finally
C for A.  Such intransitivity would mean that no consistent value system
can be constructed for them, and thus such a person could not be modeled
as a rational decision maker.
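
For what it's worth, checking a set of stated pairwise preferences for
such cycles is mechanical.  A small sketch (the preference pairs below
are made up):

    def has_preference_cycle(prefers):
        """prefers: list of (better, worse) pairs.  True if the stated
        preferences contain a cycle such as A > B > C > A."""
        graph = {}
        for better, worse in prefers:
            graph.setdefault(better, set()).add(worse)

        def reaches(node, target, seen):
            for nxt in graph.get(node, ()):
                if nxt == target:
                    return True
                if nxt not in seen and reaches(nxt, target, seen | {nxt}):
                    return True
            return False

        return any(reaches(b, b, {b}) for b in graph)

    # The intransitive trader described above:
    print(has_preference_cycle([("A", "B"), ("B", "C"), ("C", "A")]))
    # prints True; a transitive set like A>B, B>C, A>C prints False.

A person whose preferences fail this check can be "money-pumped": made
to pay for each trade around the cycle while ending up where they
started.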
                                                                            
Studies like this have been undertaken.  NASA's objectives were
specified for use in making decisions among alternative plans for space
exploration.  Another project analyzed the objectives relevant to public
policy in Louisville, KY, and the trade-offs that citizens representing
diverse segments of the community were prepared to make.  See "Decisions
with Multiple Objectives: Preferences and Value Tradeoffs" by Keeney and
Raiffa, Wiley.
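
The simplest model of the kind Keeney and Raiffa develop is an additive
multi-attribute utility function, U(x) = sum_i w_i * u_i(x_i), which
rests on independence assumptions they spell out in the book.  A minimal
sketch; the attributes, weights, and component scales below are invented
for illustration, not taken from either study:

    # Each attribute gets a weight and a component utility mapping it
    # onto a 0..1 scale.  All values are hypothetical.
    ATTRIBUTES = {
        "cost":    (0.5, lambda dollars: max(0.0, 1.0 - dollars / 1.0e9)),
        "science": (0.3, lambda results: min(1.0, results / 100.0)),
        "safety":  (0.2, lambda p_loss:  1.0 - p_loss),
    }

    def utility(outcome):
        """Additive multi-attribute utility: sum of w_i * u_i(x_i)."""
        return sum(w * u(outcome[name])
                   for name, (w, u) in ATTRIBUTES.items())

    plan_a = {"cost": 4.0e8, "science": 80, "safety": 0.02}
    plan_b = {"cost": 2.0e8, "science": 50, "safety": 0.01}
    print("plan A:", utility(plan_a))  # 0.736 under these numbers
    print("plan B:", utility(plan_b))  # 0.748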

Best,

--Piotr
