Newsgroups: comp.robotics.research
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!news.mathworks.com!uunet!in2.uu.net!newsflash.concordia.ca!sunqbc.risq.net!hobbit.ireq.hydro.qc.ca!NetNews.IREQ.Hydro.QC.CA!mboyer
From: sstones <sstones@io.org>
Subject: Re: The Three Laws of Robotics...
X-Nntp-Posting-Host: pellan.ireq-robot.hydro.qc.ca
Message-ID: <MBOYER.95Aug9174743@pellan.ireq-robot.hydro.qc.ca>
Lines: 65
Sender: news@ireq.hydro.qc.ca (Netnews Admin)
Organization: <empty>
Date: Wed, 9 Aug 1995 21:47:43 GMT
Approved: mboyer@ireq-robot.hydro.qc.ca, crr@ireq-robot.hydro.qc.ca

In article <1995Aug3.011304.100073@kuhub.cc.ukans.edu>,
Duller  <102120.426@CompuServe.COM> wrote:

>Hopefully someone has tryed to use [Isaac Asimov's three laws of
>robotics] to make an intelligent robot. I say they should be used,
>not just to make safe robots, but it would be alot easier to program
>a robot if you concentrated mainly on those three factors. So I'm
>looking for any news, magazine, or other articles concerning the use
>of Isaac's three laws of robots. If you know any articles, please let
>me know if they can be found anywhere.

[ Moderator's note:
  There have been a number of replies to the above post dealing
  mainly with the "intelligence" of robots, or with Asimov's
  view of the laws.  While this is an interesting subject, comp.ai and
  the science-fiction groups are better suited to such discussions.
  I'd like to keep this one on track...  -MB ]


Isaac Asimov was using the word "robots" to describe machines that
would today be considered "androids".  R. Daneel is a cool idea, but is
not possible with today's technology (R. Daneel, for those unfortunate
enough to be unfamiliar with Asimov's works, is akin to Lieutenant
Commander Data of Star Trek: The Next Generation).

Programming a robot (a robot, not an android) is like programming a
computer.  If you don't program it to kill people, it won't "try to"...
It's not an artificial intelligence.

Just like your computer runs the programs you tell it to, the robot
moves exactly as it's programmed.  If someone gets in the way, the
robot doesn't even know that it's killing them.  I got hit by a PUMA
(pretty big, and stronger than it looks) once, because I wasn't paying
attention to which part of its routine it was running and tried to
take some items off its bench (it'll never happen again).  The PUMA
wasn't responsible; it was performing its function perfectly.  The
programmer wasn't responsible either; the setup was quite safe until I
stepped in.  The robot had no more chance of "not harming a human"
than a car driven by a drunk.

There are treadles and similar devices that you can use to detect
someone walking up to the robot (you know, like the sensors wired to
the door openers at the supermarket), and the robot can be programmed
to stop if someone gets too close.  But again, it's all just following
the program... not a conscious decision by the robot.
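That kind of interlock is just one more conditional in the motion
program.  A minimal sketch (the sensor and motion functions here are
made-up stand-ins, not any real controller's API):

```python
# Hypothetical safety interlock: before each move, the motion loop polls a
# proximity switch (like a supermarket door treadle) and holds position
# while it reports someone nearby.  All names are illustrative.

def run_program(waypoints, treadle_pressed, move_to):
    """Step through a fixed motion program, pausing while the treadle
    reports someone too close.  The robot isn't 'deciding' anything;
    it's simply executing one more conditional in its program."""
    for point in waypoints:
        while treadle_pressed():   # somebody too close: hold position
            pass                   # (a real controller would brake or idle)
        move_to(point)             # clear: execute the next move

# Example with stubbed-in hardware:
visits = []
sensor_hits = iter([True, False, False, False])  # person steps off after one poll
run_program(
    waypoints=[(0, 0), (10, 5), (20, 0)],
    treadle_pressed=lambda: next(sensor_hits, False),
    move_to=visits.append,
)
```

The point of the sketch: remove the `while` loop and the robot runs the
identical waypoints straight through a person, because nothing else in
the program ever looks at the sensor.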

On the other hand, a robot could just as easily have been programmed
to sit, apparently dormant, until someone steps on the treadle.
Other than lawsuits, I don't know what this would get the programmer,
but it's just as easy to program as the safety features.  Who knows...
Perhaps it's the new level of home security...  Get beaten to a pulp
by a robot if you enter without a key.

Oh, and keep reading the Asimov... Great stuff.

Cheers

<sstones@io.org>
SStones             Toronto,  Ontario.


--
 *********************** (moderated) ***************************
     Submissions:                Meta-discussions/information:
 crr@ireq-robot.hydro.qc.ca   crr-request@ireq-robot.hydro.qc.ca
