From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!cs.utexas.edu!sol.acs.unt.edu!ponder.csci.unt.edu!danny Mon Dec 16 11:01:09 EST 1991
Article 2041 of comp.ai.philosophy:
From: danny@ponder.csci.unt.edu (Danny Faught)
Newsgroups: comp.ai.philosophy
Subject: First Law of Robotics
Message-ID: <1991Dec11.211355.18082@sol.acs.unt.edu>
Date: 11 Dec 91 21:13:55 GMT
Sender: usenet@sol.acs.unt.edu (Sol USENet Administrator)
Organization: University of North Texas, Denton
Lines: 25

O philosophy-oracles, in your infinite wisdom please find in your hearts
the compassion to tolerate a question from an insignificant
philosophically-illiterate computer-scientist sci-fi lover.

I'm afraid of Isaac Asimov's faith in the First Law of Robotics:
"A robot may not injure a human being, or, through inaction,
allow a human being to come to harm."  There's a line in _I, Robot_ 
that goes "You know that it is impossible for a robot to harm a human 
being; that long before enough can go wrong to alter that First Law,
a robot would be completely inoperable.  It's a mathematical impossibility."
Well, I say that this guarantee is *physically* impossible.  The hardware
always operates at a lower level than the software that enforces
the First Law.  There is no way to guarantee that the software won't
lose track of what the robot's limbs are actually doing and drive that
powerful arm right through someone.
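To make the argument concrete, here's a toy sketch (every name in it is
hypothetical, not from Asimov or anywhere else): the software safety check can
only reason about the state the hardware *reports*, so a faulty sensor below
it can make a forbidden move look safe.

```python
# Toy model: a software-level "First Law" veto sitting above faulty hardware.

HUMAN_AT = 5.0      # position of a human, arbitrary units
SAFE_MARGIN = 1.0   # the software refuses moves ending this close to the human

class Arm:
    """Simulated arm whose position sensor may carry a hardware fault."""
    def __init__(self, sensor_offset=0.0):
        self.true_position = 0.0
        self.sensor_offset = sensor_offset  # bias introduced by a bad sensor

    def reported_position(self):
        # All the software ever sees is this value.
        return self.true_position + self.sensor_offset

    def move_by(self, delta):
        # The actuator acts on physical reality, not on the report.
        self.true_position += delta

def command(arm, delta):
    """Move the arm only if the *predicted* end position looks safe.
    The prediction is built from the reported (possibly wrong) state."""
    predicted = arm.reported_position() + delta
    if abs(predicted - HUMAN_AT) >= SAFE_MARGIN:
        arm.move_by(delta)
        return True   # move allowed
    return False      # move vetoed by the First Law check

# Healthy hardware: the veto works exactly as designed.
good_arm = Arm()
print(command(good_arm, 5.0))    # False -- move toward the human is refused

# Faulty sensor (reads 2 units low): the same move now *looks* safe.
bad_arm = Arm(sensor_offset=-2.0)
print(command(bad_arm, 5.0))     # True -- check passes...
print(bad_arm.true_position)     # 5.0  -- ...and the arm is at the human
```

The point of the sketch is only that the check is perfectly correct as
software; it fails anyway because its premises come from the layer below it.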

Asimov does illustrate some frightening ways in which the First Law
might reasonably be interpreted (especially in "Reason" and "The
Evitable Conflict" in _I, Robot_).

Any comments, or am I beating a dead horse?
-- 
Danny Faught    danny@ponder.csci.unt.edu
UNIX support, Computer Science Department, University of North Texas
"Everything is deeply intertwingled." -Ted Nelson