Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!swrinde!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Minsky's new article
Message-ID: <CzFqon.94L@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <39f9ruINNbo1@life.ai.mit.edu> <39lf4g$9rg@coli-gate.coli.uni-sb.de> <CyyC64.M5t@world.std.com>
Date: Thu, 17 Nov 1994 23:31:35 GMT
Lines: 17

In article <CyyC64.M5t@world.std.com> btarbox@world.std.com (Brian J Tarbox) writes:
>|> gyro@netcom.com (Scott L. Burson):
>|> >I think that Clarke and Kubrick in _2001_ tapped into a very
>|> >profound truth: if a machine is placed in charge of anything,
>|> >it will screw up.
>|> 
>|> Not a profound truth, but a ludicrously ignorant prejudice.
>
>Actually, HAL didn't screw up, he/it was led astray by bad instructions
>from its _human_ creators (as described in the 2nd and 3rd books). 

Ok.

>HAL acted reasonably given the orders he was given.

He did?  It was reasonable to kill people?

