Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!sunsite.doc.ic.ac.uk!dcs.gla.ac.uk!unix.brighton.ac.uk!mjs14
From: mjs14@unix.brighton.ac.uk (shute)
Subject: Re: Minsky's new article
Message-ID: <1994Nov9.171437.8555@unix.brighton.ac.uk>
Organization: University of Brighton, UK
References: <39f9ruINNbo1@life.ai.mit.edu> <39lf4g$9rg@coli-gate.coli.uni-sb.de> <CyyC64.M5t@world.std.com>
Date: Wed, 9 Nov 1994 17:14:37 GMT
Lines: 21

In article <CyyC64.M5t@world.std.com> btarbox@world.std.com (Brian J Tarbox) writes:
>Actually, HAL didn't screw up, he/it was led astray by bad instructions
>from its _human_ creators (as described in the 2nd and 3rd books).  HAL
>acted reasonably given the orders he was given.

I particularly liked the cross-referencing in the biographical play on
Alan Turing, "Breaking the Code".  It turned out that the 'code' wasn't
just the Enigma code, but also the social code...
and the author has Turing reflecting on the legal system at his own trial.
The legal system attempts to be an objective machine,
and hence inevitably contains the seeds of its own inconsistencies.


Going back to Penrose (for a change)...
it's only just occurred to me that Asimov predicted Penrose's gravitonic
brain with remarkable accuracy... but as with all fiction, the names had
to be changed (presumably to protect the innocent).  :-)   :-)
-- 

Malcolm SHUTE.         (The AM Mollusc:   v_@_ )        Disclaimer: all