Newsgroups: comp.ai
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!gatech!news-feed-1.peachnet.edu!news.duke.edu!eff!news.kei.com!travelers.mail.cornell.edu!newstand.syr.edu!galileo.cc.rochester.edu!prodigal.psych.rochester.edu!stevens
From: stevens@prodigal.psych.rochester.edu (Greg Stevens)
Subject: Re: I kill therefore I am
Message-ID: <1994Nov7.161005.11317@galileo.cc.rochester.edu>
Sender: news@galileo.cc.rochester.edu
Nntp-Posting-Host: prodigal.psych.rochester.edu
Organization: University of Rochester - Rochester, New York
References: <39jme8$72p@beta.qmw.ac.uk> <39lg0b$k3s@newsbf01.news.aol.com>
Date: Mon, 7 Nov 94 16:10:05 GMT
Lines: 23

In <39lg0b$k3s@newsbf01.news.aol.com> rahill@aol.com (RAHill) writes:
>In article <39jme8$72p@beta.qmw.ac.uk>, spencer2@vaxb.mdx.ac.uk writes:

>>And what I wanted to know was your idea on how you would incorporate a
>>safety system so that it wasn't able to commit murder.

>The inability to commit murder implies an ability to determine whether an
>object is 'human'.  If you think about it, that's a really tough problem
>for a computer.

So here's an idea --

Let's define "murder" as the destruction of a conscious, intelligent
being.  Let's set up a robot that behaves intelligently and is also
programmed so that it cannot murder.  Now, put two of them in a room
and give them the command to destroy everything they can.
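
Just to make the setup concrete, here's a rough sketch in Python.  All
the names here are made up for illustration; the whole open question is
whether anything like is_conscious_being() can be written at all:

    def is_conscious_being(obj):
        # The hard part RAHill points out.  As a stand-in, classify
        # by behavior: anything that acts intelligently counts.
        return obj.behaves_intelligently

    def destroy_everything(robot, objects_in_room):
        # Obey the command, subject to the no-murder constraint.
        for obj in objects_in_room:
            if is_conscious_being(obj):
                continue              # refuse: this would be murder
            robot.destroy(obj)

Each robot has to run its is_conscious_being() check on the other one,
so whether they end up destroying each other depends entirely on what
that check takes consciousness to be.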

What do they do?  And is THIS a good test of machine consciousness?

Greg Stevens

stevens@prodigal.psych.rochester.edu

