From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!ub!csn!magnus.acs.ohio-state.edu!usenet.ins.cwru.edu!news.ysu.edu!malgudi.oar.net!caen!sdd.hp.com!think.com!mintaka.lcs.mit.edu!mintaka!rjbodkin Mon May 25 14:05:36 EDT 1992
Article 5675 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!ub!csn!magnus.acs.ohio-state.edu!usenet.ins.cwru.edu!news.ysu.edu!malgudi.oar.net!caen!sdd.hp.com!think.com!mintaka.lcs.mit.edu!mintaka!rjbodkin
From: rjbodkin@theory.lcs.mit.edu (Ronald Bodkin)
Newsgroups: comp.ai.philosophy
Subject: Re: AI failures
Message-ID: <RJBODKIN.92May15034757@lister.lcs.mit.edu>
Date: 15 May 92 08:47:57 GMT
References: <1992May11.160456.15469@math.okstate.edu>
	<1992May11.183017.14806@psych.toronto.edu>
	<1992May11.210524.30977@mp.cs.niu.edu>
	<1992May12.002440.5501@psych.toronto.edu>
	<unaphINNpv8@early-bird.think.com>
Sender: news@mintaka.lcs.mit.edu
Organization: MIT Lab for Computer Science
Lines: 13
In-Reply-To: moravec@Think.COM's message of 12 May 92 02:33:21 GMT

In article <unaphINNpv8@early-bird.think.com> moravec@Think.COM (Hans Moravec) writes:
   ... the usual social ethics ... are a pragmatic system,
   not a higher truth ...

   Bringing things closer to home, if the two of us were trapped foodless
   in an Andean winter, maybe we would have to draw straws for who eats whom.

	And why would you draw straws, instead of just killing him
while he cuts the straws?  It appears that you are, in fact,
obeying some kind of ethics, since risking an increased chance of death
isn't exactly "pragmatic."

		Ron
