Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sdd.hp.com!saimiri.primate.wisc.edu!ames!agate!spool.mu.edu!uwm.edu!linac!unixhub!stanford.edu!Csli!avrom
From: avrom@Csli.Stanford.EDU (Avrom Faderman)
Subject: Re:  Brain and mind (killing boring wife)
Message-ID: <1992Nov4.234723.22038@Csli.Stanford.EDU>
Organization: Stanford University CSLI
References: <1992Nov4.234237.21662@Csli.Stanford.EDU>
Date: Wed, 4 Nov 1992 23:47:23 GMT
Lines: 45

In article <1992Oct22.154730.585@cine88.cineca.it> av10@cine88.cineca.it 
writes:

| But this Justice is not just.  I'm a pure mechanism, so why does it punish Me?
| Let it punish the Physical and Biological Laws instead--
| THEY bear the responsibility for my actions!

This rests on a notion of responsibility that includes complete
incompatibilist free will.  I ask you to reconsider it:  Imagine a society 
that was generally evil.  All the customs, all the mores of this society 
are complete inverses of our own--wanton torture is considered a perfectly 
acceptable way to pass an afternoon.  I think even a hard-core anti-mechanist 
would agree that, without exposure to other societies, it would be next to 
impossible for any member to abandon these evil ways.  But does this mean 
that they are not morally responsible for their actions?  The point is that 
they _are_ evil, however they got that way.  The same applies in the case 
of laws of the harder sciences.

| If robots and humans are equivalent which is the value of You and Me. 
| Can I sell or buy a robot? Yes? THEN I can sell or buy a man, Slavery 
| is not wrong in principle.

CAN you sell or buy a robot?  (I mean morally).  Clearly you can sell or 
buy _some_ robots (all that are currently in existence), but it is by 
no means clear to me that you could sell or buy a robot that had  
considerable functional similarity to a human.  Trying to argue that 
you can't would be almost equivalent to proving the thesis of Strong 
AI;  I won't attempt it here.  However, to take some of the _absurdity_ 
out of the idea, and (perhaps) to help you accept it on an emotional 
level, there are several pieces of literature that are very convincing.  
In particular, I would recommend _Do_Androids_Dream_of_Electric_Sheep_,
by Philip K. Dick, and (a lesser-known source that illustrates the idea 
that even far sub-human machines have _some_ moral considerations tied 
up with them) a chapter from Terrel Miedaner's _The_Soul_of_Anna_Klane_,
quoted in Hofstadter and Dennett's _The_Mind's_I_ as "The Soul of the
Mark III Beast."  Both of these works are particularly good in that they 
don't simply _assume_ artificial entities can be moral objects; they 
let you draw your own conclusions.


-- 
Avrom I. Faderman                  |  "...a sufferer is not one who hands
avrom@csli.stanford.edu            |    you his suffering, that you may 
Stanford University                |    touch it, weigh it, bite it like a
CSLI and Dept. of Philosophy       |    coin..."  -Stanislaw Lem
