From newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!rpi!ghost.dsi.unimi.it!37.1!avl0 Tue Nov 24 10:51:32 EST 1992
Article 7600 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!rpi!ghost.dsi.unimi.it!37.1!avl0
From: avl0@cine88.cineca.it
Newsgroups: comp.ai.philosophy
Subject: Re: Brain and mind (killing boring wife)
Message-ID: <1992Nov11.172601.632@cine88.cineca.it>
Date: 11 Nov 92 17:26:01 +0100
Organization: CINECA, Italian Interuniversity comp. centre
Lines: 100

In article <1992Nov4.234723.22038@Csli.Stanford.EDU>
avrom@Csli.Stanford.EDU (Avrom Faderman) writes:

>In article <1992Oct22.154730.585@cine88.cineca.it> av10@cine88.cineca.it 
>writes:
>
>| But this Justice is not just. I am a pure mechanism; why does it punish Me?
>| Punish the Physical and Biological Laws instead:
>| THEY bear the responsibility for my actions!
>
>This rests on a notion of responsibility that includes complete
>incompatibilist free will.  I ask you to reconsider it:  Imagine a society 
>that was generally evil.  All the customs, all the mores of this society 
>are complete inverses of our own--wanton torture is considered a perfectly 
>acceptable way to pass an afternoon.  I think even a hard-core anti-mechanist 
>would agree that, without exposure to other societies, it would be next to 
>impossible for any member to abandon these evil ways.  But does this mean 
>that they are not morally responsible for their actions?  The point is that 
>they _are_ evil, however they got that way.  The same applies in the case 
>of laws of the harder sciences.

Your example implies a concept of Morality as an abstract (and therefore absurd)
set of rules. Morality judges actions in relation to a Fundamental Goal.
The society you describe is a moral one if the Fundamental Goal of its people
is to have an amusing afternoon.
Now the question is: what is the Fundamental Goal for human beings, and what
is it for robots?
The Human Goal is Happiness, the individual's Ultimate Happiness. Human Morality
concerns this Goal. Any conscious action taken toward this goal is a truly moral one.
BUT... to remain moral, one must ask himself: "does this action bring me
nearer to Real Ultimate Happiness, or could it turn out to be a loss?"
Without this attitude an action is neither moral nor immoral; it is a-moral.
For to be moral, one must acknowledge that he does not know what is
really useful for his Real Individual Ultimate Happiness, since it is evident
that we constantly fail in this quest.
And perhaps there is nothing in the world that can satisfy it.
That is THE PROBLEM. And that is the VALUE of man, of ME and YOU.

Responsibility is merely a consequence of free will, whether or not the action
is moral.
IF your society has a Law (moral or not) *AND* your people have free will
THEN your people can be punished.
IF there is no free will
THEN you can punish nobody, because nobody is responsible for his actions.

Free will, and hence Responsibility, together with the existence of a
Fundamental Goal, are the "conditio sine qua non" for a Morality which
presumes that *YOU* can judge your own actions.

>| If robots and humans are equivalent, what is the value of You and Me? 
>| Can I sell or buy a robot? Yes? THEN I can sell or buy a man; Slavery 
>| is not wrong in principle.
>
>CAN you sell or buy a robot?  (I mean morally).  Clearly you can sell or 
>buy _some_ robots (all that are currently in existence), but it is by 
>no means clear to me that you could sell or buy a robot that had  
>considerable functional similarity to a human.  Trying to argue that 
>you can't would be almost equivalent to proving the thesis of Strong 
>AI;  I won't attempt it here.  However, to take some of the _absurdity_ 
>out of the idea, and (perhaps) to come to accept it on an emotional 
>level, there are several pieces of literature that are very convincing.  
>In particular, I would recommend _Do_Androids_Dream_of_Electric_Sheep_,
>by Philip K. Dick, and (a lesser-known source that illustrates the idea 
>that even far sub-human machines have _some_ moral considerations tied 
>up with them) a chapter from Terrel Miedaner's _The_Soul_of_Anna_Klane_,
>quoted in Hofstadter and Dennett's _The_Mind's_I_ as "The Soul of the
>Mark III Beast."  Both of these works are particularly good in that they 
>don't simply _assume_ artificial entities can be moral objects; they 
>let you draw your own conclusions.

Coming back to the robots... you acknowledge that human beings have some value,
and you try to derive it from their functionality. Then robots which have a
similar functionality have a similar value.
Let me be a little presumptuous: I have just built the dreamed-of android.
I am its creator; are you really saying that I cannot switch it off?!
*I* wanted it to behave like myself, *I* wanted it to have, e.g., electronic
dreams to re-organise acquired information, *I* want it to stop living. Why not?
It is my creature, and *I* am its creator and owner; no one can break this
relation.
Functionality or behaviour are really poor, baseless metaphysical
arguments against that.
The old Romans were much more coherent with a materialistic view of Man. They
granted the "Pater Familias" the right of life or death over his sons and
daughters. And they really did apply this right, mainly by killing new-born
daughters after the first-born one, because they judged them useless.
Hitler too was coherent: mad people have less functionality than other humans
(or none at all), THEREFORE they are less human than the others (or not human
at all), and he killed them. Why not? Because it is immoral?
No, of course not: you cannot have a Morality, because you do not admit free will.

So we come back to my "per absurdum" demonstration: there MUST be a difference
between Humans and Androids, otherwise our world is absurd.

+------------------------------------------------------------------------------
! avl0@cineca.it	Marco Voli - Supercomputing Group
! ph. +39-51-598411	CINECA - Interuniversity Computing Centre
! FAX +39+51-598472     via Magnanelli 6/3 - 40033 Casalecchio (BO) ITALY
+------------------------------------------------------------------------------
"To be or not to be, that is the problem" W.S.
