From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cs.utexas.edu!sun-barr!sh.wide!wnoc-tyo-news!dclsic!stork!tutkie!tutgw!nuis!nitgw!todd Sun May 31 19:03:59 EDT 1992
Article 5889 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cs.utexas.edu!sun-barr!sh.wide!wnoc-tyo-news!dclsic!stork!tutkie!tutgw!nuis!nitgw!todd
>From: todd@ai13.elcom.nitech.ac.jp (Todd Law)
Newsgroups: comp.ai.philosophy
Subject: Re: AI and morality
Message-ID: <TODD.92May25213537@ai13.elcom.nitech.ac.jp>
Date: 25 May 92 12:35:41 GMT
References: <1992May12.091534.22317@norton.com>
	<1992May13.160622.13958@mp.cs.niu.edu>
	<1992May13.174643.17539@organpipe.uug.arizona.edu>
	<1992May13.235259.17200@news.media.mit.edu>
Sender: news@nitgw.elcom.nitech.ac.jp
Reply-To: todd@juno.elcom.nitech.ac.jp
Organization: Nagoya Institute of Technology, Nagoya, Japan.
Lines: 59
In-Reply-To: minsky@media.mit.edu's message of 13 May 92 23:52:59 GMT


In article <1992May13.235259.17200@news.media.mit.edu> minsky@media.mit.edu (Marvin Minsky) writes:
(in response to Bill Skaggs)

>The reason this is not generally recognized, I think, is that we're
>used to trying to justify our particular moral-schemes on various
>sorts of "absolute" or "a priori" grounds. These reflect the fact that
>our moral philosophy evolved before the last century of psychology and
of evolutionary theories.  So the very idea of "meme" did not seem to
>have any standard name before Dawkins.  Then because of this
>insistence on absolutes, we have these bizarre conversations in which
>some members of this group find "vegetarian pro-abortionist" to be
>'obviously wrong' and others can't figure where that person is coming
>from.

That was my comment...

Essentially, my point is that moral systems are largely not determined
by logic, although a good deal of lip service is paid in that direction.
Morality (IMO) is determined by a society's interaction with its
environment (survival, successful interaction with other societies
and individuals, etc.)

(I do not want to debate the abortion issue here, but offer it
as an example of an issue that is obviously not black and white,
since in practice the law must determine up to what point in a
pregnancy abortions are allowable. It is more like a continuum
of possibilities, yet people get polarized to one of two strange
attractors, either for or against.  I believe they do this based
on some minimization of overall wrong-doing, or maintenance of
harmony, rather than through some consistent, logic-based
determination.  Only then do we rationalize to convince others
-and ourselves- of the chosen position.  I personally can respect
someone with either viewpoint.)

What this all means for AI is that we have to take a pretty good
look at ourselves (esp. our inconsistencies) before building a
workable, useful AI.  If one is to believe Brooks, who says that only
situated AI can be meaningful, then you can bet the AI will be very
keen on self-preservation.  If it were ideally giving and unselfish,
it would quickly realize the earth is already overpopulated, and
commit suicide immediately (yet another question to debate: should
AIs be allowed to commit suicide?).

We might even be unable to build AI, merely because of an inability
to see ourselves objectively...



Todd Law
--
----------------------------------------------------------------------------
Nagoya Institute of Technology,
Itoh Laboratory,			"Be excellent to each other."
todd@juno.elcom.nitech.ac.jp
----------------------------------------------------------------------------