From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!sdd.hp.com!usc!snorkelwacker.mit.edu!news.media.mit.edu!nlc Mon May 25 14:05:33 EDT 1992
Article 5670 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!sdd.hp.com!usc!snorkelwacker.mit.edu!news.media.mit.edu!nlc
From: nlc@media.mit.edu (Nick Cassimatis)
Subject: Re: Taxonomy
Message-ID: <1992May15.030331.15684@news.media.mit.edu>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
References: <1992May13.174643.17539@organpipe.uug.arizona.edu> <1992May14.053930.22599@news.media.mit.edu> <l164npINNlmg@exodus.Eng.Sun.COM>
Date: Fri, 15 May 1992 03:03:31 GMT
Lines: 53

In article <l164npINNlmg@exodus.Eng.Sun.COM> silber@orfeo.Eng.Sun.COM (Eric Silber) writes:
>In article <1992May14.053930.22599@news.media.mit.edu> nlc@media.mit.edu (Nick Cassimatis) writes:
>>...
>>What really belongs on something
>>like on comp.ai.phil are discussions concerning the sub-personal,
>>personal and social structures and mechanisms behind the the
>>phenomenon of morality.  The other stuff is better saved for something
>>like comp.ai.politics.
>
> Wrong taxonomy! "the sub-personal,
> personal and social structures and mechanisms" etc might be properly
> discussed in comp.ai re: implementation details etc., it is however
> very appropriate to comp.ai.philosophy to discuss the ethics of
> cognitive agents!!!! ( "there's more to life than news, weather, and
> the Goedelsatz" )

Strictly speaking, discussions of the "architecture of morality and
moral agents" (I can't think of a better term right now) would not
belong on comp.ai either, because there aren't enough (solid!) ideas on
morality to talk about "implementation details."  The problem is to
find the structures, functions, and processes of morality and to see
what kinds of ideas they can give us about the intelligence needed to
embody them (whether it be biological or artificial).  Given that the
stage of such inquiry is pre-paradigmatic (as Kuhn would label it), I
think it would be as close to philosophy as anything: hence,
comp.ai.phil.  (Even if the discussion I suggest does belong elsewhere,
I think that my last post belonged on comp.ai.phil, as it was a
*methodological* point.)

But the taxonomical issue is moot.  The point of my last post was that
by discussing the ethics of "cognitive agents" with the looseness that
has been characteristic of much (though not all) of the discussion
here, we are missing so many crucial questions that need to be posed,
questions whose answers would engender progress in AI.

Wherever the discussions I've complained about take place, they are a
relative waste of energy (if our main concern is to actually achieve
AI).  THERE ARE SO MANY QUESTIONS THAT ARE BEING IGNORED with the
assumption (implicit or explicit) of absolute morality.  When you talk
about giving ethics to a robot without understanding ethics, you are
committing two sins: (1) wasting time on you-know-not-what, and (2)
sweeping under the rug the issues that are so important.

If all of the cleverness that is being expended on arguing for a
floppy disk's right to an abortion was spent in the endeavor to
understand the social, psychological, and structural preconditions for
moral behavior and sentiment, we would actually be making progress
towards AI, instead of tying verbal knots.

So if these questions belong elsewhere, then by all means, let's answer
them there -- BUT WE HAVE TO ASK THEM FIRST!

-Nick


