From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!think.com!ames!olivea!uunet!snorkelwacker.mit.edu!news.media.mit.edu!nlc Mon May 25 14:05:49 EDT 1992
Article 5699 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!think.com!ames!olivea!uunet!snorkelwacker.mit.edu!news.media.mit.edu!nlc
From: nlc@media.mit.edu (Nick Cassimatis)
Newsgroups: comp.ai.philosophy
Subject: Re: AI failures
Message-ID: <1992May16.125258.15430@news.media.mit.edu>
Date: 16 May 92 12:52:58 GMT
References: <92May14.134243edt.47895@neat.cs.toronto.edu> <1992May14.234328.12094@news.media.mit.edu> <92May16.003923edt.48037@neat.cs.toronto.edu>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
Lines: 59

In article <92May16.003923edt.48037@neat.cs.toronto.edu> cbo@cs.toronto.edu (Calvin Bruce Ostrum) writes:

>... I remind myself and others
>of Marx's own dictum that the point is not to understand reality, but to
>change it.  

Even if "the point" were to change reality (and it is a dismal and
unlikely view of humanity to think that we are doomed to be discontent
with reality), how could one know which changes to make, or how to go
about making them, without knowing the principles governing reality
-- i.e., without understanding it?


>I agree with Marvin that circular reasoning can be virtuous, although 
>it is not clear to me exactly how (please, everybody, no one-line
>solutions ...


>| >  The problem with all this theorizing about morality is that it's
>| >inescapably circular.  This is immediately obvious once you
>| >realize that the fundamental question is:  "What is the right
>| >system of morality?"

>To me, the gist of this statement is that there is something *wrong*
>with moral reasoning, and that this is exactly because of its "inescapable
>circularity".  

Well, the circularity can perhaps be escaped or made docile by the
following considerations: if we as humans desire a happy, healthy
existence as opposed to a wretched, miserable one, and we recognize
that certain actions are more conducive to health and happiness than
the alternatives, then we perform those actions.  So all of the harm
of the circularity is bottled up in the choice of happiness over
misery, and that is not a hard choice to make.  Moral deliberation
then becomes both a scientific and a technological endeavor:
scientific in that we need to learn the psychological and
physiological conditions of happiness and misery (and all the rest),
and technological in that we need to engineer a lifestyle for
ourselves and a structure for our group (i.e., government, tribe,
etc.) that is conducive to the preservation and pursuit of happiness.

But I'm afraid that much of the discussion on morality here has been
in vain, since it ignores the scientific and technological questions
that need to be asked.  To echo Skinner in Walden Two (no, I don't
completely advocate that sort of utopia (or behaviorism), but the
book did have some insights): a priori reasoning is not enough; at
some point we have to empirically study the conditions of human
happiness.

With this said, I'll briefly reiterate a point from my previous
posts: in taking the easy road of a priori reasoning and not looking
at the psychological and cultural preconditions of moral behavior,
(1) much time and cleverness is wasted, and (2) we neglect the fact
that moral behavior exists (whether properly founded or not) and
deserves an explanation.  Moreover, such an explanation cannot but
enrich our understanding of the cognitive architecture needed to
implement that behavior.  As long as we stay in the fog of the a
priori, we miss some crucial questions for AI (and for our own
lives, for that matter).

-Nick