Article 3376 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!wupost!emory!gwinnett!depsych!rc
From: rc@depsych.Gwinnett.COM (Richard Carlson)
Newsgroups: comp.ai.philosophy
Subject: Re: Strong AI and Panpsychism
Message-ID: <ye7gFB4w164w@depsych.Gwinnett.COM>
Date: 1 Feb 92 17:00:09 GMT
References: <accran.696879325@gsusgi1.gsu.edu>
Lines: 48

accran@gsusgi2.gsu.edu (Robert Nehmer) writes:

> rc@depsych.Gwinnett.COM (Richard Carlson) writes:
> 
> >No matter how many times you go over this it still looks more like
> >a species of Hume's empathy -- the non-logical alternative he
> >proposed and to which Kant was largely reacting.  The fact that
> >you are in the same existential boat as me in terms of filtering
> >information through the categories of your mind may make me see
> >you as similar, which is a logical process in terms of sets or
> >classes, but the additional notion that I should then treat you
> >differently than an entity that is less similar is a normative
> >(prescriptive, deontic, whatever) notion rather than a logical one
> >and seems to be a kind of descriptive statement of a prescriptive
> >feeling.
> 
> I told you he was stretching it. Most people, Kant included, have never
> been completely satisfied by this. Remember that it was Hume, as you
> state, who woke Kant from his "dogmatic slumber." That's when he 
> wrote the first Critique. But then someone (or maybe several people),
> complained to him that he really hadn't answered Hume. He agreed and 
> wrote the second Critique. But it's never satisfied as a logic of 
> moral necessity. All I will say personally is that I understand what
> Kant is trying to say, and if everyone "saw" experience that way, I
> would too. But it's a damned dangerous position to hold since you can
> only give it to others, you can't "force" them to give it to you. Yet
> Kant's program has had some interesting side effects nonetheless,
> nicht wahr?

Aside from the curious conclusion that the most enlightened
position is sociopathy (which, via a Prisoner's Dilemma-like
mechanism, would reduce all social interaction to zero-sum games,
culminating in a Hobbesian anomie), the relation of logic to
prescription is interesting in other ways.  The Three Laws of
Robotics (Asimov's, not Heinlein's, as it turns out) are too
different from the mechanisms governing moral behavior in humans
ever to be considered for a Turing test.  You'd know right away
that the computer wasn't human as soon as it told you it couldn't
harm you, allow you to come to harm, or let you do yourself harm.
(And we've all seen Jim Kirk outwit a couple of dozen do-good
robots that had gotten out of hand!)
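
The Prisoner's Dilemma mechanism I'm gesturing at can be sketched in
a few lines of Python (a toy illustration, using the standard payoff
values T=5, R=3, P=1, S=0; the names are mine, not from any source):

```python
# Toy one-shot Prisoner's Dilemma. Shows that defection strictly
# dominates cooperation for each player, so two rational egoists
# land on mutual defection -- the "Hobbesian" outcome above --
# even though mutual cooperation would pay each of them more.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation (reward)
    ("C", "D"): 0,  # I cooperate, they defect (sucker's payoff)
    ("D", "C"): 5,  # I defect, they cooperate (temptation)
    ("D", "D"): 1,  # mutual defection (punishment)
}

def best_response(their_move):
    """Return my payoff-maximizing move against a fixed opponent move."""
    return max("CD", key=lambda my: PAYOFF[(my, their_move)])

# Defection is the best response whatever the other player does...
assert best_response("C") == "D"
assert best_response("D") == "D"

# ...so both defect and score 1 apiece, instead of the 3 apiece
# that mutual cooperation would have paid.
print(PAYOFF[("D", "D")], PAYOFF[("C", "C")])  # prints: 1 3
```

(Strictly, the dilemma is non-zero-sum -- that's what makes the
mutual-defection outcome so perverse -- but the slide toward it is
the point.)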

--
Richard Carlson        |    rc@depsych.gwinnett.COM
Midtown Medical Center |    {rutgers,ogicse,gatech}!emory!gwinnett!depsych!rc
Atlanta, Georgia       |
(404) 881-6877         |
