From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Fri Jan 31 10:27:25 EST 1992
Article 3307 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Jan30.202546.25545@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Jan28.165322.25735@colorado.edu> <1992Jan29.162643.29519@psych.toronto.edu> <1992Jan29.194537.3658@colorado.edu>
Date: Thu, 30 Jan 1992 20:25:46 GMT

This is beginning to get more into ethics than cognitive science, but
I'm not sure where to redirect the follow-up, and besides, it's still
tangentially related...

In article <1992Jan29.194537.3658@colorado.edu> tesar@tigger.Colorado.EDU (Bruce Tesar) writes:
>In article <1992Jan29.162643.29519@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:

[on why whether or not something is conscious is ethically important]

>>Well, to give the argument from authority, consciousness (or some variant
>>on it, such as the ability to have plans, goals, desires) has been seen as
>>the main feature of entities worthy of moral concern since probably the
>>beginning of moral philosophy.  To give the argument from common sense,
>>if you *do* believe in morality, what other distinctions would you draw?
>>(Of course, if you don't believe in the worth of ethics, then this exchange
>>is meaningless.)
>>
>
>    I'm not sure how useful a statement like the above is, unless the
>various moral philosophies come equipped with useful definitions of
>consciousness. In your case, you might want to figure out which is the
>cart and which is the horse. Does it make sense to base your entire
>moral system on consciousness, and THEN decide what consciousness is?

Well, until the advent of functionalism, we never really had to worry
about that too much, since the *only* things believed to be *potentially*
conscious were biological entities.  So it's not too surprising that
moral philosophers haven't worried about the question.

>    As for alternatives, what is wrong with simply declaring that
>HUMAN BEINGS are the agents worthy of moral concern? The category is
>well-recognized in all cultures, and even has a sound scientific meaning.
>A computer, no matter how intelligent and/or conscious it becomes, is
>still not a human being.

I pity any alien that meets up with you.  Or any sentient dolphin.  Or
any genetically-engineered-to-be-brilliant monkey.  All of these things
we would presumably want to say are worthy of ethical consideration, but
they aren't human.  

To be honest, despite my doubts on the matter, if I actually *believed*
that a computer was conscious, I would think that it was worthy of
ethical consideration as well.  Heck, deleting SHRDLU from your hard disk
may very well be murder! :-)

>    Are the comatose and the severely retarded less worthy of moral
>consideration than fully functional humans?

A sticky question, and one which probes the outer limits of this position.
Of course, many philosophers have answered "yes", and I think it could
be argued that so has society in general, judging from the way such
people are treated.

- michael
