From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!michael Wed Feb  5 11:55:51 EST 1992
Article 3362 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb1.200516.12634@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Jan29.194537.3658@colorado.edu> <1992Feb01.025045.455@norton.com>
Date: Sat, 1 Feb 1992 20:05:16 GMT

In article <1992Feb01.025045.455@norton.com> brian@norton.com (Brian Yoder) writes:
>tesar@tigger.Colorado.EDU (Bruce Tesar) writes:
> 
>>     I'm not sure how useful a statement like the above is, unless the
>> various moral philosophies come equipped with useful definitions of
>> consciousness. In your case, you might want to figure out which is the
>> cart and which is the horse. Does it make sense to base your entire
>> moral system on consciousness, and THEN decide what consciousness is?
>
>It certainly is a case of getting the cart before the horse.  Before you can
>draw any conclusions about morality you need at least to have a theory of 
>epistemology, otherwise how could you even know whether there are any facts 
>which can be known?  Or how to derive them?  Or whether you can be certain 
>they are true?  Or what certainty and truth are?  Likewise, you can't develop
>a theory of epistemology without first determining some things about reality.  
>Metaphysical questions like "Does anything exist at all?" or "Is reality just
>an illusion generated by my mind?" or "Can contradictions exist?" are the basis 
>for any epistemological theory.  Jumping in at the middle allows unstated 
>assumptions and unproven premises to invade your system of thought and 
>cast you into hopeless contradictions.

But prior to Functionalism, ethics had all the epistemology it needed with
reference to consciousness.  We knew (or thought we did) that only organic
life could be conscious.  Thus, non-organic things were not possible agents
of moral concern.

As I see it, the quandary for Functionalists (at least those who care about
ethics) is: what does the fact that non-organic things can have consciousness
(again, according to Functionalism) *do* to ethics?  As far as epistemology
goes, Functionalism provides one with respect to consciousness (gee, isn't
the Turing Test the way we know if something is conscious? 1/2 :-).  However,
even if we don't have a complete epistemology, Functionalism still has 
ontological implications which can be debated even if we could never *in
principle* know the ontological status of any specific entity.

>>     As for alternatives, what is wrong with simply declaring that
>> HUMAN BEINGS are the agents worthy of moral concern? The category is
>> well-recognized in all cultures, and even has a sound scientific meaning.
>> A computer, no matter how intelligent and/or conscious it becomes, is
>> still not a human being.
>
>To answer that question it is necessary to determine what the purpose for morality
>is.  Without the answer to that question, the question "Is this moral position
>a correct application of morality?" cannot be answered.
>
>>     Are the comatose and the severely retarded less worthy of moral
>> consideration than fully functional humans?
>
>No, they are not,

Well, you are far more certain than many philosophers, and you take a stance
somewhat different from the one society holds, as inferred from our general
treatment of such individuals.


>           but without understanding the roots of the ideas here neither
>the question nor the answer make any sense.  The immediate question of "Why should
>one be moral?" has to be answered to allow the answer to that question.  Some
>would say that "The purpose of morality is to make God happy." If that were the case
>all one would have to do to determine if something was immoral would be to consult
>scriptures or pray.  Others would say that "The purpose of morality is the minimization
>of pain in society."

This is not a *purpose* of morality, but rather a moral *position*,
namely Utilitarianism.


> If that were true you would have to measure the pain the
>comatose or retarded person is in and act accordingly.  Still others would
>say "Morality is just an arbitrary social choice." This kind of person would
>just have to go take a poll to determine what is moral.  My answer to the 
>question would be "Morality is a set of principles to guide me in living my life."
>So to answer the question I would have to determine the effects of ignoring the
>moral status of such people.  Can you see how the answer to this more 
>fundamental question is necessary to draw the higher-level conclusion?

Well, if by effects you mean "societal outcomes", then this is not necessarily
the case, as non-consequentialist ethical systems do not concern themselves
with outcomes, but (loosely speaking) with intent.   So, your statement
above would only apply to certain ethical systems, and not to ethics as a
whole.

- michael
