From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Fri Jan 31 10:26:55 EST 1992
Article 3258 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Jan29.170943.4706@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Jan29.031823.6624@oracorp.com>
Date: Wed, 29 Jan 1992 17:09:43 GMT

In article <1992Jan29.031823.6624@oracorp.com> daryl@oracorp.com writes:

>Michael Gemar writes:
>
>> However, I think that you point out one *very* good reason above to
>> believe that consciousness is *not* merely descriptive, and to worry
>> about the moral consequences if it is.  If there is no fact of the matter
>> whether something is conscious, then morality (or at least most
>> versions of it) goes out the window.  Why should I treat *you* as
>> conscious, if that is merely a "descriptive" term? And therefore, why
>> should I treat you as any more worthy of ethical consideration than a
>> rock, or a roomful of air, or a computer?
>
>Beats me. Why should you treat someone you are in love with any
>differently from someone you hate? Why should your reaction to a
>beautiful piece of music be any different from your reaction to sheer
>noise? I believe these are comparable questions. Your moral,
>emotional, or ethical response to another person or object does not
>depend solely on his, her or its causal properties. Why should it?

Ack!  I may be old-fashioned, but I refuse to believe that moral judgement
is on a par with musical taste.  If you *honestly* believe this, then
is there any restriction, apart from the problem of punishment, on
your killing a person?  Is Jeffrey Dahmer just someone with (pardon the
pun) bad taste?  *I* certainly don't think so...

As I point out in another posting, most systems of moral philosophy are
based on the distinction between entities that have aspects of consciousness
and those that do not.  If you choose to abandon this as a moral
distinction, you do so at your peril.

BTW, I don't think that this is the only option open to AI.  I would
be happy to hear from those who feel that it *would* be in some sense
*wrong* to destroy an "intelligent" computer.

>> I don't know how to have a nice conversation with a Chinese
>> person, but I think that they are conscious.  I don't know how to
>> converse with Martians, but if there were any (in the classical sense
>> of course) I would think that they are conscious. Heck, I don't know
>> how to talk with dolphins, but *they* might be conscious.
>
>My using the phrase "have a conversation" was perhaps too sloppy. I
>meant "communicate" in a broad sense. I know definitely that I can
>communicate with Chinese people and with dolphins, and I suspect I
>could communicate with Martians, if there were any.

Well, if you can communicate with dolphins in any meaningful, intelligent
way (as you would a person and not, say, a dog) then you are well beyond
what researchers have currently been able to do.  And if you believe you 
can communicate with Martians, then I think you are incredibly optimistic.
If the assumption of nativism with regard to concepts is correct (and
this is a line pushed by computationalists such as Fodor and Chomsky), then
it may very well be that we simply have different concepts available
to us than do Martians.  We may not be able to communicate meaningfully at
all.  But of course, if this were true it would not mean that Martians did
not have a mental life, but just that it was inaccessible.

>> I guess the difference between us is that you seem to believe that
>> ascriptions of consciousness are merely a matter of taste, not right or
>> wrong, whereas I believe that there is a *fact of the matter* whether
>> or not something is conscious - heck, I know that *I* am conscious,
>> and that *that* is a *fact*, and not merely an ascription.
>
>How do you feel about statements like "X is beautiful", or "X is
>immoral", or "X is unjust"? I don't think it does justice to the
>concepts of beauty, morality, and justice to say that they are
>"matters of taste", but I still don't believe that they are causal
>properties. As the old line goes, "The rain falls on the just and the
>unjust" (or something like that). Nature doesn't care about justice,
>*we* do, and we don't all agree on what justice is. That doesn't make
>it a meaningless concept.

We can debate about the causal efficacy of consciousness all we want, but
*I* know that me being conscious is *not* a matter of taste, it is a *fact*.
If you don't believe that there is a fact of the matter in this instance,
then I am not sure we have common ground to debate.

>Similarly, I am not saying that attributions of consciousness are
>unimportant or meaningless, but that they are not objective.

Again, I am not talking about *epistemics*, but *ontology*.  I don't care
how we decide externally if something is conscious, but what the *fact of
the matter* is (since we might very well be wrong about our external
decision).  If one provided a causal mechanism for consciousness, as
I understand Functionalism to do, then you *do* have objective criteria
for consciousness, even if you can never actually *experience* what 
any other individual's conscious experience is like.  In this sense,
consciousness is not subjective, although the experience is.  It should
be noted that even Searle believes this - if you find entities with the
appropriate physical causal mechanisms, then they are conscious, even though
you can't subjectively experience what they do.

- michael



