From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Wed Feb  5 11:56:06 EST 1992
Article 3386 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb1.235203.28395@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Jan29.214150.1709@bronze.ucs.indiana.edu> <1992Jan30.204029.27574@psych.toronto.edu> <1992Feb1.215002.7208@bronze.ucs.indiana.edu>
Date: Sat, 1 Feb 1992 23:52:03 GMT

In article <1992Feb1.215002.7208@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <1992Jan30.204029.27574@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>
>>>Cognitive science is all about explaining human action, and it
>>>seems to me that that can be done without invoking phenomenal
>>>consciousness.
>>
>>This seems to be a type of behaviourist version of cognitive science.
>>I don't mean to invoke "behaviourism" as a dirty word, but my
>>view of cognitive science has always been that it was the study of the
>>*mind*, and not of behaviour.  I would include phenomenal consciousness
>>as an aspect of the mind.  If this isn't the case, why do AI types *care*
>>whether the computer "actually" understands, as long as it *acts* like
>>it does.  Surely understanding is in part phenomenal.   
>
>Not behaviourist at all.  I think it's fairly uncontroversial that
>the main goal of psychology (or cognitive science) is to explain behaviour.
>One has to distinguish between the goal of the explanatory process
>and the entities appealed to in explanation.  Modern cognitive scientists
>are distinguished from behaviourists by their willingness to appeal to
>internal entities in making their explanations; however, behaviour
>remains the ultimate goal of the explanation.
>
>Whether or not cognitive science *ought* to be concerned with the
>explanation of phenomenal states, as well as that of behaviour, when
>I look around I don't see many cognitive scientists doing this.

But *surely* the reason that Strong-AI types are up in arms about the
Chinese Room is that they don't *just* want to explain *behaviour*, but
also *phenomenal states*.  Otherwise, why should one *care* whether the
Chinese Room "actually" understands, as long as it *acts* like it does,
*just as* we act like we do?  Surely Dennett, in his latest book
"Consciousness Explained", sees his role, *as a cognitive scientist*, as
explaining the subjective nature of experience.  Even eliminativists
offer some account of why we are wrong to *believe* that there are such
things as phenomenal states (at least as popularly conceived).

I'm afraid I simply don't agree with you on this one.

>>Here I would violently disagree, at least if I understand you.  All of
>>utilitarianism is based on the importance of phenomenal experience.  
>>All of ethics proper is based on the notion of free will and choice
>>(else there would be no ethical *decisions*, and so no ethics).  If
>>materialism *is* true, and all of behaviour is completely predictable
>>from material interaction, then no such decision process can exist.
>>This is, of course, old stuff.
>
>Well, there's a lot I could say here.  For a start, despite my views on
>consciousness, I still think that behaviour is determined by physical
>processes, and I don't see why that's incompatible with free will in
>any sense in which it's clear that we possess it (e.g. the ability to
>do what we want, other things being equal).  More deeply, I think
>that the mental properties that are relevant to ethics are much more
>likely to be *psychological* properties, such as beliefs, desires,
>hopes, fears, etc, rather than phenomenal properties.  And our
>psychological states (i.e. states characterized by their role in the
>causation/explanation of action) seem to be entirely explainable within
>a materialist/functionalist framework, in principle.

Hmm...I honestly don't think I buy *any* of the above.  First of all, I
am not *at all* convinced that materialism is compatible with free will
(I know, I know, there's tons of folks who have argued otherwise, and to
be honest, I am not that familiar with the area, so I'll just leave this
point as is).  More deeply, I think the reason that we *care* about
the psychological states from an ethical point of view is *because* of
their phenomenal components.  Why should it *matter* if someone's hopes
are thwarted, or if someone is afraid, if not for the fact that we
don't like these experiences _qua_ experiences?  Why should we be worried
about the satisfaction of desires *beyond* the fact that to do so is, well,
"satisfying"?  Indeed, it seems to me that if one had an entity which
exhibited such "psychological" states (states characterized by their
role in the causation/explanation of action), but did *not* possess
*phenomenal* states as well, then such an entity would *not* be an
object of moral concern.  Morality does not arise from merely "taking
an intentional stance" as per Dennett - there are lots of entities
whose actions we can explain by appeal to psychological states without
ascribing any ethical worth to them.


In addition, even if the above were *not* true, it is *still* the case
that the vast majority of utilitarian systems are built on the importance
of *phenomenal* states.  I don't care if you *act* like I caused you pain -
I want to know if I *did* cause you pain, if you *experienced* pain.  This
distinction makes a *moral* difference to utilitarians.  Indeed, if the
whole world population were made numb, unable to experience pleasure and pain,
then for utilitarianism there would be no moral distinctions available, and
any action would be morally permissible.  (Note that here "pleasure" and
"pain" also include the cognitive versions of these terms, such as "I
enjoy a Mahler symphony", or "It hurts me to see a job poorly done".)


- michael
