From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!elroy.jpl.nasa.gov!usc!wupost!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Tue Feb 11 15:25:29 EST 1992
Article 3561 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!elroy.jpl.nasa.gov!usc!wupost!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Newsgroups: comp.ai.philosophy
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb6.222128.18717@bronze.ucs.indiana.edu>
Date: 6 Feb 92 22:21:28 GMT
References: <1992Feb5.183955.13789@psych.toronto.edu> <1992Feb6.051835.21146@bronze.ucs.indiana.edu> <1992Feb6.185713.11504@psych.toronto.edu>
Organization: Indiana University
Lines: 88

In article <1992Feb6.185713.11504@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:

>Well, I'm still a bit confused.  Are you an instrumentalist in
>*practice*, but not in *theory*?  If so, what reason do you give for
>saying that a humungous lookup table, which produces the right
>behaviour, *doesn't* have beliefs.  If the answer is something like
>"it doesn't have the appropriate functional relations", then do you 
>have a working definition of what these functions are that isn't
>simply motivated by ruling out lookup tables?          

I'm a functionalist about belief, but I think that instrumentalism
about belief attribution works for most cases that aren't implausible,
carefully cooked-up counterexamples.  Of course I don't have a full
account of the required functional organization, but it requires
that the system possess internal states that not only lead to
the right behaviour, but also interact with each other in appropriate
ways; e.g. a desire that P, and a belief that if Q then P, and a
belief that Q is easily attainable and doesn't have other bad
side-effects, should cause a desire that Q, other things being equal.
This is nothing profound -- it's all in the early functionalist
literature (see e.g. the works of Lewis such as "Psychophysical
and theoretical identification", Australasian Journal of Philosophy
50:249-58, 1972), and certainly needn't involve an appeal to
phenomenology.
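As a rough, purely hypothetical sketch (nothing like it appears in the
original post), the means-end constraint just described might be rendered
as a toy inference rule:

```python
# Toy sketch of the functional constraint described above: a desire
# that P, plus beliefs that Q leads to P, that Q is easily attainable,
# and that Q has no bad side-effects, should generate a desire that Q.
# All names here are hypothetical illustrations, not a real system.

def derive_desires(desires, beliefs):
    """Close the desire set under the means-end rule, other things equal."""
    derived = set(desires)
    changed = True
    while changed:
        changed = False
        for belief in beliefs:
            if belief[0] != "implies":
                continue
            _, q, p = belief
            if (p in derived
                    and ("attainable", q) in beliefs
                    and ("harmless", q) in beliefs
                    and q not in derived):
                derived.add(q)       # desire the means to a desired end
                changed = True
    return derived

beliefs = {
    ("implies", "turn_key", "car_starts"),  # belief that if Q then P
    ("attainable", "turn_key"),             # Q is easily attainable
    ("harmless", "turn_key"),               # no bad side-effects
}
desires = derive_desires({"car_starts"}, beliefs)
print(sorted(desires))  # ['car_starts', 'turn_key']
```

The point of the loop is only that appropriately interrelated internal
states, not right behaviour alone, are doing the work.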

>Well, I think we have different intuitions, and the difference rests,
>I believe, on my indecision about the causal efficacy of qualia.  
>You clearly are an epiphenomenalist with regards to qualia,

An epiphenomenalist in some senses, though it depends on how you
construe causation.  I certainly believe that there are nomic,
counterfactual-supporting correlations between qualia and the
physical events that one might think they cause, so if that's all
that one requires for causation, then I'm not an epiphenomenalist.
In other senses of causation, I may be (though I wouldn't say
that qualia are "caused" by the brain, but that they're dependent
on it in some non-causal sense, e.g. nomic supervenience).
"Property dualism" might be a better classification of my view.

>Heck, isn't the fact that we *talk* about qualia evidence for their
>causal efficacy?  (I don't mean this as jokingly as it might first
>appear...).

That's a very profound question, and is certainly the central
question that a property dualist must answer.  Given that all
physical events can be explained physically, and that the things
we say about qualia (including all the things I've typed into this
computer over the past few weeks) are physical events, then it
might seem that the things we say about qualia have got nothing to
do with qualia at all (we'd have said the same things if we were
zombies, after all).  So one is tempted to lapse into some form
of materialism or else interactionist dualism, both of which have
severe problems of their own.

I think that consideration of this question is one of the few ways by
which one might actually make progress on the problem of qualia. My
paper "Consciousness and Cognition" is mostly devoted to just this.
Essentially, given that there exists a physical/functional explanation
of the things we say (or believe) about qualia, we must require
that the explanatory basis of qualia *cohere* in some strong sense with
this physical/functional explanation.  In that paper I make a brief
attempt at characterizing the physical/functional explanation of why we
say the things we do about qualia (not only why we say we have them,
but why they seem so strange and mysterious), and conclude
that it's because later processing in the
brain has access only to *informational* states in earlier processing,
i.e. only to certain raw differences in state that make a causal
difference, rather than to underlying physical states, or to distal
causes in the environment; because of this, the system is simply thrown
into various different states without any explanation as to why.
When you ask the system how it makes a distinction, e.g. between
differently-coloured objects, it doesn't have any good answer available
apart from something along the lines of "they're just different,
qualitatively".
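A purely hypothetical toy sketch of this access limitation (again, not
from the original post): the later stage can distinguish the labels it
receives, but has nothing further to say about why they differ.

```python
# Toy sketch of the information-access point above: later processing
# receives only opaque informational labels from earlier processing,
# never the underlying physical states (or distal causes) behind them.
# Everything here is a hypothetical illustration.

def early_stage(wavelength_nm):
    """Collapse a physical state into a bare informational difference."""
    return "state_A" if wavelength_nm < 550 else "state_B"

def later_stage_compare(label1, label2):
    """The later stage can register that two states differ..."""
    return label1 != label2

def later_stage_explain(label1, label2):
    """...but has no access to the physical basis of the difference."""
    if label1 != label2:
        return "they're just different, qualitatively"
    return "they seem the same"

green = early_stage(520)   # the 520 nm detail is discarded here
red = early_stage(700)     # likewise for 700 nm
print(later_stage_compare(green, red))  # True: a distinction is made
print(later_stage_explain(green, red))  # "they're just different, qualitatively"
```

The asymmetry is the whole point: `later_stage_compare` succeeds while
`later_stage_explain` can only report a bare qualitative difference.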

You can run an explanation like that without invoking *actual* qualia
at all.  But then if one believes in actual qualia, it seems
necessary to at least say that the properties of qualia cohere with the
explanatory basis of why we say/think we have qualia, as otherwise
the things we say wouldn't reflect the properties of qualia at all.
That's another reason why I was led to the view that the basis of
qualia is information-processing, and in fact that one might expect
qualia to arise from even the simplest kinds of information-processing.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


