Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Definition of understanding
Message-ID: <1992Mar17.044749.20941@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Mar6.185522.18137@oracorp.com>
Date: Tue, 17 Mar 92 04:47:49 GMT

Daryl McCullough writes:

>Imagine the following process: We start with a human brain. Now, we
>replace one brain cell by a tiny computer running (surprise!) a table
>lookup program, which simply looks up the appropriate neural response
>to each sequence of neural inputs. Gradually, one by one, we take each
>brain cell neighboring the computer, and replace the cell and the
>computer by a new table lookup program that is behaviorally equivalent
>to the two of them. Eventually, the entire brain is replaced by a huge
>table lookup program.

This is a nice example.  I had been wondering about the possibility of
a continuum from brains to lookup tables, but hadn't come up with an
example.  Of course the computer that we replace a brain cell with
won't be "tiny" but humongous, as usual, since it has to handle all
possible sequences of inputs, with the resultant combinatorial
explosion.  But set that aside as irrelevant.
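
As an aside, here's a minimal sketch of such a table-lookup neuron
(my own illustration in Python, not anything from Daryl's post; the
function names and the toy firing rule are purely hypothetical).  It
tabulates a neuron's response to every possible history of binary
inputs, which makes the blow-up concrete: with n input lines and a
history of t time steps, the table needs 2^(n*t) entries.

from itertools import product

def build_lookup_neuron(neuron_fn, n_inputs, history_len):
    # Tabulate the response to every possible history of binary
    # input vectors: 2**(n_inputs * history_len) entries in all.
    table = {}
    for history in product(product((0, 1), repeat=n_inputs),
                           repeat=history_len):
        table[history] = neuron_fn(history)
    return table

def toy_neuron(history):
    # Hypothetical firing rule: fire iff at least two input lines
    # were active on the most recent time step.
    return int(sum(history[-1]) >= 2)

table = build_lookup_neuron(toy_neuron, n_inputs=3, history_len=2)
print(len(table))  # already 64 entries, for 3 lines and 2 steps

Daryl's merge step just folds a unit and its neighbour into one
bigger table of the same kind, so by the end the whole brain is a
single table keyed on entire input histories.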

This still isn't exactly parallel to my fading qualia argument, as
that argument relied on functional organization being preserved
throughout (this was what ensured that cognitive states like
beliefs would be preserved, whether or not qualia were).  Your
example tampers with functional organization, but keeps behavioural
equivalence.

Because of that difference, I think it's more plausible that qualia
would fade in this case.  In your example, the cognitive structure of
the creature is gradually being "eaten up" by an all-consuming,
relatively structureless lookup table.  What is important is that,
due to the changing organization, there can actually be a change in
the belief-structure of the organism.  So if qualia are to fade, we
won't have what happened in my case: faded qualia accompanied by the
same cognitive structure of beliefs about qualia, etc.

Of course we will still have a creature that makes *utterances*
like "I have qualia", but that doesn't suffice for belief.  The
fading qualia argument as I presented it assumes functionalism about
beliefs and other cognitive states, which is a fairly standard view
in the philosophy of mind.  For your argument to succeed, one
would need to assume behaviourism about beliefs etc., which is
generally regarded as false (though from previous postings, I
think that you might be inclined to defend it).

So I don't think it's true that this argument undermines my own
case, though it's certainly a very interesting thought-experiment
to consider.  What might it be like to actually undergo this
operation?

>Again, you can use the same fading qualia argument. Either (a) there
>is some point at which, suddenly, the qualia disappear, or (b) the
>qualia fade gradually, or (c) the table lookup program possesses
>qualia, as well. Case (a) seems unlikely, since each step only
>involves removing one neuron. Case (b) also seems unlikely, since you
>would guess that the fading would be apparent to the person's
>introspection, and so she would behave differently (for instance, she
>might say "Hey, my qualia don't seem as intense as they once did.").
>That leaves possibility (c).

I'd argue for case (b), but I'd say that the person's introspective
faculty itself is gradually being degraded by the encroaching
lookup table.  (Note the lack of parallel with the original case:
this response would be implausible if functional organization were
preserved, since introspection just is a matter of the right kind of
functional organization.)  So the lack of coherence between behaviour
and qualia is not such a problem.

>On the other hand, I can imagine ways in which qualia could fade
>without our being aware of it. For example, suppose that every once
>in a while we lose consciousness for a fraction of a second (all our
>qualia disappear). [...]
>
>I don't think that this model would fit the case of gradually
>replacing neurons, but I think it does show that fading qualia is at
>least conceivable.

I agree that fading qualia are conceivable.  My argument was that
fading qualia with preserved functional organization are empirically
unlikely.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


