Article 4287 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!udel!rochester!kodak!ispd-newsserver!psinntp!scylla!daryl
From: daryl@oracorp.com
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <1992Mar5.171258.11467@oracorp.com>
Date: 5 Mar 92 17:12:58 GMT
Organization: ORA Corporation
Lines: 51

chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

[fading qualia argument deleted]

Dave, it seems to me that your "fading qualia" argument works *too*
well; that is, it can be modified into an argument for a position that
you don't want to hold. That position is sometimes called behaviorism
(though not Skinner's variety): the claim that if system A has the
same external behavior as system B, then the two have the same mental
properties (including qualia). Let me show why I think your argument
could be turned into an argument for behaviorism.

Imagine the following process: We start with a human brain. Now, we
replace one brain cell with a tiny computer running (surprise!) a
table lookup program, which simply looks up the appropriate neural
response to each sequence of neural inputs. Gradually, one by one, we
take each brain cell neighboring the computer and replace the cell and
the computer together with a new table lookup program that is
behaviorally equivalent to the two of them. Eventually, the entire
brain is replaced by one huge table lookup program.
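
(To make that replacement step concrete, here is a toy sketch in
Python. Everything in it, the LookupUnit/tabulate/merge names, the
binary inputs, the bounded history length, is my own invention for
illustration, not part of the thought experiment itself. Each "neuron"
is a pure table from input histories to outputs, and merging just
tabulates the joint behavior of two units so that a single table is
behaviorally equivalent to the pair.)

from itertools import product

class LookupUnit:
    """A neuron (or merged cluster) replaced by a pure table: every
    input history it could see is mapped to a fixed output."""

    def __init__(self, table):
        self.table = table    # maps input-history tuples to outputs
        self.history = ()     # inputs seen so far

    def step(self, inputs):
        self.history += (tuple(inputs),)
        return self.table[self.history]

def tabulate(fn, max_len):
    """Brute-force a lookup table: record fn's output for every
    single-input binary history up to max_len steps."""
    table = {}
    for n in range(1, max_len + 1):
        for hist in product(((0,), (1,)), repeat=n):
            table[hist] = fn(hist)
    return table

def merge(table_a, table_b, histories):
    """The replacement step: tabulate the joint behavior of unit A
    feeding unit B, so one table is behaviorally equivalent to the
    pair."""
    combined = {}
    for hist in histories:
        a, b = LookupUnit(table_a), LookupUnit(table_b)
        prefix = ()
        for inputs in hist:
            prefix += (tuple(inputs),)
            combined[prefix] = b.step((a.step(inputs),))
    return LookupUnit(combined)

# Toy demo: A echoes its latest input, B negates its latest input.
table_a = tabulate(lambda h: h[-1][0], max_len=2)
table_b = tabulate(lambda h: 1 - h[-1][0], max_len=2)
merged = merge(table_a, table_b, histories=list(table_a))
assert merged.step((1,)) == 0    # same external behavior as B(A(1))

(Notice that the pair's behavior is captured by brute-force
enumeration of histories, which is why the end product really is a
*huge* table lookup program: the table grows exponentially in the
length of input history it has to cover.)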

Again, you can use the same fading qualia argument. Either (a) there
is some point at which, suddenly, the qualia disappear, or (b) the
qualia fade gradually, or (c) the table lookup program possesses
qualia as well. Case (a) seems unlikely, since each step involves
removing only one neuron. Case (b) also seems unlikely, since you
would guess that the fading would be apparent to the person's
introspection, and so she would behave differently (for instance, she
might say "Hey, my qualia don't seem as intense as they once did.");
but each replacement is behaviorally equivalent by construction, so
her behavior cannot change. That leaves possibility (c).

On the other hand, I can imagine ways in which qualia could fade
without our being aware of it. For example, suppose that every once
in a while we lose consciousness for a fraction of a second (all our
qualia disappear). But during the period of non-consciousness, our
bodies keep moving right along, behaviorally unchanged. When we come
to our senses a fraction of a second later, our memories have been
updated with whatever sense impressions our bodies received while we
were "out", so we have no memory of losing consciousness. If you admit
that possibility, then you can imagine those moments of
non-consciousness coming more and more frequently, until eventually we
have no consciousness left. But our bodies keep moving right along as
if nothing had happened.
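
(Again, a toy sketch, with run, blackout_prob, and the fake sense
stream all invented for illustration. The point is just that behavior
and memory never consult the "conscious" flag, so dialing the blackout
probability toward 1 changes nothing outwardly.)

import random

def run(steps, blackout_prob):
    # Toy model of the blackout story: behavior and memory are
    # computed without ever consulting the 'conscious' flag, so the
    # gaps leave no behavioral or introspective trace.
    behavior, memory, conscious_steps = [], [], 0
    for t in range(steps):
        conscious = random.random() > blackout_prob
        sense = t % 7                 # stand-in for sense input
        behavior.append(sense * 2)    # response: flag never enters in
        memory.append(sense)          # backfilled even while "out"
        conscious_steps += conscious
    return behavior, memory, conscious_steps

random.seed(0)
b0, m0, c0 = run(100, blackout_prob=0.0)
b1, m1, c1 = run(100, blackout_prob=0.9)
# Same outward life, far less consciousness:
assert (b0, m0) == (b1, m1) and c1 < c0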

I don't think this model fits the case of gradually replacing
neurons, but I think it does show that fading qualia are at least
conceivable.

Daryl McCullough
ORA Corp.
Ithaca, NY
