From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!network.ucsd.edu!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Mar 24 09:56:25 EST 1992
Article 4515 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!network.ucsd.edu!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: aliens eat fading qualia
Message-ID: <6416@skye.ed.ac.uk>
Date: 17 Mar 92 22:31:22 GMT
References: <1992Mar6.185522.18137@oracorp.com> <1992Mar17.044749.20941@bronze.ucs.indiana.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 90

In article <1992Mar17.044749.20941@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>I agree that fading qualia are conceivable.  My argument was that
>fading qualia with preserved functional organization are empirically
>unlikely.

Suppose you replace neurons by broken ones so that after all
the neurons have been replaced the person is not conscious.
Then either consciousness blinks out or it fades.  Well, I think
it's more likely that it does a little of both.  But would anyone
seriously argue that none of these is plausible?

So what's different about the "fading qualia" argument?  Only
that functional organization is maintained.  Supposedly, this is
enough for continuity of behavior.  And if we take the functional
organization to be something a computer could get merely by running
the right program, then there might be some relevance to Searle's
arguments.

But if functional organization isn't enough for consciousness,
then we're just dealing with broken neurons again.

I still don't see how fading qualia can do any damage to Searle's
arguments, and I think that this is a sufficiently isolated part
of the larger discussion that we may actually be able to reach a
resolution in this case.

One problem with the argument is that all it shows is that
brain simulations could be conscious.  If that's the best anyone
can do against Searle, then I'm not impressed.

Another problem is that it relies on one-by-one replacement to make
it seem plausible.  We're supposed to think: surely we can replace one
neuron, and at any point surely we can replace one more.  But what
if we can't?  What if at some point (though it's hard to say exactly
where) things must stop working?  What if, after adding enough
snowflakes, we finally have a snowstorm?  This is why people are
right to bring up the "paradox of the heap" in connection with this
argument.  Even if we think each single replacement is plausible, we
don't have to conclude that the entire process could be carried out
from start to finish.

The argument also relies on behavior staying the same in order
to make any change in consciousness seem more bizarre.  I think
it's unlikely that behavior would stay the same.  If you start
replacing someone's neurons, it's more likely that they'll die.

It should be clear that you can't just replace neurons with
fake neurons that have completely the wrong physical properties.
Chalmers seems to think that there would be an interface between
the computer simulation and the remaining neurons, and that it
would have enough of the physical properties to keep things going.

Well, let's try a thought experiment along those lines.  Let's
suppose there's an alien life form that eats brains but, to keep
from being detected, duplicates, at the interface between it and
the rest of the brain, the physical effects of the neurons it has
eaten.  Let's further suppose that it can do much more in
a small space than our brains can, so that it can duplicate the
neurons while it also talks to its friends, writes novels, and
generally has a good time.  Maybe there's a duplicate of you
(or the you-system, that is) inside this creature at the end.
But what happens to the non-duplicate you in the meantime?

Well, let's just run this by the fading qualia argument:

   Now of course the question Searle has to answer is what happens to
   the consciousness along the way.  At one end, we have full
   consciousness; at the other end, if we believe Searle, we have
   none, but what of the intermediate states?  Searle has to accept
   either (a) that consciousness suddenly blinks off at some stage; or
   (b) that it gradually fades out, with states of semi-consciousness
   along the way -- but with full functional equivalence.  For various
   reasons I don't think that either of these are too plausible.  If
   one doesn't accept the possibility of these two phenomena (suddenly
   disappearing consciousness or fading consciousness, with full
   functional equivalence), then we're led into a reductio of the
   original assumption that one end of the spectrum isn't conscious.

So either (a) you're still conscious at the end (even though _you_
have no brain left), or (b) fading or blinking out isn't so
implausible after all, or (c) it's impossible to duplicate
functionality at the interface.  (a) is false.  Both (b) and
(c) are fatal to the fading qualia argument.

Of course, all sorts of people will write to say (a) is true.
But think about it.  You want the argument against Searle
to _depend_ on this?  Give me the systems reply any day.

-- jd


