Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!psuvax1!wupost!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Newsgroups: comp.ai.philosophy
Subject: Re: aliens eat fading qualia
Message-ID: <1992Mar17.230254.10371@bronze.ucs.indiana.edu>
Date: 17 Mar 92 23:02:54 GMT
References: <1992Mar6.185522.18137@oracorp.com> <1992Mar17.044749.20941@bronze.ucs.indiana.edu> <6416@skye.ed.ac.uk>
Organization: Indiana University
Lines: 76

In article <6416@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

>Suppose you replace neurons by broken ones so that after all
>the neurons have been replaced the person is not conscious.
>Then either consciousness blinks out or it fades.  Well, I think 
>it's more likely that it does a little of both.  But would anyone
>seriously make an argument that none of these are plausible?

If the "broken" neurons are functionally inequivalent (so that
behaviour degraded, among other things), then it's quite
plausible that qualia will fade.  My argument is that *if*
the replacement units are functionally equivalent to the old
ones (and maybe you can dispute that this is possible), then
it's *more* plausible that qualia will stick around than that
they will fade or suddenly disappear.

>Another problem is that it relies on one-by-one replacement to make
>it seem plausible.  We're supposed to think: surely we can replace one
>neuron, and at any point surely we can replace one more.  But what
>if we can't.  What if at some point (though it's hard to say exactly
>where) things must have stopped working.  What if, after adding
>enough snowflakes, we finally have a snowstorm.  This is why people
>are right to bring up the "paradox of the heap" in connection with
>this argument.  Even if we think each single replacement is plausible,
>we would not have to conclude that the entire process could be carried
>out from start to end.

I have no idea what the problem is with saying that "the entire process
can be carried out", unless your point is that functional equivalence
could simply not be maintained over large-scale replacement.

>The argument also relies on behavior staying the same in order
>to make any change in consciousness seem more bizarre.  I think
>it's unlikely that behavior would stay the same.  If you start
>replacing someone's neurons, it's more likely that they'll die.

And the same here.  My argument assumes that the replacement gets
the functional organization right (in which case it's *guaranteed*
to get the behaviour right).  Maybe you have some reason to
question that, but if so (a) I don't see it, and (b) it won't
be an argument against functionalism per se (which argues that
*if* you get functional organization right, you get the qualia
right).

>Well, let's try a thought experiment along those lines.  Let's
>suppose there's an alien life form that eats brains but, to keep
>from being detected, it duplicates at the interface between it
>and the rest of the brain the physical effects of the neurons it 
>has eaten.  Let's further suppose that it can do much more in
>a small space than our brains can, so that it can duplicate the
>neurons while it also talks to its friends, writes novels, and
>generally has a good time.  Maybe there's a duplicate of you
>(or the you-system, that is) inside this creature at the end.
>But what happens to the non-duplicate you in the meantime?

For my argument to apply, it will have to preserve the functional
organization of the neurons that it has munched (if it only
preserves the properties of the interface, then we might be
back in McCullough's situation of the lookup-table-muncher),
as this is required to preserve the belief-structure of
the system.

>Of course, all sorts of people will write to say (a) is true.
>But think about it.  You want the argument against Searle
>to _depend_ on this?  Give me the systems reply any day.

In case you haven't noticed, this *is* the systems reply.  It's
not an alternative to it; it's an argument *for* it, trying to
show that (contra Searle) the systems reply is more than an ad
hoc assumption, and has some plausibility that doesn't derive
from a mere blanket assumption of the strong AI hypothesis.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."