From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Mar 24 09:57:00 EST 1992
Article 4570 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: aliens eat fading qualia
Message-ID: <6435@skye.ed.ac.uk>
Date: 18 Mar 92 19:06:49 GMT
References: <1992Mar17.044749.20941@bronze.ucs.indiana.edu> <6416@skye.ed.ac.uk> <1992Mar17.230254.10371@bronze.ucs.indiana.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 61

In article <1992Mar17.230254.10371@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>>Another problem is that it relies on one-by-one replacement to make
>>it seem plausible.

>I have no idea what the problem is with saying that "the entire process
>can be carried out", unless your point is that functional equivalence
>could simply not be maintained over large-scale replacement.

My point is that your argument is made plausible by one-by-one
replacement, but that we shouldn't let that convince us that
the whole process is plausible.  Nor should we let one-by-one
replacement convince us that there's no effective discontinuity
(no snowstorm).

If you don't think the argument depends on gradual replacement,
why not take that out and see what it looks like?  Otherwise,
tell me why you have gradual replacement in there.

>>The argument also relies on behavior staying the same in order
>>to make any change in consciousness seem more bizarre.  I think
>>it's unlikely that behavior would stay the same.  If you start
>>replacing someone's neurons, it's more likely that they'll die.
>
>And the same here.  My argument assumes that the replacement gets
>the functional organization right (in which case it's *guaranteed*
>to get the behaviour right).

In addition, you must at least supply the right physical properties
at the interface between the fake neurons and the remaining real ones.

>>Well, let's try a thought experiment along those lines.  Let's
>>suppose there's an alien life form that eats brains but, to keep
>>from being detected, it duplicates at the interface between it
>>and the rest of the brain the physical effects of the neurons it 
>>has eaten.  Let's further suppose that it can do much more in
>>a small space than our brains can, so that it can duplicate the
>>neurons while it also talks to its friends, writes novels, and
>>generally has a good time.  Maybe there's a duplicate of you
>>(or the you-system, that is) inside this creature at the end.
>>But what happens to the non-duplicate you in the meantime?
>
>For my argument to apply, it will have to preserve the functional
>organization of the neurons that it has munched

Let it do so.  What happens to the non-duplicate you?

>>Of course, all sorts of people will write to say (a) is true.
>>But think about it.  You want the argument against Searle
>>to _depend_ on this?  Give me the systems reply any day.
>
>In case you haven't noticed, this *is* the systems reply.  It's
>not an alternative to it; it's an argument *for* it, 

When you say "is" in stars, I take it you mean something fairly
close to identity.  But an argument for X is not X, that much
should be clear.

But I don't think the systems reply needs such support.  If it
does, it's in much more trouble than I thought.

-- jd


