Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Message-ID: <1992Jan29.190105.25334@aisb.ed.ac.uk>
Date: 29 Jan 92 19:01:05 GMT
References: <1992Jan23.221609.1443@bronze.ucs.indiana.edu> <1992Jan24.171454.7033@aisb.ed.ac.uk> <1992Jan29.004822.23755@bronze.ucs.indiana.edu>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 59

In article <1992Jan29.004822.23755@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>[I posted this a few days ago, but it didn't seem to make it out.]
>
>In article <1992Jan24.171454.7033@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>>Much of the work is done by the supposition that a causally isomorphic
>>silicon brain can be made.  And from the silicon brain to the Chinese
>>Room is an even bigger step.  Searle would argue, I think, that either
>>you replicate the required "causal powers" or you don't.  The idea of
>>1/2-way states doesn't enter into it.
>
>There's nothing too controversial about that assumption, given the
>assumption that neural function is computable, which Searle is
>prepared to grant for the sake of his argument.  

Perhaps you'd better tell me where Searle grants this assumption so
that I can check for myself that it does the work you say it does.

Anyway, just to remind us, here's what I was answering:

  Well, I've already given my "fading qualia" argument to the conclusion
  that a causally isomorphic silicon brain would have conscious mental
  states.  One can construct a similar continuum from the silicon
  brain to the Chinese room, and from the Chinese room to the
  memorized system.

I think every step of your similar continuum is questionable.

There are many properties of neurons that would be different
in silicon.  I don't see how you can just assume that all relevant
causal structure will be replicated, much less that there will
be causal isomorphism.  But in any case, the idea of fading qualia
is just something you've imagined.  If you replicate enough of
the properties of neurons, nothing will fade.  If you don't, it
will be like replacing neurons with something that doesn't quite
work.  How that would seem to the person in question is not
something we can answer.  Maybe they fade.  Maybe they're fine
until they suddenly die.  Maybe they turn into politicians.

The reason people talk about replacing neurons one by one
(rather than building a complete brain) is to get us to think
"well, replacing one neuron would't make much difference,
or two, or three, ... and so you could replace all of them."
And talk of "fading qualia" is also to get us to see the
change as continuous.

But that argument isn't necessarily any better than the one that
says there are no snowstorms because one snowflake isn't a
snowstorm, and two snowflakes aren't, and so on.

>I'm not presupposing that these systems have identical "causal powers"
>in the sense of their ability to "cause" a mind, of course.  I'm
>just supposing that these systems have a causal organization that
>corresponds directly to the causal organization between neurons.

But the Chinese Room doesn't have that organization.  After all,
it isn't a brain simulation.

-- jd


