From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Fri Jan 31 10:27:07 EST 1992
Article 3278 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Virtual Person?
Message-ID: <1992Jan30.001623.12556@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Jan24.171454.7033@aisb.ed.ac.uk> <1992Jan29.004822.23755@bronze.ucs.indiana.edu> <1992Jan29.190105.25334@aisb.ed.ac.uk>
Date: Thu, 30 Jan 92 00:16:23 GMT
Lines: 52

In article <1992Jan29.190105.25334@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

>>There's nothing too controversial about that assumption, given the
>>assumption that neural function is computable, which Searle is
>>prepared to grant for the sake of his argument.  
>
>Perhaps you'd better tell me where Searle grants this assumption so
>that I can check for myself that it does the work you say it does.

MBP, Brain Simulator reply, last paragraph.

>There are many properties of neurons that would be different
>in silicon.  I don't see how you can just assume that all relevant
>causal structure will be replicated, much less that there will
>be causal isomorphism.  But in any case, the idea of fading qualia
>is just something you've imagined.  If you replicate enough of
>the properties of neurons, nothing will fade.  If you don't, it
>will be like replacing neurons with something that doesn't quite
>work.  How that would seem to the person in question is not
>something we can answer.  Maybe they fade.  Maybe they're fine
>until they suddenly die.  Maybe they turn into politicians.

My point is simply that the fading and sudden-disappearance
cases seem to be quite implausible, though of course they're
possible.  This person with the half-silicon brain would be
conscious, but not nearly as conscious as they think they are?
It seems to me that the most natural conclusion is that fading
and sudden disappearance are unreasonable.

>The reason people talk about replacing neurons one by one
>(rather than building a complete brain) is to get us to think
>"well, replacing one neuron wouldn't make much difference,
>or two, or three, ... and so you could replace all of them."
>And talk of "fading qualia" is also to get us to see the
>change as continuous.
>
>But that argument isn't necessarily any better than the one that
>says there are no snowstorms because one snowflake isn't, and
>two snowflakes aren't, and so on.  

This isn't my argument, as I made clear at length a month ago.

>But the Chinese Room doesn't have that organization.  After all,
>it isn't a brain simulation.

The Chinese room doesn't have to be a brain simulation, but it can
be, as Searle himself grants.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
