From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Feb 20 15:20:36 EST 1992
Article 3714 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Message-ID: <6188@skye.ed.ac.uk>
Date: 13 Feb 92 22:48:52 GMT
References: <1992Jan29.004822.23755@bronze.ucs.indiana.edu> <1992Jan29.190105.25334@aisb.ed.ac.uk> <1992Jan30.001623.12556@bronze.ucs.indiana.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 42

In article <1992Jan30.001623.12556@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <1992Jan29.190105.25334@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

[fading qualia]

>My point is simply that the fading case and sudden disappearance
>cases seem to be quite implausible, though of course they're
>possible.  This person with the half-silicon brain would be
>conscious, but not nearly as conscious as they think they are?
>It seems to me that the most natural assumption is that fading
>and sudden disappearance are unreasonable.

Why?  If you replace a person's neurons with neurons that don't
work, what do you think would happen?

If Searle is right, computational neurons would either (1)
have the "causal powers" required for a mind, though not merely
by virtue of instantiating a program, and all would be well;
or (2) not have these powers and hence have effects like those
of broken or nonfunctional neurons.

>>But the Chinese Room doesn't have that organization.  After all,
>>it isn't a brain simulation.
>
>The Chinese room doesn't have to be a brain simulation, but it can
>be, as Searle himself grants.

I do not agree with this sort of move.  Searle presents several
arguments.  The "classic" Chinese Room is _not_ a brain simulation.
Maybe you and Searle think it could just as well be a brain
simulation, but maybe you and Searle are wrong.  To use an argument
that applies to brain simulation against the classic Chinese Room,
you have to show that it applies, not just argue that Searle would
accept it.

(Note, BTW, that now I can't use "Chinese Room" but have to
resort to "classic Chinese Room", if even that works.  So,
as I mentioned in another case, the effect of an approach can
be to make it harder for one side to express what they need
to say.)

-- jd
