From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Feb 20 15:22:04 EST 1992
Article 3860 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Message-ID: <6209@skye.ed.ac.uk>
Date: 19 Feb 92 00:20:18 GMT
References: <1992Jan30.001623.12556@bronze.ucs.indiana.edu> <6188@skye.ed.ac.uk> <1992Feb14.000817.11818@bronze.ucs.indiana.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 104

In article <1992Feb14.000817.11818@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <6188@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>>Why?  If you replace a person's neurons with neurons that don't
>>work, what do you think would happen?

I still think this is the right way to think of it, even taking
into account the point about qualia fading while the behavior
remains (the "sees pink, says red" argument).  See below.

>The argument assumes that computational neurons could at least have
>the same powers to cause other neurons to fire, and to cause motor
>movement, that biological neurons do.  That may be a questionable
>assumption, but it is orthogonal to the Chinese room argument;
>Searle himself seems happy to accept it.  

Does Searle accept that artificial neurons could be made that
would cause other neurons to fire merely because the artificial
neurons instantiate the right computer program?

I don't think he does.  And as soon as the artificial neurons have
to have the right physical properties, you can't say the case is
the same as a computer running a brain simulation.  And I don't
think the neurons should be considered as simply computational.

Moreover, there's a lot more going on in the brain than neurons
firing.  Even if we grant that all of it could be simulated in
a computer, that would do almost nothing towards showing that
a real neuron could be replaced by something other than another
real neuron.

Or maybe it's possible to have artificial neurons -- but not ones
that (a) have the physical properties needed for behavior and
for firing other neurons and (b) lack the physical properties
necessary for mind.

Maybe Searle thinks it's possible to have a kind of brain
damage (in effect) that removes mind but allows behavior to
go on as before; but that's hardly a reason for us to agree.

>                                          So these neurons
>certainly "work" in the sense of causing the right firing patterns
>and behaviour.  The question of whether they "work" in the sense
>of "causing a mind" is of course precisely what's at issue.

Ok, so your "fading" question is: could the mind fade though the
behavior remained?

So again think of neurons that don't work.  Maybe they work well
enough for some things to go on, but not others.  There are many cases
where brain damage removes an ability that one wouldn't have thought
was so easily isolated.  (Insert the usual Sacks and Luria
references.)  Maybe it's possible for people to start behaving
automatically w/o consciousness.  Maybe it isn't.  Maybe people
can start to have different qualia, but find they cannot use
the words that seem right any more.  Maybe not.

But nothing says consciousness or qualia have to fade, gradually.
What happens when your neurons are replaced by ones that don't
quite work?  Does it have to get gradually worse?  Maybe certain
features just drop out suddenly, or else become unreliable
rather than the-same-but-weaker.

We have to be careful of arguments that ask us to consider a change in
tiny steps.  Consider some brain damage that removes the ability to
speak.  Consider it happening neuron by neuron.  What happens?  Who
knows?  Maybe all is fine until suddenly ... or maybe odd things
start happening along the way.  But it would be quite wrong to use
such possibilities to argue that brain damage can't cause someone to
lose the ability to speak.

But suppose brain damage that removes mind but leaves behavior intact
is impossible.  How would that do anything to show that mind was
computational?

>>>The Chinese room doesn't have to be a brain simulation, but it can
>>>be, as Searle himself grants.
>>
>>I do not agree with this sort of move.  Searle presents several
>>arguments.  The "classic" Chinese Room is _not_ a brain simulation.
>>Maybe you and Searle think it could just as well be a brain
>>simulation, but maybe you and Searle are wrong.  To use an argument
>>that applies to brain simulation against the classic Chinese Room,
>>you have to show that it applies, not just argue that Searle would
>>accept it.
>
>Searle's argument is meant to be a universal one, applying to
>any program that produces the right behaviour.  So exhibiting a
>single counterexample is enough.

That depends.  The counterexample might work only for that one
case.  I'm not even sure that Searle's argument needs to be
changed to limit the effects of the counterexample.  That's
why I mention that the CR Classic is not a brain simulation.
However, I will agree that this is a fairly weak argument.
(I still don't agree with that _sort_ of move, though.)

In any case, I think the more dubious step is in going from artificial
neurons to brain simulation programs.  Brain simulations don't have to
be able to cause any real neurons to fire.  Indeed, we might be able
to write a brain simulation but not have a clue how to make an
artificial neuron.

-- jd


