From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Wed Dec 18 16:02:49 EST 1991
Article 2249 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Searle's response to silicon brain?
Message-ID: <1991Dec18.193242.10535@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <40822@dime.cs.umass.edu>
Date: Wed, 18 Dec 91 19:32:42 GMT
Lines: 61

In article <40822@dime.cs.umass.edu> orourke@sophia.smith.edu (Joseph O'Rourke) writes:

>Can anyone tell me if Searle has reacted to the counter-
>Gedanken experiment of replacing each neuron in a brain
>with a silicon, digital, neuron simulator?  As I understand
>his position, he would have to maintain that such a modified
>human does not understand what they utter, even though their
>performance is no different from a normal human.

As far as I know Searle is agnostic on this matter.  He would simply
say that if such a machine were conscious, it would not be merely
in virtue of implementing the right program -- the Chinese room
argument shows that.

Still, I think that this can be turned into an argument against
Searle that has some force.  First, we have to replace the neurons
one by one instead of all at once.  Second, we can allow that
instead of replacing them by silicon, we can replace them by
*anything* that's functionally equivalent -- including a
Chinese-room-style simulation of a neuron that computes the neuron's
output as a function of its input, and is linked to synaptic
transmissions in the appropriate way.  In replacing the neurons,
each time a pair of connected simulated neurons comes up, we can
dispense with their synaptic connection altogether and simulate it,
as well.  Eventually we arrive at a Chinese-room-style simulation of
the whole nervous system, connected to the world via input/output
receptors.  Of course the details have to be spelt out a lot better
than this, but you get the idea.
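The replacement procedure can be caricatured in a toy program (purely my own illustration, not anything from the argument itself -- the `Neuron` and `SimulatedNeuron` classes, weights, and stimulus are all invented for the sketch).  The point it makes concrete is the functional-equivalence constraint: swapping each unit for a lookup-table simulator, one at a time, never changes the system's input/output behaviour at any stage.

```python
# Toy illustration of neuron-by-neuron replacement by functional equivalents.
# A "biological" neuron is caricatured as a thresholded weighted sum; its
# replacement computes the same input-output function by table lookup.

from itertools import product

class Neuron:
    """Caricature neuron: fires (1) iff its weighted input sum meets a threshold."""
    def __init__(self, weights, threshold):
        self.weights = weights
        self.threshold = threshold
    def fire(self, inputs):
        return int(sum(w * x for w, x in zip(self.weights, inputs)) >= self.threshold)

class SimulatedNeuron:
    """A 'Chinese-room-style' replacement: the same I/O function, realized
    as a lookup table over all possible binary input patterns."""
    def __init__(self, neuron, n_inputs):
        self.table = {inp: neuron.fire(inp)
                      for inp in product((0, 1), repeat=n_inputs)}
    def fire(self, inputs):
        return self.table[tuple(inputs)]

def network_output(layer, inputs):
    """Feed the same external stimulus to every unit in a one-layer 'nervous system'."""
    return [unit.fire(inputs) for unit in layer]

# A small network, replaced one unit at a time; behaviour never changes.
layer = [Neuron([1, 1], 2), Neuron([1, -1], 1), Neuron([2, 1], 1)]
stimulus = (1, 0)
baseline = network_output(layer, stimulus)

for i in range(len(layer)):
    layer[i] = SimulatedNeuron(layer[i], n_inputs=2)
    # Full functional equivalence holds at every intermediate stage.
    assert network_output(layer, stimulus) == baseline
```

Each intermediate state of the loop corresponds to one of the "intermediate states" at issue below: behaviourally identical to the original, however much of the substrate has been swapped out.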

Now of course the question Searle has to answer is what happens to
the consciousness along the way.  At one end, we have full
consciousness; at the other end, if we believe Searle, we have none,
but what of the intermediate states?  Searle has to accept either
(a) that consciousness suddenly blinks off at some stage; or (b)
that it gradually fades out, with states of semi-consciousness along
the way -- but with full functional equivalence.  For various reasons
I don't think that either of these is very plausible.  If one doesn't
accept the possibility of these two phenomena (suddenly disappearing
consciousness or fading consciousness, with full functional
equivalence), then we're led into a reductio of the original
assumption that one end of the spectrum isn't conscious.

Of course this is only a plausibility argument (I think that fading
qualia as described above are conceptually possible, but are unlikely
to be nomologically possible), but it does provide what Searle says
that strong AI lacks -- a positive reason to believe the Systems
reply, above and beyond one's prior commitment to the strong AI
position.  The argument is developed in a more general context in my
paper "Absent Qualia, Fading Qualia, Dancing Qualia" -- though of
course versions of this thought-experiment have been in the air for
years.

The argument won't have any force against those who believe that
neural function is non-computable, or that neurons aren't
responsible for the causation of behaviour, but Searle's argument
is supposed to be independent of those considerations.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
