From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Mon Mar  9 18:34:39 EST 1992
Article 4205 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Definition of understanding
Message-ID: <1992Mar3.000217.18401@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Feb28.165550.13014@psych.toronto.edu> <1992Mar2.031342.27459@bronze.ucs.indiana.edu> <44170@dime.cs.umass.edu>
Date: Tue, 3 Mar 92 00:02:17 GMT
Lines: 51

In article <44170@dime.cs.umass.edu> orourke@sophia.smith.edu (Joseph O'Rourke) writes:
>In article <1992Mar2.031342.27459@bronze.ucs.indiana.edu> 
>	chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>>
>>I gave such an argument a while ago
>>with the "fading qualia" thought-experiment (it should be clear
>>enough how this applies to the memorization case).
>
>	It is not clear to me, which maybe only shows I didn't
>pay close enough attention to your original argument.  Although
>it is not fair to request endless recapitulations, a précis
>of your fading qualia argument as applied to the system memorizer
>might help raise the level of discussion.

OK.  To make things easier, let the memorizer be a tiny little demon
who runs around inside a human head, taking inputs from sensory organs,
performing her memorized computations on these, and stimulating
motor movements in the appropriate way according to the result of
the computations.
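
(To fix ideas, here is a throwaway Python sketch of the demon's
job; every name in it, from MemorizedProgram down to the sensor and
motor stubs, is an illustrative invention of mine, not part of the
argument.)

    class MemorizedProgram:
        """Stand-in for the demon's memorized computation: any
        function of sensory input and internal state that matches
        the brain's input-output behaviour.  Purely illustrative."""
        def initial_state(self):
            return 0

        def step(self, state, inputs):
            # Trivial placeholder for the memorized computations.
            outputs = [x + state for x in inputs]
            return state + 1, outputs

    def read_sensors():
        return [0.0, 1.0]           # stub for the sensory organs

    def drive_motors(outputs):
        print("motor commands:", outputs)  # stub for motor stimulation

    def demon_loop(program, steps=3):
        # The demon's whole job: sense, compute, actuate, repeat.
        state = program.initial_state()
        for _ in range(steps):
            state, outputs = program.step(state, read_sensors())
            drive_motors(outputs)

    demon_loop(MemorizedProgram())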

To construct the continuum from a normal brain to a memorizer,
under the usual assumption that the functionality of a single
neuron is computable: to make things easier, say we've already
reached the stage where computational silicon neurons are doing all
the work.
Then our demon will go around, getting rid of these neurons one
by one, and instead simulating them -- i.e. measuring the "input"
to the neuron (e.g. firing rate of nearby neurons), performing
computations, and using transducers to produce the same "output"
that the neuron would produce.  Gradually, as both members of a
pair of neighbouring neurons are eliminated, the transducers
between them become unnecessary, since the demon can simulate the
causal link between them directly.  Eventually, the demon has all
the "internal" computation memorized, and the only real work she
has to do involves taking inputs from sensory organs and
stimulating motor movements.  That is, we have a straightforward
Chinese-room-style homunculus, under the conditions of the robot
reply.
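
(The continuum itself can be sketched in code too, under the post's
own assumption that a neuron's input-output function is computable.
The toy feed-forward wiring and all the class names below are my
inventions; the point is only that behaviour is preserved at every
stage of the replacement.)

    class Neuron:
        """Toy neuron: output is a fixed function of its inputs."""
        def __init__(self, weight):
            self.weight = weight

        def fire(self, inputs):
            return self.weight * sum(inputs)

    class SimulatedNeuron(Neuron):
        """The demon's replacement: computes the very same
        input-output function, so nothing downstream can tell the
        difference.  (Transducers are left implicit here.)"""
        pass

    def run_network(neurons, stimulus):
        # Simple feed-forward chain: each neuron feeds the next.
        signal = [stimulus]
        for n in neurons:
            signal = [n.fire(signal)]
        return signal[0]

    weights = [0.5, 2.0, 1.5]
    biological = [Neuron(w) for w in weights]
    for k in range(len(weights) + 1):
        # Stage k of the continuum: first k neurons are simulated.
        hybrid = ([SimulatedNeuron(w) for w in weights[:k]]
                  + [Neuron(w) for w in weights[k:]])
        assert run_network(hybrid, 1.0) == run_network(biological, 1.0)

At every stage the network's external behaviour is unchanged; the
question is what happens to the qualia along the way.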

The usual fading qualia arguments apply: Did the person's qualia
gradually fade as the neurons got replaced?  It seems implausible
that a conscious being could be so wrong about its qualia.  Did
the qualia suddenly disappear once a certain threshold was reached?
That seems more implausible.  The remaining alternative is that
the qualia stay around, but whereas they were once based in a
causally-connected network of neurons, they're now based in an
isomorphic causal network of computations that happens to lie
inside the demon's brain.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
