Article 3233 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Virtual Person?
Message-ID: <1992Jan29.004822.23755@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Jan23.204125.23190@aisb.ed.ac.uk> <1992Jan23.221609.1443@bronze.ucs.indiana.edu> <1992Jan24.171454.7033@aisb.ed.ac.uk>
Date: Wed, 29 Jan 92 00:48:22 GMT
Lines: 72

[I posted this a few days ago, but it didn't seem to make it out.]

In article <1992Jan24.171454.7033@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

>Much of the work is done by the supposition that a causally isomorphic
>silicon brain can be made.  And from the silicon brain to the Chinese
>Room is an even bigger step.  Searle would argue, I think, that either
>you replicate the required "causal powers" or you don't.  The idea of
>1/2-way states doesn't enter into it.

There's nothing too controversial about that supposition, given the
assumption that neural function is computable, which Searle is
prepared to grant for the sake of his argument.  We just replace
the neurons one by one with silicon modules, or with tiny little
homunculi doing the work, or with one homunculus who runs around
doing the work, or with a lazy homunculus who gradually decides
he doesn't need to run around so much if he can keep track of all
the internal causal relations on paper.

I'm not presupposing that these systems have identical "causal powers"
in the sense of their ability to "cause" a mind, of course.  I'm
just supposing that these systems have a causal organization that
corresponds directly to the causal organization between neurons.
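
To make that concrete, here's a toy sketch in Python (entirely
illustrative -- the weights, thresholds and two-layer topology are
made up, not a brain model).  Two implementations of a unit compute
the same input/output function by different internal routes; plug
either into the same network and behaviour is identical, because what
is preserved is the pattern of causal relations between units, not
the stuff the units are made of.

def biological_unit(inputs, weights, threshold):
    # Original "neuron": fires iff weighted input exceeds threshold.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) > threshold else 0

def silicon_unit(inputs, weights, threshold):
    # Replacement part: a different internal route to the very same
    # input/output function.
    total = 0.0
    for i, w in zip(inputs, weights):
        total = total + i * w
    return int(total > threshold)

def run_network(unit, stimulus):
    # One causal organization, whichever implementation we plug in:
    # two hidden units drive a single output unit.
    hidden = [unit(stimulus, w, 0.5) for w in ([0.4, 0.7], [0.9, -0.2])]
    return unit(hidden, [1.0, 1.0], 0.5)

for stimulus in ([0, 0], [0, 1], [1, 0], [1, 1]):
    assert run_network(biological_unit, stimulus) == \
           run_network(silicon_unit, stimulus)

The assert never fires: as far as the network's causal organization
goes, the two versions are the same system.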

>>Similarly, one would suppose, the brain has no way of figuring out
>>what its neurons "mean".
>
>How did the idea that neurons "mean" get in here?

It didn't.  The point is that the symbols in the Chinese room may
well not "mean", either.  They're just internal mediators.  Any
"meaning" may lie at a much higher level, as with neurons.

>And no less meaningless?  Anyway, the idea that symbols correspond
>to neurons seems rather odd, to say the least.

Not the way that I look at it.  These symbols are physical objects
standing in complex causal relations to one another, and it is from
that causal structure that behaviour is produced.  Just like neurons.

>Remember that the symbols are Chinese characters (or something like
>that).  This is clear for at least some of them, because they're being
>used for the I/O of the conversation.  Searle's point, at least some
>of the time, is that the person in the room can't figure out what
>the symbols in the conversation mean.  All he knows is that squiggle
>squiggle follows squoggle squoggle -- things like that.

Only a trivial few of the symbols will be Chinese characters -- the
peripheral ones for input/output.  These will presumably constitute
a minuscule fraction of the symbols involved.  One would presume
that any "understanding" will supervene on the internal processing,
not on the peripheral characters, just as with brains.  And if one
goes to the Robot Reply, the idea that single symbols are "meaningful"
disappears completely, with input/output "symbols" being pixels
or muscle stimulations.
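
A deliberately trivial sketch, again in Python (the two-character
"language" and the echo-in-reverse rule are mine, purely for
illustration): characters appear only at the boundary, where they are
immediately traded for opaque internal tokens, and all of the work is
done over those tokens.

INPUT_CODE = {"你": 0, "好": 1}     # peripheral: character -> token
OUTPUT_CODE = {0: "你", 1: "好"}    # peripheral: token -> character

def internal_step(tokens):
    # All the "real" processing happens here, over opaque integers;
    # nothing at this level is a Chinese character.  (Here: reverse.)
    return list(reversed(tokens))

def room(sentence):
    tokens = [INPUT_CODE[ch] for ch in sentence]    # characters vanish
    tokens = internal_step(tokens)                  # bulk of the work
    return "".join(OUTPUT_CODE[t] for t in tokens)  # characters reappear

print(room("你好"))    # -> "好你"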

A lot of the confusion surrounding the Chinese room argument, in my
opinion, stems from a conflation of two separate meanings of the word
"symbol".  "Symbol" can mean "representation" (i.e. an object that
denotes something, or bears some meaning), or it can mean "computational
token", i.e. an object that gets shoved around in an implementation
of a computer program, but these are quite distinct.  Computational
tokens certainly need not be representations (as evidenced e.g. by
connectionist systems, where representation lies at the level of
whole patterns of computational tokens).  My paper "Subsymbolic
computation and the Chinese room" (forthcoming in a book from
Erlbaum) goes into this in more detail.
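
One more sketch, to pin down the token/representation distinction (my
illustration, not anything from the paper; the unit count and random
binary codes are arbitrary): in a connectionist-style scheme each
concept is carried by a pattern of activation over many units, and no
single unit, taken alone, denotes anything.

import random
random.seed(0)

UNITS = 16
CONCEPTS = ("dog", "cat", "tree")

# Each concept gets a pattern of activation over all sixteen units.
codes = {c: [random.choice([0.0, 1.0]) for _ in range(UNITS)]
         for c in CONCEPTS}

# Each unit's activation is a computational token: the implementation
# shoves it around, but on its own it denotes nothing.  With three
# concepts and only two possible values, at least two concepts must
# agree on any single unit, so no one unit can discriminate them.
print({c: code[7] for c, code in codes.items()})

# Only the whole pattern across units distinguishes one concept from
# another: representation lives at the level of the pattern.
def same_pattern(a, b):
    return all(x == y for x, y in zip(codes[a], codes[b]))

print(same_pattern("dog", "cat"))   # almost surely False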

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
