Article 3121 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Message-ID: <1992Jan24.171454.7033@aisb.ed.ac.uk>
Date: 24 Jan 92 17:14:54 GMT
References: <1992Jan22.235812.11080@bronze.ucs.indiana.edu> <1992Jan23.204125.23190@aisb.ed.ac.uk> <1992Jan23.221609.1443@bronze.ucs.indiana.edu>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 62

In article <1992Jan23.221609.1443@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <1992Jan23.204125.23190@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In any case, there has to be some reason for us to conclude that 
>>there's some sort of virtual person, if we want to say anything
>>more than "maybe, somehow, there's a virtual person".
>
>Well, I've already given my "fading qualia" argument to the conclusion
>that a causally isomorphic silicon brain would have conscious mental
>states.  One can construct a similar continuum from the silicon
>brain to the Chinese room, and from the Chinese room to the
>memorized system.  I find the idea that there are "semi-conscious"
>states halfway in between, while functional isomorphism is retained,
>to be deeply implausible.  Of course this is only a plausibility
>argument, but it's more than "maybe, somehow".

Much of the work is done by the supposition that a causally isomorphic
silicon brain can be made.  And from the silicon brain to the Chinese
Room is an even bigger step.  Searle would argue, I think, that either
you replicate the required "causal powers" or you don't.  The idea of
halfway states doesn't enter into it.

The only way I can see to get an isomorphism is to do it at such
a high level that it becomes hard to see that it helps.  If you
assume that a high-level functional isomorphism is what matters,
it may all look reasonable; but that's something that has to be
shown, not assumed.

>>  There's no way the system can get from syntax to semantics.
>>  I, as the CPU, have no way of figuring out what any of these
>>  symbols mean, but then neither does the whole system.
>
>Similarly, one would suppose, the brain has no way of figuring out
>what its neurons "mean".

How did the idea that neurons "mean" get in here?

>>These passages give us a new point of emphasis.  The program is
>>defined syntactically, but so are the symbols.  The symbols are
>>manipulated as meaningless shapes.  Moreover, the person in the
>>room can't figure out what they mean.  It's hard to see how the
>>Chalmers argument, that programs specify a causal structure,
>>addresses this point.
>
>The presence of the person in the room is entirely irrelevant --
>they are just acting as a facilitator of the causal interaction
>between the symbols.  And the symbols are no more meaningless,
>on the face of it, than neurons are.

And no less meaningless?  Anyway, the idea that symbols correspond
to neurons seems rather odd, to say the least.

Remember that the symbols are Chinese characters (or something like
that).  This is clear for at least some of them, because they're being
used for the I/O of the conversation.  Searle's point, at least some
of the time, is that the person in the room can't figure out what
the symbols in the conversation mean.  All he knows is that squiggle
squiggle follows squoggle squoggle -- things like that.

That programs specify a causal structure doesn't seem to address
this at all.

-- jd


