Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!zaphod.mps.ohio-state.edu!mips!news.cs.indiana.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Message-ID: <1992Jan23.221609.1443@bronze.ucs.indiana.edu>
Date: 23 Jan 92 22:16:09 GMT
References: <1992Jan22.224344.7404@aisb.ed.ac.uk> <1992Jan22.235812.11080@bronze.ucs.indiana.edu> <1992Jan23.204125.23190@aisb.ed.ac.uk>
Organization: Indiana University
Lines: 46

In article <1992Jan23.204125.23190@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

>Perhaps only persons can understand.  That is, maybe the shorthand
>doesn't actually shorten.

Perhaps so.  I'm only concerned with establishing that the system
might have conscious mental states.  I have no idea precisely
what the conditions for "personhood" are.  If conscious mental
states are sufficient for personhood, then fine, the system is a person.

>In any case, there has to be some reason for us to conclude that 
>there's some sort of virtual person, if we want to say anything
>more than "maybe, somehow, there's a virtual person".

Well, I've already given my "fading qualia" argument to the conclusion
that a causally isomorphic silicon brain would have conscious mental
states.  One can construct a similar continuum from the silicon
brain to the Chinese room, and from the Chinese room to the
memorized system.  I find the idea that there are "semi-conscious"
states halfway in between, while functional isomorphism is retained,
to be deeply implausible.  Of course this is only a plausibility
argument, but it's more than "maybe, somehow".

>  There's no way the system can get from syntax to semantics.
>  I, as the CPU, have no way of figuring out what any of these
>  symbols mean, but then neither does the whole system.

Similarly, one would suppose, the brain has no way of figuring out
what its neurons "mean".

>These passages give us a new point of emphasis.  The program is
>defined syntactically, but so are the symbols.  The symbols are
>manipulated as meaningless shapes.  Moreover, the person in the
>room can't figure out what they mean.  It's hard to see how the
>Chalmers argument, that programs specify a causal structure,
>addresses this point.

The presence of the person in the room is entirely irrelevant --
they are just acting as a facilitator of the causal interaction
between the symbols.  And the symbols are no more meaningless,
on the face of it, than neurons are.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."