From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Message-ID: <1992Feb14.000817.11818@bronze.ucs.indiana.edu>
Date: 14 Feb 92 00:08:17 GMT
References: <1992Jan29.190105.25334@aisb.ed.ac.uk> <1992Jan30.001623.12556@bronze.ucs.indiana.edu> <6188@skye.ed.ac.uk>
Organization: Indiana University
Lines: 35

In article <6188@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

>Why?  If you replace a person's neurons with neurons that don't
>work, what do you think would happen?

The argument assumes that computational neurons could at least have
the same powers as biological neurons to cause other neurons to
fire and to cause motor movement.  That may be a questionable
assumption, but it is orthogonal to the Chinese room argument;
Searle himself seems happy to accept it.  So these neurons
certainly "work" in the sense of causing the right firing patterns
and behaviour.  The question of whether they "work" in the sense
of "causing a mind" is of course precisely what's at issue.

>>The Chinese room doesn't have to be a brain simulation, but it can
>>be, as Searle himself grants.
>
>I do not agree with this sort of move.  Searle presents several
>arguments.  The "classic" Chinese Room is _not_ a brain simulation.
>Maybe you and Searle think it could just as well be a brain
>simulation, but maybe you and Searle are wrong.  To use an argument
>that applies to brain simulation against the classic Chinese Room,
>you have to show that it applies, not just argue that Searle would
>accept it.

Searle's argument is meant to be a universal one, applying to
any program that produces the right behaviour.  So exhibiting a
single counterexample is enough.  [Maybe you don't think that
there could be a program that performs an accurate brain simulation,
but that's a separate issue.]

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."