From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Tue Mar 24 09:55:14 EST 1992
Article 4417 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Chinese room miscellanea
Message-ID: <1992Mar11.231804.13992@bronze.ucs.indiana.edu>
Organization: Indiana University
Date: Wed, 11 Mar 92 23:18:04 GMT

Christopher Green writes:

>Even if (and I suspect this is what you have in
>mind) the person could come to abduce some sort of hypotheses about
>the meaning of the symbols, this avenue is not open to the AI side
>of the debate, because under strong AI, EVEN IF THE MAN'S SHORT TERM
>MEMORY WERE WIPED OUT AFTER EACH QUESTION-EVENT, they would be committed
>to the view that this system understands in EXACTLY the same way a  
>native Chinese speaker does. It's the FUNCTION -- not the abduction --
>that counts, for strong functionalism. (Actually, the only functionalists
>I know who are actually this strong are Dave Chalmers and John McCarthy.
>Even Fodor defers to ignorance when it comes to the implications of
>functionalism for, say, qualia.)

Alas, you don't meet the right people then.  Any number of people, in AI
and philosophy, hold just this view.  The man's short-term memory, of
course, is 100% irrelevant to the understanding of the Chinese-speaking
system (except insofar as it plays a role in implementing the system).
I don't know why you say "even Fodor", as he's certainly not your
paradigm functionalist (he's even repudiated functionalism about content).
Any number of others, from Shoemaker to Dennett, think that functionalism
can provide a good account of qualia; most of them are much more sanguine
about it than I am.  And of course most people in philosophy have kept
the issues of "understanding" and qualia entirely separate; personally,
I find it much easier to be a functionalist about beliefs, say, than
about qualia.

--------------

Chris Green again:

>The man in the Room (consciously) memorizes all of the rules and the
>shapes of all the symbols.  Then he (consciously) implements those
>rules in attempting to construct Chinese answers to the Chinese questions
>he receives.  In doing this, he satisfies the requirements of being
>a Turing machine.  Because his answers are indistinguishable from those
>that would be given by a native Chinese speaker, he also passes the Turing
>test. We are now, under the TT, expected to say that he understands
>Chinese.

(1) For the zillionth time, the Turing test is not essential to the
strong AI hypothesis.  (2)  According to strong AI, *implementations*
of *systems* (or of programs, or of FSAs, or whatever) are the subjects
of understanding.  In this situation we have two distinct implemented
systems.  The man is an implementation of (say) a big FSA T1, and it's
in virtue of implementing that FSA that he understands.  There is also
a smaller FSA, T2, which happens to be implemented in virtue of some of
the man's manipulations or calculations.  The implementations of T1 and
T2 are distinct systems (although they overlap, and in fact the
physical implementational base of T2 is a subset of that of T1);
therefore, strong AI predicts that there will be distinct subjects
of understanding.  In particular, it's incorrect to suggest that
strong AI suggests that the man (T1) should understand Chinese (which
is a property of T2).
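The T1/T2 point above can be put in a toy sketch (the state names, input
symbols, and transition table here are my own invented illustration, not
anything from the post): a single run of a "big" automaton T1 also
implements a "small" automaton T2, because a subset of T1's state changes
realizes T2's transitions.

```python
# Toy illustration: one physical process implements two distinct FSAs.
# Each T1 state is a pair (mans_mood, t2_state); only the second
# component constitutes the implementation base of T2.

T2_TRANSITIONS = {          # the hypothetical "Chinese-processing" automaton T2
    ("s0", "ni"): "s1",
    ("s1", "hao"): "s0",
}

def t1_step(state, symbol):
    """One step of T1: the man's own state changes, and in virtue of his
    rule-following the embedded T2 component is updated as well."""
    mood, t2 = state
    new_mood = "tired" if mood == "fresh" else "fresh"
    new_t2 = T2_TRANSITIONS.get((t2, symbol), t2)
    return (new_mood, new_t2)

def run_t1(symbols, state=("fresh", "s0")):
    for sym in symbols:
        state = t1_step(state, sym)
    return state

# One run of T1 on an input sequence...
final = run_t1(["ni", "hao"])
print(final)      # T1's final state: ("fresh", "s0")
print(final[1])   # ...carries a complete run of T2: back at "s0"
```

T1 and T2 have different state spaces, and T2's implementation base (the
second component) is a proper subset of T1's, so on the strong-AI view
they are distinct, if overlapping, candidate subjects of understanding.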

>have no reference for him, though his English symbols do. Everything
>so far has been conscious and above board. At this point, Hofstadter
>and Dennett, or at least the caricatures of them that have been
>inhabiting this discussion of late, want to claim that he understands
>Chinese, only unconsciously? Why suddenly unconscious? For no reason
>at all except that such understanding, if it exists at all, is patently 
>not in the man's consciousness. There are no other motivations or
>implications of this move at all. I see no way of interpreting this move
>other than as a patently ad hoc attempt to shore up a flagging hypothesis.
>It corresponds brilliantly with Lakatos' distinction between ad hoc
>and auxiliary hypotheses.  Auxiliary hypotheses, though they complicate
>the original theory, advance the research program by implying new empirical 
>consequences to be tested. Ad hoc hypotheses complicate the original
>position to no empirical profit other than shoring up a recently
>disconfirmed hypothesis. If this is a "meta-argument" then so be it.
>What it provides is a reasoned explanation of the puzzling move to
>"unconsciousness" on the part of Searle's opponents.

Pop philosophy of science is a lot of fun, but it's entirely irrelevant
here.  Strong AI (and, I think, Hofstadter and Dennett) does not imply
that the man "unconsciously understands" Chinese; it implies that there
will be a distinct system that understands Chinese.  No "auxiliary
hypotheses" are required -- it falls straight out of the basic framework
of strong AI, as above.

--------------

Paul Barton-Davis writes:

>Chris, as you yourself note below, hardly anyone in the AI field is
>into what Searle called "strong AI" anymore. It is simply incredibly
>deceitful to imply that the entire AI community believes the tenets of
>strong AI, when for a start, most of the connectionist community is
>clearly closer to Searle's weak AI.

I'm with Chris on this one.  Any number of AI researchers hold the
strong position, including a lot of people who are sympathetic
with connectionism (myself, for one).  The connectionist/symbolic
dispute is mostly orthogonal to the strong/weak dispute; it's mostly
concerned with the *class* of algorithms required to achieve AI.

-------------

Michael Gemar writes:

>Chalmers has complained that discussion of the Chinese Room argument 
>never advances to any great degree. 

I didn't say this.  I said that it's difficult to advance the argument.
The discussion here the last few weeks has been a perfect example
of pure rehash.  (By contrast, the discussion around the end of last
year managed to touch on some novel points.)

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


