From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Mar 24 09:55:43 EST 1992
Article 4456 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: The Systems Reply I
Organization: Department of Psychology, University of Toronto
References: <1992Mar11.201637.21875@psych.toronto.edu> <1992Mar12.001918.2564@ccu.umanitoba.ca> <BL1p0D.6II@world.std.com>
Message-ID: <1992Mar14.182737.15329@psych.toronto.edu>
Date: Sat, 14 Mar 1992 18:27:37 GMT

In article <BL1p0D.6II@world.std.com> kohathi@world.std.com (Kathleen E Coady) writes:
>
>	I'm somewhat confused, I believe.  It seems to be one of the premises
>of the Chinese room that no method of imparting meaning to the symbols being
>manipulated has been supplied, and therefore it is intuitively obvious that
>the man in the room does not, merely by executing the rules, understand 
>Chinese.
>	It is also one of the premises that the Chinese room's answers to the
>questions are reasonable answers...I believe that the idea is that this 
>apparatus is capable of passing the Turing test.
>	What I do not understand is that it isn't obvious to me that these two
>premises aren't subtly contradictory [...]

The first premise is simply that syntactic manipulation cannot on its own
yield semantics, which has been argued to be an analytic truth by many
philosophers who are not involved in the AI debate.

The second premise comes from the foundational assumptions of AI: namely,
that all relevant human cognitive activity is computable and can be
replicated by the appropriate functional relations.  These relations are,
by their nature, at base syntactic.  Searle adopts this premise in order
to examine the implications of Strong AI.

You are correct in saying that these two premises are contradictory: they
cannot both be true.  The Chinese Room is designed to give
a demonstration of the truth of the first premise by showing that, *even
if the CR gave interpretable answers,* the person doing the purely
syntactic manipulations wouldn't understand.  Note that this does *not*
imply that the CR situation is *possible*, that such a situation could
be created, but merely that, *even if it were,* there would still be
no understanding generated.
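
For concreteness, here is a toy sketch, in Python, of what "purely
syntactic manipulation" amounts to.  The rule table and the symbol
strings are my own inventions for illustration, and a real CR rulebook
would of course be enormously larger, but the point survives the
simplification: the program pairs input shapes with output shapes, and
nothing in it has access to what any symbol means.

    # Toy sketch of purely syntactic symbol manipulation.  The rule
    # table pairs input symbol strings with output symbol strings;
    # the strings themselves are invented placeholders.
    RULES = {
        "squiggle squoggle": "squoggle squiggle",
        "ni hao":            "ni hao ma",
    }

    def respond(symbols):
        # Pure shape-matching: look the input up and emit the paired
        # output.  No step consults meaning, reference, or intent.
        return RULES.get(symbols, "mei you gui ze")  # default symbol string

    print(respond("squiggle squoggle"))   # -> "squoggle squiggle"

Such a program could in principle be elaborated until its answers were
interpretable by a Chinese speaker; the first premise says that no amount
of such elaboration would, by itself, add understanding.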

The Chinese Room gedanken, as I have argued many times, is not in itself
the crucial argument.  It is merely an attempt to demonstrate the truth
of the claim that syntactic manipulations can't yield semantics.  Even
if this particular demonstration fails, the falsity of this premise has
not been established.  Since this premise *is* taken to be analytic by
many learned people, whereas the second premise above is, as far as I can
tell, merely an assertion, it seems to me that the burden of proof is
on those who wish to deny the first and assert the second to demonstrate
their falsity and truth, respectively.

- michael
