From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!psinntp!scylla!daryl Tue Mar 24 09:57:42 EST 1992
Article 4632 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: The Systems Reply I
Message-ID: <1992Mar20.134658.18778@oracorp.com>
Organization: ORA Corporation
Date: Fri, 20 Mar 1992 13:46:58 GMT
Lines: 65

michael@psych.toronto.edu (Michael Gemar) writes:

(In response to kohathi@world.std.com (Kathleen E Coady))

> The first premise [of the Chinese Room argument] is simply that syntactic
> manipulation cannot on its own yield semantics, which has been argued
> to be an analytic truth by many philosophers who are not involved in
> the AI debate.

Michael, in my opinion calling Searle's premise an "analytic truth" is
an abuse of the term "analytic". We can't even agree on what the
premise "syntactic manipulation cannot yield semantics" *means*, much
less agree that it is analytically true.

For example, it isn't clear how to define "syntactic manipulation" so
that what a computer does is syntactic manipulation, but the
electrochemical processes in the brain are *not* syntactic
manipulation. Similarly, there is no clear meaning of "yielding
semantics" so that, for instance, chemical processes can yield
semantics but computers cannot.

The whole idea that the impossibility of Strong AI follows from an
analytic argument seems fuzzy-minded to me. There are certain senses
of "syntax cannot yield semantics" that are, indeed, analytic. For
example, it is provable that no (finite or recursively enumerable)
collection of syntactic rules can produce all the truths of
arithmetic. If this is the meaning of "syntax cannot yield semantics",
then it applies to human beings, as well; *we* don't know all the
truths of arithmetic, either. Perhaps there is some other sense of
"syntax cannot yield semantics" that is more appropriate for the AI
debate. If so, I would like to know what it is, and I would like to
know why you (or Searle) consider it to be an analytic truth.
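(The provable sense mentioned above is Gödel's first incompleteness
theorem. One standard formulation — the symbols T, Q, and G_T are my
notation for illustration, not anything from the original discussion —
is:

```latex
% Goedel's first incompleteness theorem, one standard form:
% for any consistent, recursively enumerable theory T in the
% language of arithmetic that proves the basic facts of
% arithmetic (e.g., T extends Robinson arithmetic Q), there is
% a sentence G_T that is true in the standard model N but not
% provable in T.
\forall T \, \bigl[\, T \text{ consistent, r.e., } T \supseteq Q
  \;\Rightarrow\; \exists G_T \, \bigl( \mathbb{N} \models G_T
  \;\text{ and }\; T \nvdash G_T \bigr) \,\bigr]
```

So on this reading, "syntax cannot yield semantics" just says that
truth in arithmetic outruns any single effective proof system — which,
as noted, constrains humans no less than machines.)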

> The second premise comes from the foundational assumptions of AI, namely,
> that all relevant human cognitive activity is computable, and can be 
> replicated by the appropriate functional relations. These relations are,
> by their nature, at base syntactic.  This premise is adopted by Searle in
> order to examine the implications of Strong AI.

> You are correct in saying that these two premises are contradictory.
> These two premises both can't be correct.

That's not true. Logically, what follows from these two premises is
that human brains are not capable of semantics. If the meaning of
"semantics" is allowed to be sufficiently fuzzy, there may be some
sense in which this is true.

> The Chinese Room is designed to give a demonstration of the truth of
> the first premise by showing that, *even if the CR gave interpretable
> answers,* the person doing the purely syntactic manipulations wouldn't
> understand.

I believe that this characterization of Searle's argument is wrong.
Searle was interested in proving that there was no understanding in
the Chinese Room, at all. If he only succeeds in proving that the
person performing the rules doesn't understand, he will have proved
nothing that is relevant to Strong AI. In the case of a computer,
nobody claims that the CPU understands (chess, English, or whatever),
and the person's role in the Chinese Room is to be the CPU.

Daryl McCullough
ORA Corp.
Ithaca, NY
