Article 1890 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle and the Chinese Room
Message-ID: <1991Dec5.210724.12480@cs.yale.edu>
Date: 5 Dec 91 21:07:24 GMT
References: <gdCb=YW00UhWQ2lpNp@andrew.cmu.edu> <YAMAUCHI.91Dec5040116@heron.cs.rochester.edu> <1991Dec5.191043.10565@psych.toronto.edu>
Sender: news@cs.yale.edu (Usenet News)
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
Lines: 36
Nntp-Posting-Host: atlantis.ai.cs.yale.edu


   In article <1991Dec5.191043.10565@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
   >While all of the discussion here around the Chinese Room example has been
   >at times inventive, it seems to me that the anti-Searle forces for the
   >most part miss the distinction that can be drawn between Searle's
   >*logical argument*, namely, that syntax is not sufficient for semantics, and
   >his *demonstration*, or *thought experiment*, namely, the Chinese Room.

Searle has two arguments: the original Chinese Room argument, and the
Scientific American argument, with its "Axioms" and "Conclusions."  I
think he finally realized just how silly the first one was and came
up with the second to compensate.

   >The strength of Searle's argument is that, contrary to what some may claim,
   >it does not rest on any particular way of telling the Chinese Room story.  The
   >argument simply is that it is impossible to generate semantics from a purely
   >syntactic system.  This, Searle argues, is a *logical* point, true simply in
   >virtue of what the words "syntax" and "semantics" mean.  

It's a logical point, maybe even a terminological point, and hence
entirely independent of the Chinese Room argument, as I'm sure Searle
is aware.  Unfortunately, it settles nothing.  Suppose we agree that
syntax =/= semantics.  Then those who believe in "strong AI" believe
that the semantic abilities of people do not transcend those of
computers equipped with similar sensors, effectors, and reasoners.  To
the degree that they grant that the computer cannot bootstrap itself
into being able to refer to objects, they also believe that people
cannot do it.  Of course, they believe that people can do something
like "use the same symbol in the presence of the same object most of
the time," and that this is the closest any system can get to being
able to "refer" to objects.  That holds notwithstanding our
introspective certainty that (Axiom 2) "our minds have mental
contents (semantics)" and that we possess "knowledge of what [our
symbols] mean" (to quote Searle's Scientific American article).

                                             -- Drew McDermott


