From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!mcdermott-drew Mon Dec 16 11:01:07 EST 1991
Article 2039 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Subject: Re: Searle and the Chinese Room
Message-ID: <1991Dec11.170157.27053@cs.yale.edu>
Sender: news@cs.yale.edu (Usenet News)
Nntp-Posting-Host: aden.ai.cs.yale.edu
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
References: <1991Dec5.191043.10565@psych.toronto.edu> <1991Dec5.210724.12480@cs.yale.edu> <1991Dec8.192843.6951@psych.toronto.edu>
Date: Wed, 11 Dec 1991 17:01:57 GMT
Lines: 148

   In article <1991Dec8.192843.6951@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
   >In article <1991Dec5.210724.12480@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
   >>
   >>It's a logical point, maybe even a terminological point, and hence
   >>entirely independent of the Chinese Room argument, as I'm sure Searle
   >>is aware.
   >
   >Once again, the Chinese Room *demonstration* is *NOT* the argument,
   >at least as I understand it.  (How can an *example* be an *argument*?)
   >The Chinese Room merely attempts to demonstrate the truth that 
   >syntax (symbol manipulation) is not sufficient for semantics, by having
   >a person imagine doing purely syntactic symbol manipulation.

Let us grant Searle's claim that syntax is not the same as semantics.
We can now list several examples, starting with Peano arithmetic, and
they would all confirm that point.  We can even do the Chinese Room
with a computer instead of Searle, and, sure enough, the syntax and
semantics of the symbols it manipulates would be quite different.
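
To make the Peano case concrete, here is a toy rendering (my own
sketch, in Python; nobody's actual formalism) of addition as pure term
rewriting.  The rules consult only the shapes of the terms; the
arithmetical reading lives entirely with us:

    # Peano numerals as strings: "0", "s(0)", "s(s(0))", ...
    # Two purely syntactic rewrite rules compute addition:
    #   add(0, y)    -> y
    #   add(s(x), y) -> s(add(x, y))
    def add(x, y):
        if x == "0":
            return y                        # add(0, y) -> y
        inner = x[2:-1]                     # peel off "s(" and ")"
        return "s(" + add(inner, y) + ")"   # add(s(x), y) -> s(add(x, y))

    print(add("s(s(0))", "s(0)"))  # s(s(s(0))) -- "2+1=3" on our reading only
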
Alas, it is still the case that ---

   >>  Unfortunately, it settles nothing.  Suppose we agree that
   >>syntax =/= semantics.  Then those who believe in "strong AI" believe
   >>that the semantic abilities of people do not transcend those of
   >>computers equipped with similar sensors, effectors, and reasoners. 
   >
   >OK, although this is a "belief", and not an argument.

   >>  To
   >>the degree that they grant that the computer cannot bootstrap itself
   >>into being able to refer to objects, they also believe that people
   >>cannot do it.
   >
   >Again, this is an article of faith necessary for the strong AI program
   >to proceed, but what you have presented here is not an argument.

It was not intended to be an argument.  It was an attempt to
communicate to the "Searlites" why Searle's claim lacks punch to a
certain group of people.  I would grant that this is an article of
faith (to an extent, but see below).

   >>  Of course, they believe that people can do something
   >>like "use the same symbol in the presence of the same object most of
   >>the time," and that this is the closest any system can get to being
   >>able to "refer" to objects, notwithstanding our introspective
   >>certainty that (Axiom 2) "our minds have mental contents (semantics)"
   >>and we possess "knowledge of what [our symbols] mean" (to quote from
   >>Searle's Scientific American article).
   >
   >But certainly these last points are not trivial.  While I may only have
   >behaviour to go on for *your* understanding, I *know* what understanding
   >means *to me*, 

In the sense, I assume, that you know it when you see it.

   >and I *know* that when I use symbols they refer to
   >something.  

Of course, if AI is correct, then most of the time you are not even
aware of when you use symbols.  From a computationalist point of view,
it's quite odd to focus on language as the paradigmatic symbol system.
AI employs symbol systems all over the place, even in modeling
creatures with no linguistic ability.

In fact, in Searle's hypothetical scenario, we should assume that
almost all the symbols being manipulated are not themselves Chinese
characters, but are internal tokens of various sorts.  It is not at
all clear what the semantic status of these tokens is supposed to be.
(Searle presumably takes no position on this question, because he
doubts they exist.)  Whatever their semantic status, they are by
hypothesis manipulable by purely syntactic rules (except possibly at
the boundaries where they are generated by incoming data and in turn cause
effectors to move).  
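
A toy sketch of the sort of thing I mean (every name below is
hypothetical, not anybody's actual program): tokens are generated at
the sensory boundary, rewritten by rules that look only at their
shapes, and discharged at the effector boundary:

    # Shape-only rules: match a pair of tokens, emit an effector token.
    RULES = {
        ("T-PRED", "T-NEAR"): "M-FLEE",
        ("T-FOOD", "T-NEAR"): "M-APPROACH",
    }

    def sense(stimulus):
        # Sensory boundary: raw input becomes an internal token.
        return {"predator": "T-PRED", "food": "T-FOOD"}[stimulus]

    def act(stimulus, distance):
        t1 = sense(stimulus)
        t2 = "T-NEAR" if distance < 5 else "T-FAR"
        # The purely syntactic step: no token "means" anything in here.
        return RULES.get((t1, t2), "M-IDLE")

    print(act("predator", 2))  # M-FLEE -- the effector boundary

Note that the creature being modeled needs no language at all; the
tokens are internal through and through.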

Perhaps I can take a stab at explaining to each party to this dispute
why its position seems so nonsensical to the other side.  I think it
comes down to what you take the fundamental semantical scenario to be.
Searlites tend to take as fundamental the situation in which a
thinking agent understands that some of its symbols (or thoughts, or
images) have meanings with respect to its environment.  ("I know what
the word `zebra' means.")  Call this Scenario A.  Anti-Searlites tend
to take as fundamental the situation in which an observer assigns
meanings to an external symbol system.  ("The Zulus use the word
`foobar' to refer to zebras.")  Call this Scenario B.  For the
Searlites, Scenario B is to be explained by induction or metaphor.  In
the case of the Zulus, who are fellow humans, I surmise that they
understand "foobar" in much the same way that I understand "zebra."
In the case of computational systems, a B-type scenario is merely a
metaphor.  If I say that "This program uses G101 to refer to zebras,"
then either I am just being sloppy, or what I really mean is that the
humans observing the program take G101 to refer to zebras, and will
fix the program if it stops using G101 in a way that supports this
interpretation.
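
A made-up fragment may help here (G101 as above, everything else
invented); the program's bookkeeping and the observers' interpretation
sit on entirely separate pages:

    # The program's side: G101 is just one opaque token among others.
    knowledge_base = [("G101", "has-stripes"), ("G101", "eats-grass")]

    def facts_about(token):
        # Pure bookkeeping: match token shapes, return paired tokens.
        return [prop for (t, prop) in knowledge_base if t == token]

    # The observers' side: the mapping to zebras lives out here.  The
    # program never consults it; we use it to decide when the program
    # needs "fixing," i.e., when its use of G101 stops fitting our
    # reading.
    observers_reading = {"G101": "zebra"}

    print(facts_about("G101"))  # ['has-stripes', 'eats-grass']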

Now, how do Anti-Searlites explain Scenario A?  The first step is to
deny that one's intuitive model of "grasping the meanings" of words is
an accurate account of how language actually functions.  There are
theories of how the meanings are to be grasped (Zeleny has sketched
one), but it is not clear what contact such theories make with
realistic psychological models.  (I'm sure I'll hear about it.)  If
you actually try to construct computational models of language, then
it becomes clear that explaining how the meanings are "grasped" is
comparatively unimportant, which is fortunate, because computers just
push symbols around without having to worry about whether the symbols
"give off meaning," or whatever.  Hence for Anti-Searlites, Scenario A
and Scenario B are not that different.  In each case the
language-using system observes the user of a symbol system and
comments on a semantic interpretation of the symbols.  In one case the
system observed is the same as the observer; in the other case it's
the Zulus.
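
In toy form (hypothetical names once more): a single interpretation
routine serves both scenarios, and nothing special happens when the
symbol stream being observed is the observer's own:

    # Pair a system's symbol uses with the entity types present when
    # each symbol was used.
    def interpret(usage_log):
        return {symbol: entity for (symbol, entity) in usage_log}

    zulu_log = [("foobar", "zebra")]  # Scenario B: observing the Zulus
    own_log  = [("zebra", "zebra")]   # Scenario A: observing oneself

    print(interpret(zulu_log))  # {'foobar': 'zebra'}
    print(interpret(own_log))   # {'zebra': 'zebra'} -- same mechanism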

The comeback from the Searlites is this, obviously: How can the system
even use the sentence "The Zulus use the word `foobar' to refer to
zebras" when it doesn't itself grasp the meaning of "zebra"?  The
answer is that continuous contact with the meaning of a word is not
necessary (or possible, or even intelligible) in the computationalist
account.  Suppose two robots are observing some Zulus and zebras, and
robot 1 says "The Zulus use the word `foobar' to refer to zebras."
All that we require for the sentence to have been used "correctly" is
that the second robot connect the symbols "Zulu" and "zebra" to the
types of entities it is now perceiving tokens of.  (Puzzles about
whether the apparent zebras are really holograms, which cause such
trouble for Searlites and Putnamites, do not arise for the
computationalist.)
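
Concretely (a sketch on my own assumptions, not anybody's robot
architecture): the "correctness" condition is just a check that the
hearer binds the uttered symbols to types of entities it is currently
perceiving:

    # Robot 2's side: the sentence was used "correctly" if each of its
    # symbols binds to a currently perceived entity type.
    def correctly_used(sentence_symbols, percepts):
        return all(sym in percepts for sym in sentence_symbols)

    percepts = {"Zulu": ["entity-17"], "zebra": ["entity-42", "entity-43"]}

    print(correctly_used(["Zulu", "zebra"], percepts))    # True
    # Whether entity-42 is "really" a zebra or a hologram never enters
    # the check; the criterion bottoms out at perception.
    print(correctly_used(["Zulu", "unicorn"], percepts))  # False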

We may well want to construct a semantic theory about the robots'
symbols, *but* so can the robots, if they have a reason; neither our
theories nor the robots can be expected to work all the time, nor is
there a need for them to work all the time.

The bottom line is that semantics is epiphenomenal, although useful in
explaining why certain syntactic systems maneuver through a world of
zebras so well.  It does not matter that syntax =/= semantics, because
semantics plays no role in our use of internal symbol systems.

Sorry to go on at such length, but I think it's important to clarify
what the disagreement is.

   >Searle's point of the Chinese Room example is to show that
   >while you may know what an English symbol refers to, you do *not* know
   >what a Chinese symbol refers to, despite the fact that the behaviour is
   >the same, that the symbols are used appropriately in both cases.

At the risk of repeating myself (and McCarthy), it matters not whether
*I* know, so long as the virtual person instantiated by the program knows.

                                             -- Drew McDermott
