From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan  9 10:34:14 EST 1992
Article 2569 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle and the Chinese Room
Message-ID: <5912@skye.ed.ac.uk>
Date: 8 Jan 92 21:34:31 GMT
References: <1991Dec5.191043.10565@psych.toronto.edu> <1991Dec5.210724.12480@cs.yale.edu> <1991Dec8.192843.6951@psych.toronto.edu> <1991Dec11.170157.27053@cs.yale.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 83

In article <1991Dec11.170157.27053@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
>Perhaps I can take a stab at explaining to each party to this dispute
>why its position seems so nonsensical to the other side.  I think it
>comes down to what you take the fundamental semantical scenario to be.
>Searlites tend to take as fundamental the situation in which a
>thinking agent understands that some of its symbols (or thoughts, or
>images) have meanings with respect to its environment.  ("I know what
>the word `zebra' means.")  Call this Scenario A.  Anti-Searlites tend
>to take as fundamental the situation in which an observer assigns
>meanings to an external symbol system.  ("The Zulus use the word
>`foobar' to refer to zebras.")  Call this Scenario B.

Sounds reasonable, but then the anti-Searlies seem to leave little
room for the possibility that there might be behavior without
understanding.

>Now, how do Anti-Searlites explain Scenario A?  The first step is to
>deny that one's intuitive model of "grasping the meanings" of words is
>an accurate account of how language actually functions.  There are
>theories of how the meanings are to be grasped (Zeleny has sketched
>one) but it is not clear what contact such theories make with
>realistic psychological models.  (I'm sure I'll hear about it.)  If
>you actually try to construct computational models of language, then
>it becomes clear that explaining how the meanings are "grasped" is
>comparatively unimportant, 

But is that because it really is unimportant or just because
that's how it falls out in computational models?

>which is fortunate, because computers just
>push symbols around without having to worry about whether the symbols
>"give off meaning," or whatever.  Hence for Anti-Searlites, Scenario A
>and Scenario B are not that different.  In each case the
>language-using system observes the user of a symbol system and
>comments on a semantic interpretation of the symbols.  In one case the
>system observed is the same as the observer; in the other case it's
>the Zulus.

So when I use "zebra" I observe this use and comment that it's
a reference to zebras?  Or what, exactly?  Your explanation makes
little sense to me at this point.

>The comeback from the Searlites is this, obviously: How can the system
>even use the sentence "The Zulus use the word `foobar' to refer to
>zebras" when it doesn't itself grasp the meaning of "zebra"?  The
>answer is that continuous contact with the meaning of a word is not
>necessary (or possible, or even intelligible) in the computationalist
>account.  Suppose two robots are observing some Zulus and zebras, and
>robot 1 says "The Zulus use the word `foobar' to refer to zebras."
>All that we require for the sentence to have been used "correctly" is
>that the second robot connect the symbols "Zulu" and "zebra" to the
>types of entities it is now perceiving tokens of.

Connect in what way?  I don't see how this answers the Searlies,
and I'm no closer to escaping the problem I had above.

>The bottom line is that semantics is epiphenomenal, although useful in
>explaining why certain syntactic systems maneuver through a world of
>zebras so well.  It does not matter that syntax =/= semantics, because
>semantics plays no role in our use of internal symbol systems.

It's hard to see how this can be true.  I would have thought one aim
of a theory of mind would be to explain semantics, not to say it
doesn't matter.  Why should this work any better than the treatment
of consciousness as an epiphenomenon did?

>Sorry to go on at such length, but I think it's important to clarify
>what the disagreement is.

I appreciate the effort, but I'm, if anything, more confused than
before.  I thought I had some idea of what the computational approach
was about ...

>   >Searle's point of the Chinese Room example is to show that
>   >while you may know what an English symbol refers to, you do *not* know
>   >what a Chinese symbol refers to, despite the fact that the behaviour is
>   >the same, that the symbols are used appropriately in both cases.
>
>At the risk of repeating myself (and McCarthy), it matters not whether
>*I* know, so long as the virtual person instantiated by the program knows.

That might be so if there were a v.p., but the claim that there
is looks a lot like begging the question to me.


