From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!cs.yale.edu!mcdermott-drew Thu Jan 16 17:19:52 EST 1992
Article 2660 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Subject: Semantics of thoughts
Message-ID: <1992Jan13.023843.12181@cs.yale.edu>
Summary: Some say thoughts have semantics, some say they don't
Keywords: searle,consciousness,semantics
Sender: news@cs.yale.edu (Usenet News)
Nntp-Posting-Host: aden.ai.cs.yale.edu
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
References: <1991Dec8.192843.6951@psych.toronto.edu> <1991Dec11.170157.27053@cs.yale.edu> <5912@skye.ed.ac.uk>
Date: Mon, 13 Jan 1992 02:38:43 GMT
Lines: 143

  In article <5912@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
  >In article <1991Dec11.170157.27053@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
  >>Perhaps I can take a stab at explaining to each party to this dispute
  >>why its position seems so nonsensical to the other side.  I think it
  >>comes down to what you take the fundamental semantical scenario to be.
  >>Searlites tend to take as fundamental the situation in which a
  >>thinking agent understands that some of its symbols (or thoughts, or
  >>images) have meanings with respect to its environment.  ("I know what
  >>the word `zebra' means.")  Call this Scenario A.  Anti-Searlites tend
  >>to take as fundamental the situation in which an observer assigns
  >>meanings to an external symbol system.  ("The Zulus use the word
  >>`foobar' to refer to zebras.")  Call this Scenario B.

  >Sounds reasonable, but then the anti-Searlies seem to have little
  >room for the possibility that there might be behavior without
  >understanding.

I assume you mean "computationalists leave little room for the
possibility that a system might behave in all ways as if it
understands without actually understanding."  That's true.  The
typical computationalist leaves less room for this possibility than
the typical Searlite does.

  >>Now, how do Anti-Searlites explain Scenario A?  The first step is to
  >>deny that one's intuitive model of "grasping the meanings" of words is
  >>an accurate account of how language actually functions.  There are
  >>theories of how the meanings are to be grasped (Zeleny has sketched
  >>one) but it is not clear what contact such theories make with
  >>realistic psychological models.  (I'm sure I'll hear about it.)  If
  >>you actually try to construct computational models of language, then
  >>it becomes clear that explaining how the meanings are "grasped" is
  >>comparatively unimportant, 

  >But is that because it really is unimportant or just because
  >that's how it falls out in computational models?

At this point in the discourse, I'm still sketching one side of a
dispute, so I mean the latter: "That's how it falls out."

  >>which is fortunate, because computers just
  >>push symbols around without having to worry about whether the symbols
  >>"give off meaning," or whatever.  Hence for Anti-Searlites, Scenario A
  >>and Scenario B are not that different.  In each case the
  >>language-using system observes the user of a symbol system and
  >>comments on a semantic interpretation of the symbols.  In one case the
  >>system observed is the same as the observer; in the other case it's
  >>the Zulus.
  >
  >So when I use "zebra" I observe this use and comment that it's
  >a reference to zebras?  Or what, exactly?  Your explanation makes
  >little sense to me at this point.

Sorry to be so confusing.  The contrast I'm trying to draw is this:
Let me adopt the term "semantophile" instead of "Searlite."  I mean a
person who feels that it is important that we consciously "grasp the
meanings" of the symbols we use.  (There's at least one: Exhibit A:
Mikhail Zeleny.)  Both semantophiles and computationalists agree that
semantics is possible, but only the former require that semantics play
some kind of active, continuous, functional role in the use of
symbols.  For a computationalist, a symbol works for purely syntactic
reasons, and a theory of a system's semantics is a plausible account
of what its symbols might be taken to refer to.

Example: Suppose we discover that frogs employ a symbol type that they
instantiate tokens of whenever a fly is around.  Actually, 5% of the
time a token is mistakenly generated by a passing seedpod or jet
plane.  Still, we would have good reasons to say that the symbol
"means" fly to the frog.  But I'm speaking as a computationalist here;
a semantophile would have trouble with the belief that any symbols
mean anything to a frog, because they can't imagine the frog "grasping
the meaning" of a symbol.
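The frog story can be made concrete with a toy sketch (my own
illustration, not anything McDermott proposes): a "symbol" is just a
token emitted by a detector, and the only fact that licenses saying
the token "means" fly is that it covaries with flies.

```python
import random

random.seed(42)

def detector(stimulus):
    """Emit a token for flies; occasionally misfire on other small
    moving things (seedpods, jet planes), as in the frog story."""
    if stimulus == "fly":
        return True
    return random.random() < 0.05  # illustrative false-positive rate

# A toy world in which flies are one stimulus among several.
world = [random.choice(["fly", "seedpod", "jet plane"])
         for _ in range(10_000)]
tokens = [(stimulus, detector(stimulus)) for stimulus in world]

fired = [stimulus for stimulus, token in tokens if token]
hit_rate = fired.count("fly") / len(fired)

# Most tokens are caused by actual flies.  On the computationalist
# reading, this covariance is all there is to the symbol's semantics;
# no further act of "grasping the meaning" is required.
print(f"fraction of tokens caused by actual flies: {hit_rate:.2f}")
```

The exact false-positive rate and stimulus mix are arbitrary; the
point is only that reliable-but-imperfect covariance is cheap to
obtain, with no "intentionality" module anywhere in the loop.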

  >>Suppose two robots are observing some Zulus and zebras, and
  >>robot 1 says "The Zulus use the word `foobar' to refer to zebras."
  >>All that we require for the sentence to have been used "correctly" is
  >>that the second robot connect the symbols "Zulu" and "zebra" to the
  >>types of entities it is now perceiving tokens of.
  >
  >Connect in what way?  I don't see how this answers the Searlies,
  >and I'm no closer to escaping the problem I had above.

Connect in any of various ways!  (The frog story shows one way.)  It's
really quite simple to get symbols to covary with the things they
denote, provided that you don't insist that there be a further link of
"intentionality." 

  >>The bottom line is that semantics is epiphenomenal, although useful in
  >>explaining why certain syntactic systems maneuver through a world of
  >>zebras so well.  It does not matter that syntax =/= semantics, because
  >>semantics plays no role in our use of internal symbol systems.
  >
  >It's hard to see how this can be true.  I would have thought one aim
  >of a theory of mind would be to explain semantics, not to say it
  >doesn't matter.  Why should this work any better than treatment of
  >consciousness as an epiphenomenon did?

I'll try to be less flip: You're right that a theory of mind should
"explain semantics," but from a computational perspective that task
amounts to showing how a particular syntactic system approximates a
useful semantic mapping.  It's like explaining why a harmless snake
should come to resemble a poisonous one.  The useful semantic mapping
need not actually be present in the organism's mind in order to exert
its teleological pull, any more than the poisonous snake needs to pose
for its imitator.  (In both cases the apparent teleological pull is,
of course, actually a selective push.)

 >   In article <1991Dec30.150818.25714@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
 >   >It is the case that
 >   >
 >   > (a) People use symbols that refer to things
 >   > (b) People can make semantic theories about what agents' symbols
 >   >     refer to
 >   >
 >   >but the theories referred to in (b) play no role in the competence
 >   >described in (a).
 > 
 >   If this is the explanation of "semantics is epiphenomenal", then
 >   again I find I'm getting further from understanding you rather
 >   than closer.

 >   Did anyone (Searlie or no) think having a semantic _theory_
 >   played a role in competence?  Is this the semantics Searle
 >   says syntax isn't sufficient for?

Semantophiles believe that minds must know the meanings of their
symbols.  Every Jane Doe is a semanticist, with her own words and
experiences as primary data.  It is a slight paraphrase to say that
minds must *have a theory of the meanings,* i.e., a semantic theory.
(Zeleny's proposals sound just like this, to me anyway.)  If this
seems like a big transition to the computationalist ear, that's
because we tend to misinterpret semantophiles at this point, picturing
a "theory" as some kind of machine or formal system.  But that's
exactly what the typical semantophile does not picture.

 >  What, exactly, does this
 >   have to do with what's been debated all this time?

This thread started long, long ago (in a galaxy far, far away), when
some Searlite proposed that Searle's most profound point was that
syntax and semantics were distinct.  My modest reply is that they are
indeed distinct, but that the point, from a computationalist
perspective, is not profound.

                                             -- Drew McDermott


