Article 5840 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!sdd.hp.com!cs.utexas.edu!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: Comments on Searle - What could causal powers be?
Message-ID: <1992May21.114817.1631@oracorp.com>
Organization: ORA Corporation
Date: Thu, 21 May 1992 11:48:17 GMT
Lines: 42

michael@psych.toronto.edu (Michael Gemar) writes:

   I can know what properties my mind has, without knowing how these
   properties are produced.  If I can also demonstrate that some
   thing cannot produce those properties, then I've got my argument.
   I have semantics.
  
   The symbols that I use to communicate in the world have inherent
   meaning - I know, since *I* am the one using them.

This argument has come up before, and I don't think it is correct.
There are two notions of "having meaning". One is internal, and the
other is external. Internally, symbols can have meaning because of the
way they relate to one another. Our word "hamburger" relates to our
memory of the taste of hamburgers, and to our stored knowledge that
hamburgers are made from cows, etc. All of these relationships involve
internal meaning, since they relate stuff in our brains to other stuff
in our brains. Your statement that you know that symbols have inherent
meaning must be referring to internal meaning, if you know it by
introspection.

The other kind of meaning is external meaning, the relationship
between symbols and some real-world objects or situations denoted by
those symbols. You cannot tell the external meaning of your thoughts
by introspection, and your thoughts do not have any *inherent* meaning
in that sense. Their external meaning is only contingent, arising from
the correlation between your internal state and the state of the
world.
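The distinction can be put in programming terms. The following toy
sketch (my illustration, not anything from the article - the symbol
names and relations are made up) shows why introspection reaches
internal meaning but not external meaning: the internal relations are
just links from symbols to other symbols, while the external mapping
is a separate correlation with the world that the symbol system
itself never contains.

```python
# Internal meaning: a symbol's "meaning" is just its links to other
# internal symbols (memories, stored knowledge, etc.).
internal_relations = {
    "hamburger": {"tastes-like": "savory-memory", "made-from": "cow"},
    "cow": {"is-a": "animal"},
}

def internally_related(symbol):
    """All a symbol 'means' internally: a set of other symbols."""
    return set(internal_relations.get(symbol, {}).values())

# External meaning: a contingent correlation between internal symbols
# and world states. Two agents with identical internal relations can
# sit in different correlations (hypothetical world-states below).
world_a = {"hamburger": "beef-patty-on-this-plate"}
world_b = {"hamburger": "pattern-in-a-simulation"}

# Introspection can inspect internal_relations, but nothing inside the
# symbol system distinguishes world_a from world_b.
print(internally_related("hamburger"))
print(world_a["hamburger"] == world_b["hamburger"])
```

The point of the sketch is that `internal_relations` is the same
object in both cases; the external mapping lives outside it, which is
why it cannot be "inherent" to the symbols.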

It is true that "symbol shuffling" cannot possibly give rise to
inherent external meaning, but then, human thoughts don't have
inherent external meaning, either. For this reason, the "semantics
cannot arise from syntax" arguments, to the extent that they are
rigorous, don't prove anything about the impossibility of Strong AI.

Daryl McCullough
ORA Corp.
Ithaca, NY
