Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!dimacs.rutgers.edu!mips!cs.uoregon.edu!ogicse!das-news.harvard.edu!spdcc!dirtydog.ima.isc.com!ispd-newsserver!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <1992Mar17.210431.25318@oracorp.com>
Date: 17 Mar 92 21:04:31 GMT
Organization: ORA Corporation
Lines: 51

michael@psych.toronto.edu (Michael Gemar) writes:

> You don't have to know how something's done to know how something
> *isn't* done.  There is a principled argument why shuffling symbols
> won't give you meaning, namely, that syntax can't alone produce
> semantics.  Even though I don't know how semantics *does* work, I know
> it *doesn't* work through shuffling symbols.

Michael, it seems to me that you (and Jeff Dalton, and Chris Green,
and Searle) have been making the following argument.

  1. "Shuffling symbols" can never give meaning.

  2. Human thoughts have meaning.

  3. Therefore, human thoughts cannot be mere shuffling of symbols.

While I agree that there is a sense of the word "meaning" in which 1
is correct, and there is a sense of the word "meaning" in which 2 is
correct, I don't believe that they are the same sense.

One notion of the meaning of concepts is external: the relationship
between the concepts and the external world. A second notion is
internal: a concept's meaning is determined by its relationships to
the other concepts.

All of the arguments that have been advanced for why symbol-shuffling
cannot produce meaning depend on the external notion of meaning. It is
quite correct that internal rules for manipulating symbols can never
unambiguously pin down the external reference of those symbols.

On the other hand, when you say that it is obvious (by introspection)
that human thoughts have meaning, you are always referring to an
internal notion of meaning. In introspection, you can't compare your
internal concept of a "tree" with a real-world tree to see if they
match; the best you can do is to see if your notions of "tree",
"green", "leaves", "plant", etc. are coherent. What else can
introspection possibly tell you?
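To make the internal notion concrete, here is a toy sketch (the relation
table and function names are my own invention for illustration, not
anyone's proposed model of understanding): a symbol's "meaning" is
nothing but its relations to other symbols, which a program can inspect
for coherence without any link to the external world.

```python
# A toy "internal meaning" network: each entry relates one symbol to
# another. Nothing in the table ties "tree" to any real-world tree.
relations = {
    ("tree", "is-a"): "plant",
    ("tree", "has"): "leaves",
    ("leaves", "color"): "green",
}

def related(symbol):
    """Everything the network 'knows' about a symbol -- purely internal."""
    return {(rel, obj) for (subj, rel), obj in relations.items() if subj == symbol}

# The program can check whether its notions hang together...
print(related("tree"))    # e.g. {('is-a', 'plant'), ('has', 'leaves')}
# ...but no amount of such checking pins down an external referent.
```

The point of the sketch is only that "internal coherence" is something a
symbol-shuffling program trivially has access to, while external
reference is not represented anywhere in it.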

Once we insist that the notion of "having meaning" be used
consistently for both computers and people, the little syllogism above
is much less obvious. I don't see how human thoughts have unique
external references, and I don't see why a computer program cannot
produce internal coherence. So depending on the meaning of "meaning"
you choose, either premise 1 ("Shuffling symbols" can never give
meaning) or premise 2 (Human thoughts have meaning) becomes less
obvious.


Daryl McCullough
ORA Corp.
Ithaca, NY


