Article 4568 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sdd.hp.com!spool.mu.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <6433@skye.ed.ac.uk>
Date: 18 Mar 92 18:45:26 GMT
Article-I.D.: skye.6433
References: <1992Mar17.210431.25318@oracorp.com>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 44

In article <1992Mar17.210431.25318@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>Michael, it seems to me that you (and Jeff Dalton, and Chris Green,
>and Searle) have been making the following argument.
>
>  1. "Shuffling symbols" can never give meaning.
>
>  2. Human thoughts have meaning.
>
>  3. Therefore, human thoughts cannot be mere shuffling of symbols.
>
>While I agree that there is a sense of the word "meaning" in which 1
>is correct, and there is a sense of the word "meaning" in which 2 is
>correct, I don't believe that they are the same sense.

Well, if there is some kind of equivocation going on, the argument
doesn't hold up.  But as far as any arguments I've made are concerned,
I don't think I've been equivocating.  By "meaning", I've always
meant something that involved reference (i.e., your "external meaning").

>One notion of the meaning of concepts is external, the relationship
>between the concepts and the external world. A second notion of the
>meaning of concepts is internal: the internal meaning of concepts is
>determined by the relationships between the concepts.
>
>All of the arguments that have been advanced for why symbol-shuffling
>cannot produce meaning depend on the external notion of meaning. It is
>quite correct that internal rules for manipulating symbols can never
>unambiguously pin down the external reference of those symbols.

So how is it that humans manage to get "cats" to refer to cats,
and not to cherries?  Or do we?

>On the other hand, when you say that it is obvious (by introspection)
>that human thoughts have meaning, it is always referring to an
>internal notion of meaning. In introspection, you can't compare your
>internal concept of a "tree" with a real-world tree to see if they
>match; the best you can do is to see if your notions of "tree",
>"green", "leaves", "plant", etc. are coherent. What else can
>introspection possibly tell you?

I'm not sure I'd actually say it's obvious by introspection that
human thoughts have meaning (in either sense).  But in any case,
when I say human thoughts have meaning (or that humans know what
words mean), I don't just mean coherence.


