From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon May 25 14:05:40 EDT 1992
Article 5683 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Mean thoughts on what meaning means
Organization: Department of Psychology, University of Toronto
References: <1992May13.001033.14320@ccu.umanitoba.ca> <1992May14.164117.25016@psych.toronto.edu> <1992May14.221449.3721@spss.com>
Message-ID: <1992May15.152549.13330@psych.toronto.edu>
Date: Fri, 15 May 1992 15:25:49 GMT

In article <1992May14.221449.3721@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1992May14.164117.25016@psych.toronto.edu> michael@psych.toronto.edu 
>(Michael Gemar) writes:
>>I can know what properties my mind has, without knowing how these
>>properties are produced.  If I can also demonstrate that some
>>thing cannot produce those properties, then I've got my argument.
>>I have semantics. 
>
>OK so far--
>
>>The symbols that I use to communicate in the world
>>have inherent meaning - I know, since *I* am the one using them.
>
>Whoa!  How do you know they have *inherent* meaning?  Introspection cannot
>tell you.  You are within the system which uses these symbols-- you *are*
>that system, in fact-- so introspection only shows they have meaning within
>this system, not inherent meaning.  

I apologize for the appallingly poor choice of wording that I made.  "Inherent"
is certainly *not* the word I wanted, as I don't believe that there is 
something about the *shape* of the symbols that I use that contains their
meaning!  Let me try to reconstruct this in a clearer way.  I *know* that
my symbols are not meaningless, since *I* define their meaning *for me*.
I know that they have semantic content.  Sure, this means that they have
meaning "within this system".  I didn't mean to imply that the symbols
I use have some sort of inherent meaning independent of me (after all,
symbols are arbitrary).  So far, my point is simply that I know that
*my* symbols have meaning.  (This, as yet, says nothing about the
meaningfulness of symbols in another "system".)

>Symbols in an AI algorithm could have meaning in this sense, too.  To
>the algorithm, symbol XYZ represents hamburgers: it is linked to encyclopedic
>information about hamburgers, it is activated and used when the attached
>camera is pointed at a hamburger, it is linked up to other experiences
>where the system has interacted with hamburgers, it appears in the internal
>representations of sentences that contain the word "hamburger".

This is the Robot Reply.  The problem with it is that everything past
the transducers is simply symbols.  If you want to assert that transducers
are somehow important, you are pretty close to Harnad's position.
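A minimal sketch may make the worry concrete.  All the names below
(the token "XYZ", the functions, the toy encyclopedia) are hypothetical
illustrations, not anyone's actual architecture: even with a camera
"transducer" attached, what the symbol system receives is just another
token, and everything downstream is token manipulation.

```python
# Sketch of the Robot Reply worry: the transducer emits a token, and
# everything past it manipulates tokens by lookup alone.
def camera_transducer(scene):
    """Stand-in transducer: maps a (simulated) scene to an internal token."""
    return "XYZ" if scene == "hamburger-in-view" else "QRS"

def system_step(token, encyclopedia):
    """Past the transducer, processing is lookup over more tokens."""
    return encyclopedia.get(token, [])

# "Encyclopedic" knowledge about XYZ: just further uninterpreted tokens.
encyclopedia = {"XYZ": ["ABC", "DEF"]}

token = camera_transducer("hamburger-in-view")
print(system_step(token, encyclopedia))  # ['ABC', 'DEF'] -- symbols all the way
```

Nothing in the sketch distinguishes a token that "came from a camera"
from one that didn't; that is the point at issue.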

>A skeptical outside observer could claim that XYZ doesn't inherently mean
>a hamburger.  But he could claim that the word "hamburger" doesn't inherently
>mean a hamburger, either.

Again, apologies for the misleading terminology.  "Hamburger" *doesn't*
inherently mean hamburger, any more than "pain" inherently means pain
(after all, "pain" means bread in French).


>>However, symbols in and of themselves *have* no inherent meaning - 
>>they are just "marks".  If you shuffle these marks around based
>>*solely* on their formal properties, then these marks *still*
>>do not acquire *inherent* meaning (*I* may be able to interpret them,
>>but that is a different matter).  
>
>You seem to be thinking about (say) propositional logic, where you never
>look at what the symbols point to.  But we don't have to build things that
>way.  We can shuffle around symbols based on what they *point to*--
>e.g. encyclopedic information about their referents, which surely cannot
>be described as "formal properties" of the symbol itself.

If all you've got in the encyclopedia are more symbols, then you're still
stuck.  Imagine trying to learn how to read Chinese from a Chinese-Chinese
dictionary.  You want to know what "squiggle-squoggle" means.  So
you look it up, and its definition reads: "Squoggle squiggle-squiggle
squaggle squoggle."  Do you now know what "squiggle-squoggle" means? 
Of course not.  Is there any way to bootstrap yourself *solely* using
the Chinese-Chinese dictionary?  No.     
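The regress is easy to sketch in code.  The symbol names below are made
up for illustration: every definition is itself composed of undefined
symbols, so however many lookups you chase, you never reach anything
that isn't another entry in the same dictionary.

```python
# A toy "Chinese-Chinese dictionary": every definition is made of other
# undefined symbols, so lookups never bottom out in anything non-symbolic.
dictionary = {
    "squiggle": ["squoggle", "squaggle"],
    "squoggle": ["squaggle", "squiggle"],
    "squaggle": ["squiggle", "squoggle"],
}

def expand(symbol, depth):
    """Chase definitions for `depth` steps; we only ever reach more symbols."""
    frontier = [symbol]
    for _ in range(depth):
        frontier = [s for sym in frontier for s in dictionary[sym]]
    return frontier

# However deep we look, every result is still just a dictionary entry.
print(all(s in dictionary for s in expand("squiggle", 5)))  # True
```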

This dictionary example is Harnad's, and he takes it as an indication that
Searle is right as far as the original Chinese Room situation goes.  Harnad
says that what is needed is to "ground" the symbols in some way, to
attach their meaning to the world.  This he does through the use of
transducers, in essence giving the Robot Reply.  However, it's not clear
to me how this helps, since the transduced "information" is symbolic once
it gets past the transducers.

To take an alternate view on the issue, if one demands grounding of
symbols through transducers, then one is denying that implementations
such as SHRDLU, which has its own artificial reality built into it, can
actually contain meaning, since the *entire universe* for that entity
is run in a purely symbolic environment.  For poor SHRDLU, none of its
symbols are "grounded" in the real world, and therefore all it can do
is the equivalent of reading a Chinese-Chinese dictionary, with no
notion of what the symbols *really* mean.  Under the demand for
transducer grounding, SHRDLU can have no semantics.
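To see how thoroughgoing the problem is for SHRDLU, consider a toy
blocks-world sketch (illustrative only; this is not SHRDLU's actual
implementation, and the block names are invented).  The entity's *entire
universe* is one symbolic data structure: "perception" and "action" are
both just operations on it.

```python
# Toy SHRDLU-style blocks world: the program's whole universe is a
# symbolic data structure, so no symbol is grounded outside the program.
world = {"block-A": "on-table", "block-B": "on-block-A"}

def perceive(name):
    """'Perception' here is just a lookup in the symbolic world model."""
    return world[name]

def move(name, destination):
    """'Action' likewise only rewrites symbols in the same structure."""
    world[name] = destination

move("block-B", "on-table")
print(perceive("block-B"))  # on-table
```

Under the transducer-grounding demand, nothing in this loop ever touches
the real world, so on that view none of its symbols can mean anything.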


I don't know how problematic AI folks will find this result.  They may
be quite willing to agree that a purely AR SHRDLU doesn't have semantics,
but that a SHRDLU connected to the outside world does.  However, given that
with technological advances reality could be "represented" (I won't beg
the question and say "simulated") to any arbitrary degree of precision
in an artificial-reality setup, this seems to commit strong AI supporters
to a rather strange claim: that only *real* reality can generate
semantics -- that no matter *how* close the AR is to the real thing,
only the real thing can give rise to meaning.

- michael
