From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!mips!darwin.sura.net!convex!mips.mitek.com!spssig.spss.com!markrose Mon May 25 14:05:45 EDT 1992
Article 5691 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!mips!darwin.sura.net!convex!mips.mitek.com!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Mean thoughts on what meaning means
Message-ID: <1992May16.003049.6758@spss.com>
Date: Sat, 16 May 1992 00:30:49 GMT
References: <1992May14.164117.25016@psych.toronto.edu> <1992May14.221449.3721@spss.com> <1992May15.152549.13330@psych.toronto.edu>
Nntp-Posting-Host: spssrs7.spss.com
Organization: SPSS Inc.
Lines: 72

In article <1992May15.152549.13330@psych.toronto.edu> michael@psych.toronto.edu 
(Michael Gemar) writes:
>If all you've got in the encyclopedia are more symbols, then you're still
>stuck.  Imagine trying to learn how to read Chinese from a Chinese-Chinese
>dictionary.  You want to know what "squiggle-squoggle" means.  So
>you look it up, and its definition reads: "Squoggle squiggle-squiggle
>squaggle squoggle."  Do you now know what "squiggle-squoggle" means? 
>Of course not.  Is there any way to bootstrap yourself *solely* using
>the Chinese-Chinese dictionary?  No.     
>
>This above is Harnad's example, and he takes it as an indication that
>Searle is right as far as the original Chinese Room situation.  Harnad
>says that what is needed is to "ground" the symbols in some way, to 
>attach their meaning to the world.  This he does through the use of
>transducers, in essence giving the Robot Reply.  However, this doesn't
>seem to help, as the transduced "information" is symbolic once it
>gets past the transducers.  It's not clear to me how this helps.

Amazing thing, transducers.  There's something grounded to the world on
one side, and meaningless symbols on the other.  Very interesting; 
apparently a physical device can desemanticize its inputs.  This suggests
that we could build a resemanticizer to turn symbols back into 
meaning-bearing stuff...

Better yet, why don't we move the boundaries of the system outward a bit,
so the transducers are part of the system?  Does the system *including
transducers* understand?  Note that we are no longer talking about a
system of pure symbol manipulation (tho' we haven't added much more).

Or let's look at it another way: why do you want to consider the output
of the transducers as merely symbolic?  Because in a digital computer the
output is in the form of binary data, identical in low-level form to
program code or data?  Well, what if we invent a new computer that can
manipulate photographs, sound recordings, etc., directly, *without* so
translating?  (That is, where a traditional computer is restricted to
operations on bit patterns, our new computer can operate on photographs etc.)
Now we can throw out the transducers, and they will no longer be able to
skim off the world-grounding nature of the inputs.  Does the system
understand now?

If you think such a computer isn't possible, I'll give you one: Searle
sitting in the Chinese room.  If the rulebooks contain pictures which
Searle can process, it's much harder to maintain that there is no way
to ground the Room's symbols in reality.

To put it another way, yes, perhaps you could learn Chinese from a Chinese-
Chinese dictionary-- if it were illustrated.

Actually, I don't really believe that transducers remove world-grounding.
After all, our own brains only have access to the world through a similar
mechanism: sounds, pictures, sensations, all are transmitted to the brain
through the medium of neuron firings.  If we can be grounded to the world
with such a system, so can a robot.

>To take an alternate view on the issue, if one demands grounding of
>symbols through transducers, then one is denying that implementations
>such as SHRDLU, which has built into it its own artificial reality, can
>actually contain meaning, since the *entire universe* for that entity
>is run in a purely symbolic environment.  For poor SHRDLU, none of its
>symbols are "grounded" in the real world, and therefore all it can do
>is the equivalent of reading a Chinese-Chinese dictionary, with no
>notion of what the symbols *really* mean.  Under the demand for
>transducer grounding, SHRDLU can have no semantics.

Why not say that SHRDLU is grounded in a virtual world, not the real world?
Or, if you like, divide the program into two systems, the world simulator
and the AI algorithm.  If the latter is correctly designed, it shouldn't
matter if its i/o relates to the real world or to a virtual world.  The
algorithm portion is grounded in whichever world it's connected to.
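To make the division concrete, here's a minimal sketch (all names are hypothetical, not SHRDLU's actual code): an agent whose i/o goes through a fixed interface, so that a toy blocks-world simulator could be swapped for real-world transducers without touching the agent at all.

```python
# A toy illustration of splitting a program into a world simulator and
# an AI algorithm.  The agent only sees the sense()/act() interface, so
# it is "grounded" in whichever world happens to be behind that interface.

class SimulatedWorld:
    """A tiny blocks-world simulator, standing in for a SHRDLU-style
    virtual world.  A real-robot version would expose the same methods
    but talk to cameras and motors instead."""
    def __init__(self):
        # Each block maps to whatever it currently rests on.
        self.blocks = {"A": "table", "B": "A"}

    def sense(self):
        # The agent's percept: a snapshot of the world state.
        return dict(self.blocks)

    def act(self, block, dest):
        # Effect a change in the (simulated) world.
        self.blocks[block] = dest


class Agent:
    """The AI-algorithm half.  Nothing here reveals whether the world
    on the other side of the interface is real or simulated."""
    def __init__(self, world):
        self.world = world

    def put_on_table(self, block):
        if self.world.sense()[block] != "table":
            self.world.act(block, "table")
        return self.world.sense()[block]


world = SimulatedWorld()
agent = Agent(world)
print(agent.put_on_table("B"))   # the agent moves B onto the table
```

The design point is just that the agent is written against the interface, not against any particular world behind it.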

By the way, do you have a citation for Harnad's argument which you summarize
above?  It sounds like something I'd like to read.


