Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: on what meaning means
Organization: Department of Psychology, University of Toronto
References: <1992May17.071803.28448@ccu.umanitoba.ca> <1992May17.141053.7695@news.acns.nwu.edu> <1992May17.212856.2199@Princeton.EDU>
Message-ID: <1992May18.200313.23575@psych.toronto.edu>
Keywords: symbols, grounding, analog
Date: Mon, 18 May 1992 20:03:13 GMT

In article <1992May17.212856.2199@Princeton.EDU> harnad@shine.Princeton.EDU (Stevan Harnad) writes:

>A symbol system that is systematically interpretable as a
>Chinese-Chinese dictionary is just as ungrounded if it is connected to
>another symbol system that is interpretable as encyclopedic knowledge,
>and yet another one that is interpretable as objects in the real world.
>Singly and collectively, they are just squiggles and squoggles, and
>there is no way to bootstrap to meaning from that, no matter how
>systematically interpretable it all is, and no matter how coherently
>the interpretations square with one another.
>
>Grounding has to be real, through real robotic interactions with the
>real world of objects, Totally Turing-Indistinguishable from our own
>interactions with that same world.

I'm glad you've joined the debate, since I know that you've thought
a lot about the problem of symbol grounding.  However, it seems to me
that your position is subject to what can be termed "SHRDLU's Dilemma".
The problem is this:  a program can be constructed to interact with
an artificial reality (just like SHRDLU's geometric world), which, since
it is fully contained within a computer, is actually just a set of
arbitrary marks with arbitrary connections.  Such an artificial reality
presumably has as much semantics (namely, none) as the system you
describe above.  But, of course, an artificial reality can mirror the
"real" world to any level of detail you choose, at least in principle.
For example, the "objects" that SHRDLU "manipulates" could be created in
its artificial world from a combination of "atoms" plus basic rules
governing atomic interactions.  Yet such a simulated world, even if it
were simulated down to the atomic structure, would still, at least as I
understand your position (and that of others who demand "real world"
grounding), be insufficient to attach semantics to the symbols being
manipulated.  This would be true *even if a human could not distinguish
the artificial world from the real one.*
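
To make the setup concrete, here is a toy sketch in Python of what such
an implemented world amounts to (every name in it is my own invention,
not anyone's actual system).  The "world" is nothing but a table of
arbitrary tokens, and its "physics" is nothing but a rule for rewriting
that table:

    # A toy "artificial reality": the whole world is a table of
    # arbitrary tokens, and its "physics" is a rewrite rule over it.
    world = {"block-1": (0, 2), "block-2": (0, 0)}   # token -> (x, y)

    def physics(world):
        # The "law of nature": objects above the floor fall one unit per tick.
        return {obj: (x, y - 1) if y > 0 else (x, y)
                for obj, (x, y) in world.items()}

    def agent(world):
        # The "robot arm": put block-1 directly on top of block-2.
        x, y = world["block-2"]
        world = dict(world)
        world["block-1"] = (x, y + 1)
        return world

    for tick in range(3):             # run the simulation
        world = agent(physics(world))

    print(world)   # {'block-1': (0, 1), 'block-2': (0, 0)} -- more squiggles

Nothing in this loop is any less "squiggles and squoggles" than the
dictionary system above; the question is whether refining the rewrite
rules down to atomic detail could change that.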

This situation seems to lead to various problems.  On the one hand, we
could agree that a human in SHRDLU's position *would* develop semantics
(since the world in which SHRDLU finds itself can be made
indistinguishable from the "real" world), in which case we have a lack
of equivalence between human cognitive capabilities and the implemented
program (since it is argued that SHRDLU's world, being merely arbitrary
symbols with arbitrary connections, *doesn't* have semantics for it).
This concession grants that there is something special about humans in
this regard, a concession I assume you would not be willing to make.
The alternative would be to say that, for *both* entities, it is only
the "real" world which grants "real" semantics.  This seems a rather
odd claim to make, given that (presumably apart from certain quantum
phenomena) the "real" world can be exactly reproduced in electronic
form.  What is the key difference between the two, apart from the
"reality" of the "real" world, that could produce a lack of semantics
in one case and a presence of it in the other?  In any event, I am sure
that a human in an artificial reality would *believe* that they had
semantics, and I see no way in which one could be wrong about such a
belief!  This second position, then, is incoherent.

A possible way out of these problems is to claim that in SHRDLU's
case we don't have a single implementation, but rather an implementation
of an intelligence *and* an implementation of its environment (this
seems to be the solution that Antun Zirdum offers).  I don't believe
this solves the problem, however, since it is *still* the case that the
implemented environment is not grounded (it is equivalent to the
encyclopedia), and, in any event, I am not sure that there is a
principled way to divide SHRDLU from its environment (at least not
while remaining a realist about mental states).
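
The arbitrariness of that division can be seen at the same toy level
(again, these names are hypothetical): the "boundary" between SHRDLU
and its world is just a partition of one program's state, drawn
wherever we please.

    # One undivided state table for the whole simulation ...
    state = {"arm-angle": 30, "goal": "stack", "block-1": (0, 0)}

    def carve(state, agent_keys):
        # ... carved into "agent" and "environment" along any line we like.
        agent = {k: v for k, v in state.items() if k in agent_keys}
        env   = {k: v for k, v in state.items() if k not in agent_keys}
        return agent, env

    split_a = carve(state, {"arm-angle", "goal"})  # arm belongs to SHRDLU
    split_b = carve(state, {"goal"})               # arm belongs to the world

Nothing in the program itself privileges split_a over split_b; the
agent/environment line is drawn by the external interpreter, not by
the squiggles.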
 
Of course, it is possible that I have misread the position of the
symbol grounding advocates, or have erred somewhere in the argument
above.  However, if I haven't, it seems to me that there are serious 
difficulties with the assertion that symbol grounding will buy an
implemented program semantics.

(By the way, thank you for the reference list you provided at the end
 of the original posting.)


- michael