From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon May 25 14:06:17 EDT 1992
Article 5749 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Grounding: Real vs. Virtual (formerly "on meaning")
Organization: Department of Psychology, University of Toronto
References: <1992May19.003821.9450@Princeton.EDU>
Message-ID: <1992May19.220141.29649@psych.toronto.edu>
Keywords: symbol, analog, Turing Test, robotics
Date: Tue, 19 May 1992 22:01:41 GMT

In article <1992May19.003821.9450@Princeton.EDU> harnad@shine.Princeton.EDU (Stevan Harnad) writes:
>In article <1992May18.200313.23575@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>
>>it seems to me
>>that your position is subject to what can be termed "SHRDLU's Dilemma".
>>The problem is this:  a program can be constructed to interact with
>>an artificial reality (just like SHRDLU's geometric world), which, since
>>it is fully contained within a computer, is actually just a set of
>>arbitrary marks with arbitrary connections.  Such an artificial reality
>>has, presumably, as much semantics (namely, none) as the system you describe 
>>above.  But, of course, an artificial reality can mirror the "real" world
>>to any level of detail you choose, at least in principle.  For example, the
>>"objects" that SHRDLU "manipulates" could be created in its artificial
>>world through the combination of "atoms" plus basic rules governing
>>atomic interactions.  Yet, such a simulated world, even if it was simulated
>>down to the atomic structure, would still, at least, as I understand your
>>position (and those others who demand "real world" grounding) be insufficient
>>to attach semantics onto the symbols being manipulated.   This would be true
>>*even if a human would not be able to distinguish the artificial world
>>from the real one.* 
>
>All you have to remember is one thing to keep all intuitions straight
>on these questions:
>
>A computer simulation of an analog object or state is not the same
>as that object/state despite the fact that it is computationally
>equivalent to it: A simulated plane does not really fly, a simulated
>furnace does not really burn, there is no real motion in a simulated
>solar system; by the same token, there is no real thinking in a 
>simulated nervous system. Computational equivalence is not the same
>as identity.

Agreed.

>The world of objects is analog; substantial parts of the nervous
>system (and its transducers/effectors necessarily) are analog.
>Computer simulations of them can help us to understand,
>predict and explain both the world of objects and the nervous
>system, but not by BEING the world or the brain, just by being
>a formal model of them.

You seem to place great weight on the analog nature of the physical
world; indeed, this appears to be the aspect on which you rest
symbol grounding.  However, I don't think that it will bear the
weight, for a number of reasons.  First, it is not clear to me
that the physical world is fundamentally analog.  Atoms are
certainly discrete particles, and I believe that some physicists
have theorized that space and time may be discrete as well.
Whether or not that turns out to be true, we surely don't want to
rest the existence of semantics on it.  It is possible to imagine
a world whose nature at the most fundamental level *isn't* analog
(it may very well be this world), and yet whose inhabitants still
possess semantics.

Second, even if the real world *is* analog, it can be "mirrored"
(to use a neutral term) to *any arbitrary degree of accuracy*
in a virtual system (down to the quantum level).  That is to say,
even though the virtual world may in fact be digital, its resolution
can in principle be far finer than that of our transducers.  It will *appear*
to us to be the same as the real world.  Presumably any human infant
brought up in a detailed enough virtual world would act *exactly*
the same as a human infant brought up in the world being mirrored.
I find it very hard to believe that we would say the latter possesses
semantics, but not the former.  Similarly, we can imagine
SHRDLU brought up in this virtual environment, and SHRDLU's
robotic equivalent brought up in the real world.  Indeed, we can
use the same SHRDLU program, in the one case attaching the inputs
and outputs to the virtual system, and in the other case to the
real world.  It is unclear to me how one can claim that the latter
will have semantics, but not the former. 
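The architectural point above can be sketched in a few lines of code. This is a minimal illustration, not anything from SHRDLU itself; all class and function names here are hypothetical. The key property is that one unchanged control program sees only a sense/act interface, so nothing on the program's side can depend on whether a simulation or real transducers sit behind that interface:

```python
# Sketch (hypothetical names): the *same* program attached either to a
# simulated world or to a real-world transducer interface.  From the
# inside, the program cannot tell which back end it is connected to.

class SimulatedWorld:
    """A 'virtual' world: state is just numbers updated by rules."""
    def __init__(self):
        self.block_position = 0.0

    def sense(self):
        return self.block_position

    def act(self, push):
        self.block_position += push


class RobotInterface:
    """Stand-in for real transducers/effectors.  Here it merely wraps
    the same kind of state; in a deployed robot these calls would
    drive physical hardware instead."""
    def __init__(self):
        self.block_position = 0.0

    def sense(self):
        return self.block_position

    def act(self, push):
        self.block_position += push


def shrdlu_like_program(world, goal=3.0):
    """One unchanged control program: it reads inputs and issues
    outputs through the sense/act interface, and never sees what
    lies behind that interface."""
    for _ in range(10):
        error = goal - world.sense()
        if abs(error) < 1e-9:
            break
        world.act(error / 2)      # push the block halfway toward the goal
    return world.sense()


# The identical program runs against either back end.
virtual_result = shrdlu_like_program(SimulatedWorld())
robotic_result = shrdlu_like_program(RobotInterface())
```

Since the two back ends present identical interfaces and dynamics, the program's behavior is identical in both cases, which is just the dilemma restated in code: any semantics attributed to the "robotic" run would have to be denied to a computationally indistinguishable "virtual" run.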
 
>Searle showed that the symbols-only form of the Turing Test (TT),
>calling for indistinguishability in verbal capacity, was not strong
>enough. I have advocated in its stead what I dubbed the "Total
>Turing Test" (TTT), calling for indistinguishability in both verbal and
>robotic capacity (I am still enough of a functionalist to believe that
>the still stronger "TTTT," calling for neuromolecular
>indistinguishability, is supererogatory and that all TTT-equivalent
>robots are grounded).
>
>The meanings of the symbols in a pure symbol system (whether it is
>interpretable as a furnace, a plane, a solar system, a nervous system,
>a world, or all of the foregoing together) are ungrounded. Symbol
>systems just have syntactic (formal) properties that are systematically
>interpretable (by us) as meaning what they can be interpreted as
>meaning; their semantics is EXTERNAL to the symbol system, projected
>onto it by us. They don't mean anything TO the symbol system, any more
>than the semantics of a book mean anything to the book, because there's
>nobody home in there!

Agreed.

>Now the symbols in a grounded TTT-scale robot are not only 
>systematically interpretable by us, but those interpretations
>also square systematically with the robot's TTT-performance in
>the real world of objects that the symbols are about. That connection
>is NOT merely mediated by our interpretations. It is independently
>grounded. Therefore there is no more reason to doubt that there is
>someone home in a TTT-scale robot than that there is someone at home
>in any of the rest of us (since we are all TTT-indistinguishable).
>
>But take a step back and make the "world of objects" merely a symbolic
>simulation of the world instead of the real world and your grounding is
>immediately lost, and you are back to the symbols-only TT and mediated
>meaning (and hence PLENTY of reason to doubt there's anybody home in
>there).

This is the step that I would like to see clarified, since it seems to
me to entail that humans raised in virtual realities would have no
semantics.  If that is *not* the case, then to assert that SHRDLU
would lack semantics in the same situation is to assert that there is
some relevant difference between SHRDLU and the human, which I take
it you would not want to do. 

>So, I repeat, a "virtual world" is not good enough (and the fact
>that the virtual world could drive a video that could fool our
>(REAL) senses is irrelevant -- the Turing Test concerns whether
>the robot is distinguishable from us, not whether a simulation
>of the world is distinguishable to us from the real world).

As you said above, Searle noted that a purely verbal Turing Test was not
sufficient to demonstrate semantics.  What I am questioning here is
whether symbol grounding in the real world is also sufficient.  If not,
then the (robotic) Total Turing Test also is insufficient.

>The real culprit here, the one that allows people to get hopelessly
>lost in what I've dubbed the "Hermeneutic Hall of Mirrors" which is
>created by overinterpreting virtual systems, is the fact that THINKING,
>unlike flying, heating and moving, is unobservable, so it's not as
>obvious as it ought to be that there's no more thinking going on in a
>simulated nervous system than there is moving going on in a simulated
>solar system. The computational equivalence is simply guaranteeing the
>systematic correspondence between the real properties and the virtual
>ones.

I understand the above, and I concur.  I agree that a program which
simulated a mind wouldn't necessarily be one, and that a simulated
world is not the same as the real one.  However, pondering SHRDLU's
Dilemma drives me to question how grounding symbols in the "real
world" is fundamentally different from connecting them to an
indistinguishable copy of that world.  Unless semantics arises in
the former case due to some sort of ontological priority of the
"real world", I see no principled way to distinguish between them.


To clarify and summarize my position:

1. I agree that syntax alone will not yield semantics.  Semantics must
   be attached to the symbols in some manner.

2. I assert that a virtual reality can be made as indistinguishable
   from the real world as one would like (more importantly, that 
   it can be made accurate to well below the detection threshold of
   our transducers, or the transducers of any simulation of us).

3. Given 2., I assert that a human put into an accurate enough
   rendering of the real world by a virtual environment would 
   have *exactly* the same experiences as a human in the real
   environment.  Specifically, such a human would possess semantics.
   (This in and of itself may be enough to question the necessity of
   grounding symbols in the actual physical world for the production
   of semantics.)

4. A program in the situation in 3. would not possess semantics, since
   it is purely a syntactic system in contact with a simulated
   (not real) world (this follows from 1.). 
   I take it that you agree with me here. 

The conclusion that I draw from the above is:

5. Statements 3. and 4. taken together seem to indicate that there
   is a principled difference in the way that humans and programs could
   obtain semantics.  Specifically, they point to a situation in which
   what is sufficient for humans is not sufficient for a program: the
   grounding of symbols in a virtual world, while not sufficient
   for a program, is sufficient for a human.  Therefore, symbol
   grounding in the physical world cannot be what makes symbols
   meaningful.

I welcome your comments.

- michael
