From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cs.utexas.edu!rutgers!princeton!shine.Princeton.EDU!harnad Mon May 25 14:06:07 EDT 1992
Article 5731 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cs.utexas.edu!rutgers!princeton!shine.Princeton.EDU!harnad
From: harnad@shine.Princeton.EDU (Stevan Harnad)
Newsgroups: comp.ai.philosophy
Subject: Grounding: Real vs. Virtual (formerly "on meaning")
Summary: Only real analog grounding will escape the symbol circle
Keywords: symbol, analog, Turing Test, robotics
Message-ID: <1992May19.003821.9450@Princeton.EDU>
Date: 19 May 92 00:38:21 GMT
Sender: news@Princeton.EDU (USENET News System)
Organization: Princeton University
Lines: 102
Originator: news@ernie.Princeton.EDU
Nntp-Posting-Host: shine.princeton.edu

In article <1992May18.200313.23575@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:

>it seems to me
>that your position is subject to what can be termed "SHRDLU's Dilemma".
>The problem is this:  a program can be constructed to interact with
>an artificial reality (just like SHRDLU's geometric world), which, since
>it is fully contained within a computer, is actually just a set of
>arbitrary marks with arbitrary connections.  Such an artificial reality
>has, presumably, as much semantics (namely, none) as the system you describe 
>above.  But, of course, an artificial reality can mirror the "real" world
>to any level of detail you choose, at least in principle.  For example, the
>"objects" that SHRDLU "manipulates" could be created in its artificial
>world through the combination of "atoms" plus basic rules governing
>atomic interactions.  Yet, such a simulated world, even if it were simulated
>down to the atomic structure, would still, at least as I understand your
>position (and that of others who demand "real world" grounding), be
>insufficient to attach semantics to the symbols being manipulated.  This
>would be true
>*even if a human would not be able to distinguish the artificial world
>from the real one.* 

You have to remember only one thing to keep all intuitions straight
on these questions:

A computer simulation of an analog object or state is not the same
as that object/state despite the fact that it is computationally
equivalent to it: A simulated plane does not really fly, a simulated
furnace does not really burn, there is no real motion in a simulated
solar system; by the same token, there is no real thinking in a 
simulated nervous system. Computational equivalence is not the same
as identity.
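
(To make the point concrete -- this is just an illustrative sketch of
mine, not anything from the grounding literature -- here is a "furnace"
in a few lines of Python. It is computationally equivalent to a heating
process in the sense that its numbers track the temperatures, but
running it warms nothing:

    def simulate_furnace(ambient=20.0, setpoint=200.0, k=0.1, steps=10):
        """Discrete Newton-style heating: T converges toward the setpoint."""
        temperature = ambient
        history = [temperature]
        for _ in range(steps):
            # Pure arithmetic on a float; no joules are delivered anywhere.
            temperature += k * (setpoint - temperature)
            history.append(temperature)
        return history

    print(simulate_furnace())   # a list of numbers; the room stays cold

Make the simulation as fine-grained as you like and the conclusion is
unchanged.)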

The world of objects is analog; substantial parts of the nervous
system (and, necessarily, its transducers/effectors) are analog.
Computer simulations of them can help us to understand,
predict and explain both the world of objects and the nervous
system, but not by BEING the world or the brain, just by being
a formal model of them.

Searle showed that the symbols-only form of the Turing Test (TT),
calling for indistinguishability in verbal capacity, was not strong
enough. I have advocated in its stead what I dubbed the "Total
Turing Test" (TTT), calling for indistinguishability in both verbal and
robotic capacity (I am still enough of a functionalist to believe that
the still stronger "TTTT," calling for neuromolecular
indistinguishability, is supererogatory and that all TTT-equivalent
robots are grounded).

The meanings of the symbols in a pure symbol system (whether it is
interpretable as a furnace, a plane, a solar system, a nervous system,
a world, or all of the foregoing together) are ungrounded. Symbol
systems just have syntactic (formal) properties that are systematically
interpretable (by us) as meaning what they can be interpreted as
meaning; their semantics is EXTERNAL to the symbol system, projected
onto it by us. They don't mean anything TO the symbol system, any more
than the semantics of a book means anything to the book, because there's
nobody home in there!
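
(A toy example of my own, not anything from the references: the lookup
table below is a complete "symbol system". Read 'a', 'b', 'c' as 0, 1, 2
and combine as addition mod 3, or read them as three rotations of a
triangle composed with one another; the machine does exactly the same
token-shuffling either way, and neither reading figures in its
operation:

    # One external interpretation: a=0, b=1, c=2, combine = addition mod 3.
    # Another: three rotations of a triangle, composed. The table itself
    # is pure syntax; it consults no interpretation.
    TABLE = {('a','a'): 'a', ('a','b'): 'b', ('a','c'): 'c',
             ('b','a'): 'b', ('b','b'): 'c', ('b','c'): 'a',
             ('c','a'): 'c', ('c','b'): 'a', ('c','c'): 'b'}

    def combine(x, y):
        return TABLE[(x, y)]    # purely formal token lookup

    print(combine('b', 'c'))    # 'a' -- a token comes out, not a meaning

The semantics lives entirely in the two dictionaries in your head, not
in the table.)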

Now the symbols in a grounded TTT-scale robot are not only 
systematically interpretable by us, but those interpretations
also square systematically with the robot's TTT-performance in
the real world of objects that the symbols are about. That connection
is NOT merely mediated by our interpretations. It is independently
grounded. Therefore there is no more reason to doubt that there is
someone home in a TTT-scale robot than that there is someone at home
in any of the rest of us (since we are all TTT-indistinguishable).
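
(For contrast, here is where the causal channel would have to go, with
the obvious caveat that a Python sketch of a transducer is itself just
more simulation; names like read_thermistor are hypothetical stand-ins
for real analog hardware:

    import random

    def read_thermistor():
        # Stand-in for a real analog transducer. In a TTT-scale robot
        # this would be a causal channel from the world, not more symbols.
        return random.uniform(0.0, 100.0)

    def categorize(reading):
        # Here the token "HOT" is constrained by the transduced reading
        # itself, not merely by an outside interpreter's dictionary.
        return "HOT" if reading > 60.0 else "COLD"

    print(categorize(read_thermistor()))

Replace the stand-in with a physical sensor and the symbol's connection
to what it is about no longer passes through our interpretations.)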

But take a step back and make the "world of objects" merely a symbolic
simulation of the world instead of the real world and your grounding is
immediately lost, and you are back to the symbols-only TT and mediated
meaning (and hence PLENTY of reason to doubt there's anybody home in
there).

So, I repeat, a "virtual world" is not good enough (and the fact
that the virtual world could drive a video display that could fool our
(REAL) senses is irrelevant -- the Turing Test concerns whether
the robot is distinguishable from us, not whether a simulation
of the world is distinguishable to us from the real world).

The real culprit here, the one that allows people to get hopelessly
lost in what I've dubbed the "Hermeneutic Hall of Mirrors" which is
created by overinterpreting virtual systems, is the fact that THINKING,
unlike flying, heating and moving, is unobservable, so it's not as
obvious as it ought to be that there's no more thinking going on in a
simulated nervous system than there is moving going on in a simulated
solar system. The computational equivalence simply guarantees the
systematic correspondence between the real properties and the virtual
ones.

For the details, please look at the references I posted in my prior
message.
-- 
Stevan Harnad  Department of Psychology  Princeton University
harnad@clarity.princeton.edu / harnad@pucc.bitnet / srh@flash.bellcore.com 
harnad@learning.siemens.com / harnad@elbereth.rutgers.edu / (609)-921-7771