From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!sun-barr!rutgers!mcnc!aurs01!throop Mon May 25 14:06:13 EDT 1992
Article 5741 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!sun-barr!rutgers!mcnc!aurs01!throop
From: throop@aurs01.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Real vs. Virtual
Summary: Why and how will only real analog grounding escape the symbol circle?
Message-ID: <60703@aurs01.UUCP>
Date: 19 May 92 15:49:32 GMT
References: <1992May19.003821.9450@Princeton.EDU>
Sender: news@aurs01.UUCP
Lines: 42

> harnad@shine.Princeton.EDU (Stevan Harnad)
>> michael@psych.toronto.edu (Michael Gemar)
>>it seems to me
>>that your position is subject to what can be termed "SHRDLU's Dilemma".
> A computer simulation of an analog object or state is not the same
> as that object/state despite the fact that it is computationally
> equivalent to it  [...]
> I repeat, a "virtual world" is not good enough (and the fact
> that the virtual world could drive a video that could fool our
> (REAL) senses is irrelevant -- the Turing Test concerns whether
> the robot is distinguishable from us, not whether a simulation
> of the world is distinguishable to us from the real world).

I still don't understand how this addresses SHRDLU's Dilemma.  
(I guess I'm still lost in the "Hermeneutic Hall of Mirrors")

The point I still see is that this notion of "grounding" leads to a
situation in which a robot "with semantics" is NOT (in itself)
distinguishable from one without.  (Or a human "with semantics" vs one
without, for that matter.)

Consider a robot interacting and demonstrating competence against a
virtual world, and another robot interacting and demonstrating
competence against the real world.  The two robots will (by hypothesis)
end up in identical physical states, yet one "has semantics" and the
other doesn't.

Further, consider taking the robot that was interacting with the
virtual world and have it interact with the real world.  Does it
suddenly have semantics, despite the fact that nothing about the robot
changed?  Does the reverse obtain, and the semantics suddenly disappear
from the other robot if it is interacting with the virtual world?  Does
the supposition that the robot learned its competence one way or the
other affect the answers to these questions, and why?

For these reasons, it seems a very, very peculiar thing to say that
"having semantics" in this "grounded in reality" sense is a property of
the robot (or of a human).  Or to put it another way, "having
semantics" in this sense seems an extremely uninteresting property
for an entity to have or lack.

Wayne Throop       ...!mcnc!aurgate!throop
