From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!pindor Mon May 25 14:06:31 EDT 1992
Article 5774 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Grounding: Virtual vs. Real
Message-ID: <1992May20.173113.26605@gpu.utcs.utoronto.ca>
Keywords: transduction, analog
Organization: UTCS Public Access
References: <1992May20.034459.8223@Princeton.EDU>
Date: Wed, 20 May 1992 17:31:13 GMT

In article <1992May20.034459.8223@Princeton.EDU> harnad@shine.Princeton.EDU (Stevan Harnad) writes:
>In article <60703@aurs01.UUCP> throop@aurs01.UUCP (Wayne Throop) writes:
>
>>Consider a robot interacting and demonstrating competence against a
>>virtual world, and another robot interacting and demonstrating
>>competence against the real world.  The two robots will (by hypothesis)
>>end up in identical physical states, yet one "has semantics" and the
>>other doesn't.
>
>Just three important points to keep in mind and then the point of
>grounding and the TTT is easily seen: (1) Transduction must be real,
>(2) it is part of the robot's internal functioning, and (3) how much
>of the rest of the robot's internal functioning is likewise analog
>rather than computational is moot (and certainly cannot be presupposed
>without begging the question).
>
>Now:
>
>A real person whose real senses interact with computer-generated
>sensory input rather than real-world input is still grounded (because
>people's brains are grounded and the person could just as well pass
>the TTT with natural or synthetic sensory input).
>
I am a bit confused here. I had thought that you claimed that the grounding
of symbols came from interaction with real-world input. Now you seem to be
claiming that grounding is internal to the human brain (or the TTT-passing
robot's 'brain'). Much of the recent discussion about exposing a human or a
robot to the real world and to a virtual world seems to indicate that others
also understood you to be claiming that real-world input is crucial for symbol
grounding.
 
>Exactly the same is true of a real TTT-capable robot in the same
>situation(s). The grounding still comes from its REAL TTT-passing
>capacity, not from the source of its sensory stimulation (but the
>sensory stimulation must be real stimulation, i.e., real transduction).
>
So the grounding is due to the hardware/software of the brain, human or
TTT-passing robot's (or at least to its capacity for grounding)?

>A computer, on the other hand, subdivided into a part that simulates
>a robot symbolically and a part that simulates the world symbolically
>is NOT grounded because it is not doing real transduction and could
>not pass the TTT. 

Is 'doing real transduction' then a source of grounding? I do not see how it
could be, for the part of the computer you talk about (the one that simulates
a robot symbolically): why would it matter where the world knowledge (and
response) comes from? The signals from the real world, after they leave the
transducers (eyes, ears etc.), travel to the brain as some sort of symbols,
and does it matter for the functioning of the brain how they were generated?
It seems to me totally irrelevant whether they were generated by a real-world
event or by its simulation (as long as the simulation is sufficiently
accurate). What would the brain know (or care) about it? For the robot
simulation it would make no difference if it interacted with the real world
rather than its simulation, and it could very well pass a TTT in its
simulated world (a simulated TTT, of course).
    I also think you are putting too much stake in statements like: a
simulation of a fire is not a real fire (it does not burn), etc. If the
simulation is accurate enough and its output is fed to the brain, either
through some type of 'virtual reality' setup or, better, directly, then it
becomes as real as necessary (for this brain). Any real-world event manifests
itself to the brain through electrical (or perhaps also magnetic and
chemical) signals coming from the senses, and if a simulation of the event is
good enough to produce the same signals (meaningless symbols, for some), then
I do not see how the brain would distinguish between a real event and its
simulation. I am assuming a simulation of the transducers too, but a
futuristic virtual-reality setup could perhaps be used instead (although I am
not sure it could reproduce the sensation of a runny nose, important to some
people). Some people might of course claim that a brain receives information
from the real world in some other way (extra-sensory perception?), but if so,
this should be stated explicitly and would lead to a different discussion
(this remark is not directly related to your statements).
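To make the point concrete, here is a minimal sketch (all names are invented
purely for illustration, not anyone's actual model): a 'brain' that consumes
only a stream of signals ends up in the same internal state whether the
stream comes from a real transducer or from a simulation accurate enough to
emit the very same values.

```python
def brain(signal_stream):
    """Consume sensory signals and build an internal state.

    The brain sees only the symbols in the stream; nothing in the
    stream marks where the signals originated.
    """
    state = []
    for signal in signal_stream:
        state.append(signal)  # internal processing sees only the signal values
    return state

def real_transducer():
    # Stands in for eyes/ears sampling a real-world event.
    return [0.1, 0.5, 0.9]

def simulated_transducer():
    # A simulation accurate enough to emit the same signals.
    return [0.1, 0.5, 0.9]

# The brain's resulting internal state is identical in both cases,
# so it has no basis on which to tell real from simulated input:
assert brain(real_transducer()) == brain(simulated_transducer())
```

Of course the sketch assumes exactly what I assumed above: that the
simulation reproduces the transducer output to whatever accuracy the brain
can resolve.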

>.................This is true even if the simulation contains
>all the necessary information out of which we could implement
>the requisite transducers and build the real robot that really would
>pass the TTT: That robot (once built) would be grounded, but the
>simulation on which it was based in every nontrivial respect was not
>grounded -- despite being a simulation that could correctly
>second-guess its every move in response to any real-world contingency
>second-guessed in its world-simulation!
>
>This should be no more difficult to understand than the fact
>that real flying, like real TTT-passing, requires "transducers"
>to deal with the air, etc., and even if a flight simulation
>were so complete that it contained every piece of nontrivial
>information needed to design and build a plane that actually
>flew, the simulation does not fly.

It would fly in its simulated world! Demanding that a simulated plane fly in
the real world seems unreasonable to me. And if we hooked up our brain to
this simulated world (in the way discussed above), its flying would be as
real to us as that of a real plane would be if we were hooked up to the real
world.
>-- 
>Stevan Harnad  Department of Psychology  Princeton University
>harnad@clarity.princeton.edu / harnad@pucc.bitnet / srh@flash.bellcore.com 
>harnad@learning.siemens.com / harnad@elbereth.rutgers.edu / (609)-921-7771


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca


