From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!mips!mips!decwrl!mcnc!aurs01!throop Mon May 25 14:06:38 EDT 1992
Article 5787 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!mips!mips!decwrl!mcnc!aurs01!throop
From: throop@aurs01.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Virtual vs. Real
Summary: I think I understand better now, but some problems remain
Keywords: transduction, analog
Message-ID: <60718@aurs01.UUCP>
Date: 20 May 92 19:52:41 GMT
References: <1992May20.034459.8223@Princeton.EDU>
Sender: news@aurs01.UUCP
Lines: 113

-> harnad@shine.Princeton.EDU (Stevan Harnad)
->> throop@aurs01.UUCP (Wayne Throop)

->>Consider a robot interacting and demonstrating competence against a
->>virtual world, and another robot interacting and demonstrating
->>competence against the real world.  The two robots will (by hypothesis)
->>end up in identical physical states, yet one "has semantics" and the
->>other doesn't.

-> Just three important points to keep in mind and then the point of
-> grounding and the TTT is easily seen: (1) Transduction must be real,
-> (2) it is part of the robot's internal functioning, and (3) how much
-> of the rest of the robot's internal functioning is likewise analog
-> rather than computational is moot (and certainly cannot be presupposed
-> without begging the question).
-> [...]
-> A real person whose real senses interact with computer-generated
-> sensory input rather than real-world input is still grounded  [...]
-> Exactly the same is true of a real TTT-capable robot [...]
-> The grounding [...] comes from [...] REAL TTT-passing capacity, [...]

Good.  This corrects some of my misconceptions.

However, I still see two troublesome points.

First, why aren't the processes running on the computer on my desktop
fully grounded?  They satisfy the three important points above.  Not that
they can pass the TTT, but if the processes were complicated enough, they
would be grounded in the real world via real transducers that are part of
the computer's internal functioning.  These transducers are the keyboard,
mouse, audible bell, and a few megapixels of display.  The only thing the
machine seems to lack is the "right program" and enough storage and speed.

Further, the Chinese Room itself seems fully grounded.  It accepts
inputs, "transduces" them (by having a human recognize "analog" shapes
and look them up in tables, all of which is real and part of the
CR's internal functioning), and its output is generated and then sent
back to the real world.  And by hypothesis, it can pass the TTT as well
as a human with (say) only the capability of hearing and speech.

-> This should be no more difficult to understand than the fact
-> that real flying, like real TTT-passing, requires "transducers"
-> to deal with the air, etc.

But any implementation of a symbol-crunching process on an actual
computer IS already dealing with the real world, with the "air" if you
will.  It HAS transducers that turn keystrokes and mouse events into
internal formats, and effectors that turn internal symbols into
real-world pixels.  Why should substituting a camera, microphone,
arms, fingers, and pressure sensors for the keyboard, screen,
beeper, and mouse make any difference in principle at all?

Presumably I'm still missing something.  Perhaps I'm drawing the
boundaries in the "wrong" place, or whatnot.  I'm also not sure I
see why an entity whose entire existence is spent in a virtual-world-suit
is grounded in reality, except by fiat (that is, the three rules don't seem
to imply it, in that I can't see how to derive it from them; there is
a real transducer, and it is part of the internals of the person/robot,
but it is not connected to reality).

Second, there still seem to me to be oddities with SHRDLU-derived scenarios:

-> A computer, on the other hand, subdivided into a part that simulates
-> a robot symbolically and a part that simulates the world symbolically
-> is NOT grounded because it is not doing real transduction and could
-> not pass the TTT. This is true even if the simulation contains
-> all the necessary information out of which we could implement
-> the requisite transducers and build the real robot that really would
-> pass the TTT: That robot (once built) would be grounded, but the
-> simulation on which it was based in every nontrivial respect was not
-> grounded -- despite being a simulation that could correctly
-> second-guess its every move in response to any real-world contingency
-> second-guessed in its world-simulation!

So let's raise a robot-simulation until it has full (simulated)
competence, and can "second-guess" what a real robot would do in the
real world.  Then we can implement a real robot by building transducers
and adding them to the simulated robot.  (We could even be perverse,
and have the transducers produce the identical signals that the
simulated world produced, and do OCR (or SR) on the meaningless,
ungrounded symbols produced by the simulation, and drive the 
effectors indirectly.  As I understand it from above, all these
details are moot.)

We now have a robot with full TTT-passing capability, fully grounded.
Yet where did this capability come from?  I still find it... well,
again: peculiar to say (in effect) that this capability came from the
aftermarket transducers and effectors, and that the symbolic processing
going on inside has so little to do with the "real thinking" that is
now demonstrably (and presumably) going on.

Or to put it another way, the distinction between this robot "really
thinking" when wired up to its aftermarket transducers, and merely
emitting squiggles and squoggles when not, seems a very, very
uninteresting distinction to make.  Especially since it seems to make
so little difference whether the robot's novel was produced by a
directly connected printer, or typed by a robotic arm onto a
typewriter.  (I wonder if it makes a difference whether the typewriter
is electronic or mechanical...)

This also connects to the first point, in that I don't even see that
the robot in the virtual world deserves to be called "ungrounded" in
any significant sense.  After all, the simulation is causally connected
to the real world (in that the rules that drive it are derived from
observing and rule-ifying the real world).  (If the rules reflected the
real world by sheer happenstance instead of a causal connection (via
intent)... but that's a digression.)

Again, I agree that the "grounding" distinction CAN be consistently
made, but I don't see any motivation for doing so, and still less
for basing the notion of "real thought" on this distinction.

Wayne Throop       ...!mcnc!aurgate!throop


