From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rutgers!jvnc.net!princeton!phoenix.Princeton.EDU!harnad Sun May 31 19:04:04 EDT 1992
Article 5898 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rutgers!jvnc.net!princeton!phoenix.Princeton.EDU!harnad
From: harnad@phoenix.Princeton.EDU (Stevan Harnad)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Virtual vs. Real
Message-ID: <1992May25.214006.29965@Princeton.EDU>
Date: 25 May 92 21:40:06 GMT
Sender: news@Princeton.EDU (USENET News System)
Organization: Princeton University
Lines: 112
Originator: news@ernie.Princeton.EDU
Nntp-Posting-Host: phoenix.princeton.edu

There still seems to be some confusion about the critical role of
transduction in symbol grounding, and about how it is that a robot with
TTT-capacity could be grounded if all that its [REAL] transducers ever
received as input were computer-generated stimuli. I hope this
analogy will help:

The key is in the notion of CAPACITY: Suppose we had problems telling
apart virtual flying (by computer-simulated planes, in
computer-simulated worlds) and real flying; and suppose we agreed
(quite correctly) that any candidate that can actually get aloft in
real air indistinguishably from the way real airplanes do is really
flying. (Notice that this is not verificationism: It is not that flying
capacity "means" my being able to observe it, or even its actually
being exhibited. I think it is unproblematic that having flying
capacity is having whatever it takes to be able to do what I just said.
And flying is using that capacity.)

Now, suppose someone built a real airplane, with real flying capacity,
but he never bothered to fly it. No problem yet, right? Now suppose he
built a simulated environment for that real plane, a computer-governed
wind-tunnel that would produce all the requisite resistance, etc.
(I don't know enough aerodynamics to make this realistic, but please
fill in the details if necessary) to perfectly simulate everything a
plane comes up against in getting and staying aloft. Suppose the plane
was put through all of its paces in that simulated environment, as if
it were going through a full transatlantic flight. Notice that the
plane is still a real plane and that the flying capacity it is using is
still real flying capacity. The set-up has NOT turned the candidate
into a simulated, computational airplane; its "transducers" (just about
all of the plane) are still real, etc.

Now here is the analogy with the TTT-robot: If the robot REALLY has the
capacity to pass the TTT, that capacity is not lost if it never gets to
use it, or if it uses it only in a simulated environment. Nor
does such a robot, in a simulated environment, turn into just a
computer ("in a vat") in a virtual world. Its TTT capacity (like the
plane's) is not only intact, but actually being used even when its
senses are stimulated by computer-generated input (just as yours is,
when you play a video game).

The only way to get confused here is to make the mistake of supposing
that when the input to the candidate is computer-generated rather than
real, the candidate may as well be just a computer too! The reality of the
transduction should be a partial reminder that this is not all there is
to it; the plane analogy should help too. And last, as I keep saying,
it is no more justified in the case of the TTT robot than in the case
of the real plane to imagine that the "transduction" is just a trivial
interface to a "core" that is just computing: The innards of the robot
could very well be mostly analog transduction all the way through, as
in the case of the plane. (And "analog" does not just mean continuous
as opposed to discrete, but physical, as in the case of an airplane, as
opposed to symbolic, as in the case of computations that are merely
interpretable as if they were an airplane.)

As I keep stressing, thinking, unlike flying, is unobservable (except
by the one doing the thinking). Hence it is a HYPOTHESIS that a
TTT-scale robot really thinks; it is not a definition, as in the case
of the plane, which can really fly if it can really do that observable
thing we label "flying." It is possible that the robot with TTT
capacity does not think. What is not possible is that the TTT robot
that DOES have TTT capacity DOESN'T have TTT capacity; hence the TTT
robot, like the real plane, continues to have (and use) TTT capacity in
the simulated environment.

I think it is an uninteresting terminological question what we want to
say a real plane is DOING when it is performing in the wind tunnel, but
that is where the analogy between thinking and flying breaks down:

Neither the real plane in the wind tunnel nor the TTT-robot in the
simulated environment is actually passing its respective Turing Test,
for this calls for the real world. Yet if we know that each DOES have
its requisite capacity, NOTHING INTERESTING rides on the observation
that in the simulated environment they are not really either flying or
passing the TTT (on which thinking piggy-backs, by hypothesis). That
kind of nonreality is not a problem, given that the real capacity is
there, and being used. So, fine, the plane is not really flying and you
are not really looking at objects in a (totally) simulated environment.
Unlike flying, which is an external thing, I see no reason whatever to
doubt that [if the hypothesis that thinking does piggy-back on
TTT-capacity and its execution is correct] the robot, like you,
continues to be really thinking in the simulated environment.

One last point, related to the error of thinking that transduction is
either trivial, or dispensable, or just input to a homuncular "core"
("in a vat") that is just computational: The extra "T" in TTT means
Total, and it is this T that underlies all Turing-style criteria: There
are arbitrarily many ways to do PARTS of what a person can do (hence
the endless stream of "toy" models), so the only way to get the degrees
of freedom down to normal empirical size is to scale up to Total
capacity. (In the case of the plane, for example, if a candidate could
stay aloft only for a while, it would just be jumping or falling, not
flying.) It is for this reason that our goal is not to build a
TTT-robot that is Turing indistinguishable from a real person who is
deaf, dumb, aphasic, and paralyzed, even though such people surely
have all the capacities that are necessary for thinking (and indeed
think). For methodological reasons, we have to be sure we've captured
the Total capacity first, and then worry about how much we can scale
back while still preserving thinking.

One scaled down candidate, however, can already be rejected as not
having the requisite capacity, and that is a purely computational one,
even TT-scale, waiting only to be hooked up to some trivial transducers
so it can DEMONSTRATE its capacity. That system would have TTT capacity
in about the same sense that a single cell might, if only it were
connected to the rest of the brain and body; or, to use an analogy
closer to home, only in the sense that a computer would have
computational capacity, if only it were plugged in.

-- 
Stevan Harnad  Department of Psychology  Princeton University
harnad@clarity.princeton.edu / harnad@pucc.bitnet / srh@flash.bellcore.com 
harnad@learning.siemens.com / harnad@elbereth.rutgers.edu / (609)-921-7771


