From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon May 25 14:07:13 EDT 1992
Article 5849 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Grounding: Virtual vs. Real
Organization: Department of Psychology, University of Toronto
References: <1992May21.045844.2833@Princeton.EDU>
Message-ID: <1992May22.182832.11176@psych.toronto.edu>
Date: Fri, 22 May 1992 18:28:32 GMT

In article <1992May21.045844.2833@Princeton.EDU> harnad@shine.Princeton.EDU (Stevan Harnad) writes:
>I'll try to answer some of the questions that have been raised
>about grounding, why real transduction is essential, etc.:

Your patience in laying out your position is much appreciated, since I
gather that most of what you have been presenting is also in print.

>(1) HUMAN TTT: The only grounding worth talking about is the grounding
>of the symbols inside a robot that is capable of passing the Total
>Turing Test (TTT), i.e., a robot capable of all the sensorimotor and
>verbal interactions with the real world of objects, events and states
>of affairs that we are capable of -- Turing-indistinguishable from us
>in that respect, as a matter of fact.

Here again you put in the requirement of interacting with real objects.  I
had gathered that this in fact is *not* crucial for TTT-capability.  Is
there some way to phrase the above definition so that interactions with
virtual environments, *even those which are not physically possible*, would
somehow be allowed?  (It is the emphasized phrase which I think requires
the re-definition I am discussing, since I suppose you could argue that
a TTT robot in a VR would still be *capable* of real-world interactions.)

>Note that the symbols of the robot are grounded in its TTT capacity;
>remove the TTT capacity and you remove the grounding.


>(2) DEDICATED SYSTEMS: So let's not talk any more about "grounded"
>type-writers and "grounded" computers. Neither type-writers nor
>computers can do what it takes to pass the TTT. They can't see things
>and they can't manipulate them. Only robots are relevant.

Again, you imply here that it is the seeing and manipulation of *real*
objects that is important, since SHRDLU and its ilk "see" and "manipulate"
*virtual* objects.

>(3) REAL TRANSDUCERS: Transduction is the transformation of energy
>from one form into another. The specific kind of transduction that
>is relevant to TTT robots is sensorimotor transduction: They must
>be capable of receiving the "shadows" of objects cast on their sensory
>surfaces -- and then taking it from there.
>
>(The question is begged if the assumption is made that transduction is
>trivial, just a thin surface that immediately goes into symbols:
>Everything inside a robot or ourselves could in reality be analog all
>the way through, or, as I believe, hybrid, with the symbolic part
>grounded bottom-up in the analog).

However, I thought you argued that the "analogicity" of computation is
a red herring.  Certainly I can imagine a purely discrete universe, with
purely discrete transducers.  Certainly in such a universe you would not
want to say that there could be no grounding.  The transformation of
*energy* from one form to another does not necessarily imply the 
transformation of an analog signal into a digital or symbolic one.

>But the minute you take away a robot's transduction capacity, you've
>removed its grounding, because a robot without sensorimotor capacity
>cannot pass the TTT (hence it does not HAVE TTT capacity).

>(4) "BRAIN IN A VAT": This is the point where people usually start
>to think of brains in vats, but real brains in vats are not computers
>with their trivial peripherals detached: They are mostly analog
>re-projections of the sensory and motor surfaces.

The example can presumably be adjusted so that higher-order sensory processes
are also taken from the brain and given to the device powering the virtual
reality.  We can imagine that only the non-analog parts of the brain are
preserved.  As far as I can see, the example still goes through.
(Interestingly, this is beginning to mirror the "fading qualia" argument...)



> It must be left
>COMPLETELY moot what would be left of the insides of a TTT-capable
>robot if you removed its transducers. What DEFINITELY cannot be
>presupposed (without begging the question) is that there would just be
>a computer left in there -- and this is what most of the confusion
>about virtual grounding and virtual reality arises from,

Then I am left with a confusion about what you mean when you say "transducer".
If transducers are *not* simply the peripheral devices that convert 
outside stimuli into an energy form usable for later computation (and thus
easily removed from a TTT-capable robot, leaving the symbolic processor
intact), what are they?  Certainly once, for example, the light energy that
makes up the visual world has been converted into digital electrical pulses
for use in computation, no further conversion of energy is required.  Since
I thought that the job of transducers is *only* energy conversion, I do
not see how transduction could occur once the light energy has been
digitized.   

>(5) TTT CAPACITY. Note that the criterion has always been TTT capacity
>-- nothing about its causal history, nothing about the nature of the
>actual objects in the world. A grounded (TTT) robot must simply be
>capable of performing in the world in a way that is indistinguishable
>from the way people do.
>
>The human brain is grounded (because of whatever properties, so far
>unknown, that give it TTT-power):
> It clearly has the capacity to pass
>the TTT when attached to a musculoskeletal system.

Presumably by definition.

> Its sensorimotor
>"transducers" are arguably a part of the peripheral nervous system, and
>the brain consists of the peripheral and central nervous system. I have
>no idea what a central nervous system without a peripheral nervous
>system can do, and I don't care, because we know too little about what
>EITHER of them actually does to put any substance in arguments based on
>brains in vats.

I disagree, and I think this response avoids the issue.  If human transducers
are indeed peripheral, and *not* possessed by the CNS (which seems to 
be contrary to what you implied in your "brain-in-a-vat" response above),
then it would be quite easy for us (in principle, at least) to remove
these transducers and replace the signals they send with artificially
created ones from a virtual reality.  I interpret your position as
arguing that, without the transduction that normal sense organs provide,
there would be no grounding.  But I have a very difficult time believing
that, when attached in this way to a virtual reality, a person would
not in fact have the experience of seeing and manipulating meaningful
objects, in other words, that they would possess semantics.

>So perhaps I should say the human BODY (brain and all) is grounded, to
>short-circuit further sci-fi fantasies.

As you probably know, science-fiction examples are a staple in the investigation
of philosophical problems surrounding philosophy of mind, personal identity,
and even ethics.  In addition, it is not at all clear to me that examples
such as the ones being discussed here are all that far-fetched, given
advances in both biotechnology and virtual reality.  I think that it
is unfair to place a moratorium on such speculation.  We all agree that
an embodied human has semantics.  What is at issue is whether or not
any other entities could *also* possess it.

> It's grounded whether what is
>stimulating the skin or the eyes is the shadow of a real object or just
>computer generated video and vibrators, and whether the hands are
>manipulating real objects or just joysticks connected to virtual
>objects. The human body is grounded if it can pass the real TTT; and
>even if from birth it's been exposed to nothing but artificial,
>computer-generated stimulation, it's still grounded IF IT CAN PASS THE
>TTT.
>
>The exact same thing is true of a grounded TTT robot: If it does have
>the CAPACITY to pass the TTT in the real world, it makes no difference
>if it's sat all its life in a lab getting artificial input (to its
>REAL transducers). The grounding comes from the CAPACITY, not
>its exercise or the objects on which it is exercised (and of course
>the TTT is just a TEST of that capacity -- let's not mix up the
>capacity itself with our empirical criteria for verifying it's
>there).

See my comments above about virtual realities that are not physically
possible.

>(6) HOW/WHY DOES TTT POWER GROUND SYMBOLS? Because in an ordinary,
>ungrounded symbol system, whether it is static, like a book, or
>dynamic, like a computer, symbols are manipulated only syntactically
>(based on their shapes), yet those symbols are systematically 
>interpretable as MEANING something, i.e., they have a semantics.
>But that semantics is ungrounded, because it depends on our
>interpretation. Independent of the interpretation we project on
>it, there are no "meanings" in a book or a computer, because books and
>computers are not the kinds of things that anything means anything to.
>
>The TTT puts a second set of constraints on the symbols in a symbol
>system, over and above the constraint that (i) they must be
>interpretable, by us, as being ABOUT something (which is what makes
>them a symbol system in the first place, rather than random gibberish).

If by "about" you mean "have reference", then I would disagree.  I
believe we can have systems of symbols which do not admit of an
obvious semantic interpretation (something the parenthetical comment
above seems to imply cannot exist).  This is admittedly a minor
point, since the intent of the above is clear.

>That second constraint is that (ii) the system (now a robot) must be
>able to pick out (discriminate, categorize, manipulate, identify and
>describe) the objects that its symbols are interpretable as being about
>in a way that is coherent with the interpretation and indistinguishable
>from the way we do it: The meanings of its symbols are then grounded
>directly in its robotic capacity rather than just parasitic on the
>meanings we project unto them.

This constraint troubles me, as I do not think that it solves the
problem.  However, I honestly cannot formalize what is at best a vague
discomfort with it.  If I come up with an explication, I will pass it on
to you. 


Thanks again for the discussion.  It is most stimulating.

- michael