From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon May 25 14:06:37 EDT 1992
Article 5785 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Grounding: Virtual vs. Real
Organization: Department of Psychology, University of Toronto
References: <1992May20.034459.8223@Princeton.EDU>
Message-ID: <1992May20.201742.15533@psych.toronto.edu>
Keywords: transduction, analog
Date: Wed, 20 May 1992 20:17:42 GMT

In article <1992May20.034459.8223@Princeton.EDU> harnad@shine.Princeton.EDU (Stevan Harnad) writes:
>In article <60703@aurs01.UUCP> throop@aurs01.UUCP (Wayne Throop) writes:
>
>>Consider a robot interacting and demonstrating competence against a
>>virtual world, and another robot interacting and demonstrating
>>competence against the real world.  The two robots will (by hypothesis)
>>end up in identical physical states, yet one "has semantics" and the
>>other doesn't.
>
>Just three important points to keep in mind and then the point of
>grounding and the TTT is easily seen: (1) Transduction must be real,

What is meant by "real"?  You obviously *don't* mean (contrary to my
interpretation of your position earlier) that it must involve the actual 
physical world.  Apparently it also need not involve "real" sensory
apparatus, since you accept that "synthetic sensory input" into a
person's brain would count.  To what, then, is the "real" referring, if
not the stimuli being transduced or the transducers themselves?  

It is possible, of course, that I have misread you, and that it *is*
necessary for signals to come from "real" (physical?) transducers in
order to have grounding.  I don't see how this is supportable, however,
since it argues that a brain in a vat, hooked up appropriately to a 
computer-generated world, would again *not* have semantics.  I have
a hard time believing this, in large part because I see no way of
proving that *I* am not in such a situation, and yet I *know* that
I have semantics (this is just a high-tech version of Descartes' evil
genius).

>(2) it is part of the robot's internal functioning,

What this means, or is meant to capture, is unclear to me.  

> and (3) how much
>of the rest of the robot's internal functioning is likewise analog
>rather than computational is moot (and certainly cannot be presupposed
>without begging the question).

This I will agree with.

>Now:
>
>A real person whose real senses interact with computer-generated
>sensory input rather than real-world input is still grounded (because
>people's brains are grounded and the person could just as well pass
>the TTT with natural or synthetic sensory input).

What do you mean when you say that "people's brains are grounded"?  Is
this true even in the case where they are receiving stimulation that
bypasses usual sensory organs, but is instead fed directly into the brain
(the brain-in-a-vat scenario)?  If so, then "grounding" seems to be
a property of what brains are made of, since you agree that a program
in the same situation would *not* be grounded. 

>Exactly the same is true of a real TTT-capable robot in the same
>situation(s). The grounding still comes from its REAL TTT-passing
>capacity, not from the source of its sensory stimulation (but the
>sensory stimulation must be real stimulation, i.e., real transduction).

Why do you equate "real stimulation" with "real transduction"?  I can
imagine cases in which the stimulation does not involve transduction
(e.g., brains in vats).  It is not clear at all to me that these two
things are necessarily equivalent.

>A computer, on the other hand, subdivided into a part that simulates
>a robot symbolically and a part that simulates the world symbolically
>is NOT grounded

OK so far...


> because it is not doing real transduction and could
>not pass the TTT.

But this is where I get confused.  I simply do *not* see how transduction
can be a necessary condition for semantics, since a brain in a vat hooked
up to a virtual reality would *not* perform transduction, but would *still*
have semantics.  


> This is true even if the simulation contains
>all the necessary information out of which we could implement
>the requisite transducers and build the real robot that really would
>pass the TTT: That robot (once built) would be grounded, but the
>simulation on which it was based in every nontrivial respect was not
>grounded -- despite being a simulation that could correctly
>second-guess its every move in response to any real-world contingency
>second-guessed in its world-simulation!
>

>... real flying, like real TTT-passing, requires "transducers"
>to deal with the air, etc., and even if a flight simulation
>were so complete that it contained every piece of nontrivial
>information needed to design and build a plane that actually
>flew, the simulation does not fly.

I believe I understand the point you are trying to make here, but
I'm not sure it is as germane to the discussion as it is presented.
A person experiencing a "simulated" world would
not "really" fly either, but they would have *experiences* which
would be *indistinguishable* from those of real flying (assuming a
sophisticated enough virtual reality).  The *experiences* would be
real, regardless of the reality of the world in which they are
experienced.  My understanding is that our disagreement is
about whether or not a program has the same (or any) experiences
*in exactly the same situation that we agree a human does*, namely,
in a "simulated" environment.

- michael
