From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!agate!spool.mu.edu!think.com!sdd.hp.com!uakari.primate.wisc.edu!unmvax!constellation!hardy.math.okstate.edu!gindrup Mon May 25 14:06:54 EDT 1992
Article 5817 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!agate!spool.mu.edu!think.com!sdd.hp.com!uakari.primate.wisc.edu!unmvax!constellation!hardy.math.okstate.edu!gindrup
From: gindrup@math.okstate.edu (Eric `'d'kidd' G..)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Real vs. Virtual (formerly "on meaning")
Keywords: symbol, analog, Turing Test, robotics
Message-ID: <1992May20.152747.14335@math.okstate.edu>
Date: 20 May 92 15:27:47 GMT
Article-I.D.: math.1992May20.152747.14335
References: <1992May19.003821.9450@Princeton.EDU>
Organization: Oklahoma State University, Math Department
Lines: 70

In article <1992May19.003821.9450@Princeton.EDU> harnad@shine.Princeton.EDU (Stevan Harnad) writes:
>
>The meanings of the symbols in a pure symbol system (whether it is
>interpretable as a furnace, a plane, a solar system, a nervous system,
>a world, or all of the foregoing together) are ungrounded. Symbol
>systems just have syntactic (formal) properties that are systematically
>interpretable (by us) as meaning what they can be interpreted as
>meaning; their semantics is EXTERNAL to the symbol system, projected
>onto it by us. They don't mean anything TO the symbol system, any more
>than the semantics of a book mean anything to the book, because there's
>nobody home in there!

Slightly tangential to the discussion, but this would seem to indicate that
if humans are TTT-positive, then they cannot be "brains in vats."  For to
be so would imply that a non-analog representation of the universe held
sufficient semantic content to allow development of the cognitive ability
in question.  However, I believe the question of whether I am a brain in
a vat has been covered sufficiently by others that I am certain I cannot
make the determination.  Therefore, since I am certain that I am
TTT-positive, either the TTT is not sufficient for the detection of
"thinking" or I am NOT a brain in a vat.
Since I know the latter is indeterminable...

>But take a step back and make the "world of objects" merely a symbolic
>simulation of the world instead of the real world and your grounding is
>immediately lost, and you are back to the symbols-only TT and mediated
>meaning (and hence PLENTY of reason to doubt there's anybody home in
>there).
>
>So, I repeat, a "virtual world" is not good enough (and the fact
>that the virtual world could drive a video that could fool our
>(REAL) senses is irrelevant -- the Turing Test concerns whether
>the robot is distinguishable from us, not whether a simulation
>of the world is distinguishable to us from the real world).

If people are just well-programmed robots, and if a person were subjected
to a sufficiently accurate model of the "world," then the person should
develop in the same ways as any person subjected to the "real" world.
The simulation would be indistinguishable from the reality.  So, how
could the person distinguish?  (Indistinguishable in the sense that the
person's sensing apparatus is insufficient to detect the variations,
if they exist.  Since in principle an analog computer could be used, the
"digitality" of the simulation could be avoided.  Would the grounding
then be as hopeless?  I would guess that to maintain consistency you
would say that it is still hopeless.  Hmm...)

>The real culprit here, the one that allows people to get hopelessly
>lost in what I've dubbed the "Hermeneutic Hall of Mirrors" which is
>created by overinterpreting virtual systems, is the fact that THINKING,
>unlike flying, heating and moving, is unobservable, so it's not as
>obvious as it ought to be that there's no more thinking going on in a
>simulated nervous system than there is moving going on in a simulated
>solar system. The computational equivalence is simply guaranteeing the
>systematic correspondence between the real properties and the virtual
>ones.

Exactly.  Thinking is (just) the movement of information in some
(unspecified) way.  That is all a simulation of an event is: the
action of some information (a program) on some other information
(the data).  Although the solar system contains more than information,
all that is necessary to make a simulation indistinguishable to people
is some information.  (Quite a lot, really...)  But, as you said in the
post, the information which the people and robots glean about the system
is imposed on it, not implicit in it.  Therefore, the actual solar
system isn't necessary for anything to grasp, abstract, and understand
the solar system.  Any system which yields to the same symbolic
imposition should be sufficient.
And yet, I think you would say that it isn't, and I'm not quite sure why...
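The "program acting on data" point can be made concrete with a minimal
sketch (mine, not from the thread; every name and constant below is just
illustrative): a one-year "solar system" stepped with semi-implicit Euler,
in which nothing ever actually moves -- a loop merely rewrites two numbers
that we choose to interpret as a position.

```python
import math

# Illustrative constants (SI units).
G = 6.674e-11          # gravitational constant
M_SUN = 1.989e30       # mass of the Sun, kg
AU = 1.496e11          # astronomical unit, m

# Initial state: a planet on a circular orbit at 1 AU.
x, y = AU, 0.0
v = math.sqrt(G * M_SUN / AU)   # circular-orbit speed
vx, vy = 0.0, v

dt = 3600.0                     # one-hour time step, s
for _ in range(24 * 365):       # roughly one simulated year
    r = math.hypot(x, y)
    ax = -G * M_SUN * x / r**3  # Newtonian gravity, symbol by symbol
    ay = -G * M_SUN * y / r**3
    vx += ax * dt               # semi-implicit Euler update
    vy += ay * dt
    x += vx * dt
    y += vy * dt

# After a year the state returns near (1 AU, 0) -- but the "orbit"
# exists only under our interpretation of x and y as a position.
print(x / AU, y / AU)
```

All the physics here is imposed from outside: the loop shuffles bits, and
the "revolution" is something we read into the numbers, which is exactly
the sense in which no moving goes on in a simulated solar system.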

-- Eric Gindrup ! gindrup@math.okstate.edu


