From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!usc!rpi!psinntp!psinntp!dg-rtp!sheol!throopw Mon Nov  9 09:36:47 EST 1992
Article 7514 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!usc!rpi!psinntp!psinntp!dg-rtp!sheol!throopw
From: throopw@sheol.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: grounding and the entity/environment boundary
Message-ID: <720937346@sheol.UUCP>
Date: 05 Nov 92 01:29:22 GMT
References: <markrose.720385670@spssig> <720582638@sheol.UUCP> <1992Nov3.181425.8089@spss.com>
Lines: 189

: From: markrose@spss.com (Mark Rosenfelder)
: Message-ID: <1992Nov3.181425.8089@spss.com>
: It would be interesting to know if the sensory and motor areas of the brain
: are really dormant during REM sleep.  Any neurologists out there?

Hmmmm?  It is pretty certain that they are NOT dormant, though I get
the impression Mark thinks I thought they *were* dormant.  For one
example, the common feeling of being unable to move in dreams and
near-dreaming states may well be due to the fact that the body is in
fact paralyzed during REM sleep.  This general paralysis doesn't seem
to affect the eyes as fully, hence the "rapid eye movements" that
result from paying attention to some internally generated visual
scene.  Conjecturally (and even in some limited and unpleasant tests
on cats that I've read about), the "twitching dog feet" syndrome
really is an analog of human REM sleep, the "rapid paw movements" or
RPM sleep of the dog corresponding to dream running.

( There may be other, better explanations for the various phenomena
  associated with REM sleep... if so, I'd appreciate hearing about them.
  The above is based on vague memories from various Science News and
  Scientific American articles on the subject over the last ten years. )

So what's the point?  The point is, the brain is processing away using
the sensory and motor processing centers, despite being in nature's
sensory deprivation tank.

Similarly, I have read of tests of visual memory, which find that a
common strategy for a task such as "how many trees were there in that
scene we showed you a while ago" is to close the eyes, imagine the
scene, and mentally count off the trees.  PET scans (as I recall)
confirm that visual processing is really going on during this
activity.  When people reason about geometrical figures, and (say)
transform them to do the "which of these is a rotated version of the
template" questions on IQ tests, their visual cortex gets really busy.
And so on.

So what's the point this time?  Well, *I* take these tentative
indications to point to static grounding (as opposed to the ability to
stay grounded, or "dynamic grounding") being something quite apart from
the senses, and so I see no reason why a computer can't have it just as
well and just as fully as a robot.  Dreams and even some forms of memory
and thought seem to be an exercise in static grounding.

: But Hawking isn't a man in a sensory deprivation tank.  He retains
: his vision, hearing, and I don't know what other senses; it's his motor
: ability that's shot.  So his case has nothing to tell us about the
: sanity of someone deprived of all sensorimotor capacity.

This amounts to the fact that Hawking's sensory deprivation isn't
total.  But a computer's deprivation isn't total either.  Consider an
analog of a pre-grounded computer conversing on a teletype.  Say, a
person in a sensory deprivation tank (an SDT), except with the ability
to twitch one finger to send morse code and feel morse-encoded
vibrations applied to some point on the skin.  I see no fundamental
reason why that person cannot remain sane, and further, remain
grounded in the use of the morse signals.  That is, I'd claim that
this person is plausibly both statically and dynamically grounded,
though clearly this form of dynamic grounding will NOT ground new
experiences AS WELL (as richly) as the statically grounded material.
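
To get a feel for how narrow that morse channel is, here's a toy
sketch (in Python; the timing follows the usual morse conventions,
but the function names and the 20 wpm figure are just my own
illustrative assumptions):

# Toy model of the one-finger morse channel described above.
MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}

def encode(text):
    # One '/' between words, one space between letters.
    return ' / '.join(' '.join(MORSE[c] for c in word)
                      for word in text.upper().split())

def units(code):
    # Standard timing: dot = 1 unit, dash = 3, gap of 1 between
    # elements, 3 between letters (inter-word gaps of 7 ignored
    # here for simplicity).
    total = 0
    for word in code.split(' / '):
        letters = word.split(' ')
        for letter in letters:
            total += sum(1 if e == '.' else 3 for e in letter)
            total += len(letter) - 1
        total += 3 * (len(letters) - 1)
    return total

msg = encode("the cat sat")
print(msg, '=', units(msg), 'units')

At a skilled 20 words per minute, one unit is about 60 milliseconds,
so the whole channel carries on the order of tens of bits per second.
The claim above is just that even this trickle is enough to stay
grounded in the symbols it carries.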

I think there's also a prejudice that "symbolic communication" is in
some way low-bandwidth, sort of stream-of-characters-at-reading-rate
or so.  But a computer could snarf a GIF-encoded image of a scene over
a LAN or the Internet, and (with enough CPU horsepower, perhaps
parallel) run it through its notional "visual cortex".  At bandwidths
approaching human visual bandwidth, let's say.  Why would that not
ground the computer in that scene every bit as well as a human viewing
the scene?

( OK, the GIF in today's technology would have lower resolution,
  a narrower field of view, would be a still instead of "live", and
  so on and on.  But I view these differences as peripheral issues.
  The central issue is: why wouldn't that process (to the computer)
  be the same as the live viewing (to a human), assuming comparable
  field of view, frame rate, resolution, and so on?  I'm sure
  folks feel poorly grounded in events they see on TV, but if it were
  ultra-HDTV with a 120 degree field of view at several zillion dpi
  in realtime?... )
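
For a feel for the actual numbers, here's the back-of-envelope
arithmetic spelled out (a sketch in Python; every figure is my own
rough assumption for illustration, not a measurement):

# Rough bandwidth comparison: teletype vs. hypothetical wide-field video.
tty_rate = 30 * 8                  # ~30 chars/sec reading rate, in bit/s

# A generous GIF still by today's technology: 640x480 at 8 bits/pixel,
# before compression.
gif_bits = 640 * 480 * 8           # ~2.5 Mbit per frame

# The hypothetical ultra-HDTV feed: 8000x4000 pixels over a 120 degree
# field, 24 bits/pixel, 30 frames/sec, raw.
hdtv_rate = 8000 * 4000 * 24 * 30  # bit/s

print("tty:  %15d bit/s" % tty_rate)
print("GIF:  %15d bits per still" % gif_bits)
print("HDTV: %15d bit/s (%dx the tty)" % (hdtv_rate, hdtv_rate // tty_rate))

Even granting heavy compression, the hypothetical feed is some eight
orders of magnitude wider than the teletype.  The question above is
whether that width (rather than anything about the wire itself) is
what matters for grounding.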

: Wouldn't a human or an AI designed for interaction with the real
: world experience frustration and distress when deprived of that
: interaction?

I don't know.  Certainly, when I try to empathize, I think *I'd* feel
frustration and distress faced with even Hawking's situation, let alone
my analog to a computer with only a tty attached. 

But remember, I'm not suggesting it's a good thing to subject an AI
designed for a rich sensory environment to an impoverished one.  I'm
merely trying to understand why people are claiming that an
impoverished sensory capability necessarily implies an ungrounded state,
whether statically or dynamically.  For this inquiry, hypothetical
frustration and distress (as the Borg say) are irrelevant.

: [...] picture a computer which has no direct experience with the world,
: and indeed no very great intelligence, but which does have a telephone with
: which it can call up a human being, who will answer any questions it
: may have about the world.  It has access to a grounded system; does that
: make it grounded?  I don't think so; not that in itself.

I don't see what is missing.  What more is needed?

( Of course, note that to even use this phone and engage in a
  conversation, the computer must be pre-grounded to a pretty
  extensive degree. )
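
For concreteness, here's roughly how I picture Mark's scenario (a
minimal sketch in Python; the class name and the ask_human callback
are hypothetical, invented just for this illustration):

class PhoneComputer:
    # A system with no world model of its own: every question is
    # forwarded over the "telephone" to a grounded human.
    def __init__(self, ask_human):
        self.ask_human = ask_human  # the phone line to the oracle
        self.notes = {}             # remembered answers, nothing more

    def answer(self, question):
        # No internal correspondence with the world; each symbol's
        # "meaning" bottoms out in a call to the human.
        if question not in self.notes:
            self.notes[question] = self.ask_human(question)
        return self.notes[question]

pc = PhoneComputer(ask_human=lambda q: input(q + ' '))

The point of contention is whether symbols that track the world only
via the human on the far end count as grounded.  (And as just noted,
even framing a question presupposes considerable pre-grounding.)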

:: I'm still most attracted to the "grounding is as grounding
:: does" sort of definition.  That is, a symbol system is grounded if the
:: physical system that realizes it reliably organizes its symbols in ways
:: that correspond to "the world".  How it does this is ITs business,
:: and irrelevant to the question of whether it IS grounded.
: I see problems with this proposal too.  How much "correspondence" is needed?
: How much detail is necessary; how many errors can be tolerated?  How broad
: must the correspondence be?  (E.g., is a system that knows everything about
: blocks and nothing about anything else grounded?)  Does it really not matter
: if there's no causal relationship between the real world and the internal
: model?

( Imagine me making shooing motions with my hands, as I say... )

Details, mere details.

But to try to respond:

        How much "correspondance" is needed?
        How much detail is necessary; how many errors can be tolerated?

This is, of course, the heart of the matter.  The only real answer I
have is "about as much as a typical human doing something closely
analogous, about as many as humans make in similar situations".   
( How closely analogous?  How typical a human.....
  Imagine me resuming my shooing motions at this point. )

      (E.g., is a system that knows everything about
       blocks and nothing about anything else grounded?)

No (or "not well"), because the symbols wouldn't have as many
interrelationships with other symbols refering to the world as those
employed by a "typical" human.  Which also, I think, responds to the
general question of "How broad must the correspondance be?".

Note that this notion of using a human as a benchmark-of-convenience is
lifted straight from Harnad.  The only difference I know of is that
I'm talking about the properties of the symbol system and its
realization, and he's talking about the shape of the entity/environment
boundary also.  (Assuming for the moment that I *understand* Harnad's
position correctly, of course.)  It's basically going back to the
original Turing test, but with the formal "contest" style rules left
vague.

: Does it really not matter
: if there's no causal relationship between the real world and the 
: internal model?

Yes and no.  And maybe.

On Tuesdays, Thursdays, and Saturdays, I think that the causal
relationship is moot, and the "humongous lookup table", or even "massive
coincidence", implementations of this "correspondence" indicate
grounding.  On Mondays, Wednesdays, and Fridays, I think that, no, the
causal link is necessary, otherwise it isn't grounding.  On Sundays, I
try to stop thinking about it so obsessively, because by that time I'm
usually getting quite dizzy.

I guess on balance, I like my Monday position best (but what do you
expect... I'm writing this on a Wednesday).  But remember, the
realizations of symbols sent in and out of a computer ARE causally
connected to the world.  Examining these symbols, we infer that their
source is grounded, because the alternatives (of humongous table, or
massive coincidence, or whatever) are so unlikely in practice.
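
To make that inference concrete, here's a toy sketch of the two
alternatives (in Python; everything here is an invented illustration,
not any real system):

import math

class CausalThermometer:
    # Answers derived causally from a (simulated) world.
    def __init__(self, world_temp):
        self.read = world_temp      # the causal link to "the world"
    def report(self, t):
        return "temp at hour %d is %.1f" % (t, self.read(t))

class HumongousTable:
    # Identical answers, baked into a table, with no causal link.
    def __init__(self, table):
        self.table = table
    def report(self, t):
        return self.table[t]        # mere lookup

world = lambda t: 20 + 5 * math.sin(t / 24.0)
causal = CausalThermometer(world)

# To match the causal device on every query, the table must enumerate
# every case in advance:
frozen = HumongousTable({t: causal.report(t) for t in range(48)})

assert causal.report(7) == frozen.report(7)  # indistinguishable from outside

The table only matches because it was built by consulting the causal
device; scale the space of queries up, and "massive coincidence" is
the only other way to get the same correspondence.  Hence the
inference: examining the symbols alone, we bet on the causal source.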

Which means, I suppose, that adding a clause saying that the
correspondence of the symbols with their referents and each other must
be due to causal relationships within the realization of the symbol
system, and between it and the world, would be just fine.  But I still
don't see how a computer lacks this solely on account of its
low-bandwidth sense connection to the world.


On a side track: it occurs to me from the above that running the
Turing Test with the human volunteers in a sensory deprivation tank
using morse, or perhaps a chording keyboard and a braille terminal
strip (or perhaps even a character stream laser-scanned onto their
retinas with no contextual cues), would eliminate most of the class of
probes brought out by somebody a while back, such as "describe the
lines in the palm of your hand in detail", and other such probes of the
prohibitive-to-simulate physical surround of the player.  Hmmmm.
--
Wayne Throop  ...!mcnc!dg-rtp!sheol!throopw


