From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sdd.hp.com!decwrl!netsys!pagesat!spssig.spss.com!markrose Tue Nov 24 10:51:01 EST 1992
Article 7554 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sdd.hp.com!decwrl!netsys!pagesat!spssig.spss.com!markrose
>From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Nov9.221842.18550@spss.com>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
References: <720582638@sheol.UUCP> <1992Nov3.181425.8089@spss.com> <720937346@sheol.UUCP>
Date: Mon, 9 Nov 1992 22:18:42 GMT
Lines: 158

In article <720937346@sheol.UUCP> throopw@sheol.UUCP (Wayne Throop) writes:
>: From: markrose@spss.com (Mark Rosenfelder)
>: Message-ID: <1992Nov3.181425.8089@spss.com>
>: It would be interesting to know if the sensory and motor areas of the brain
>: are really dormant during REM sleep.  Any neurologists out there?
>
>Hmmmm?  It is pretty certain that they are NOT dormant, though I get
>the impression Mark thinks I thought they *were* dormant.  

Yes, I did; sorry if I misunderstood you.

>So what's the point?  The point is, the brain is processing away using
>the sensual and motor processing centers, despite being in nature's
>sensory deprivation tank.

I'd be more comfortable with this if we knew exactly what dreams *are*.
For instance, one theory is that they are the brain's attempt to make sense
of random neural activity.  If that were true-- if the brain when deprived 
of sensory input starts making up its own-- it doesn't support the idea 
that world-grounded systems do so well when removed from the world.
(But of course there are other theories of dreams.)

>Consider an
>analog of a pre-grounded computer conversing on a teletype.  Say, a
>person in a SDT, except with the ability to twitch one finger to send
>morse code and feel morse-encoded vibrations applied to some point on
>the skin.  I see no fundamental reason why that person cannot remain
>sane, and further, remain grounded in the use of the morse signals.
>That is, I'd claim that this person is plausibly both statically and
>dynamically grounded, though clearly this form of dynamic grounding
>will NOT ground new experiences AS WELL (as richly) as the statically
>grounded material.

And I see no guarantee that the guy *will* remain sane-- or grounded-- 
in circumstances so far removed from those evolution has fitted him for.
But I don't see any way to resolve this by argument alone; somebody's
just got to try it.

>I think there's also a prejudice of "symbolic communication" as being
>in some way low-bandwidth, sort of stream-of-characters-at-reading-rate
>or so bandwidth.  But a computer could snarf a GIF-encoded image of a
>scene over a lan or internet, and (with enough CPU horsepower, perhaps
>parallel) run it through its notional "visual cortex".  At bandwidths
>approaching human visual bandwidth, let's say.  Why would that not
>ground the computer in that scene every bit as well as a human viewing
>the scene?

Assuming it's already statically grounded, sure, it should be just as
grounded in the scene as a human viewing the same picture.  You have
yourself supplied further differences from a human actually on the scene:

>( OK, the GIF in today's technology would have lower resolution,
>  a narrower field of view, would be a still instead of "live", and
>  so on and on.  [...] )

>But remember, I'm not suggesting it's a good thing to subject an AI
>designed for a rich sensory environment to an impoverished one.  I'm
>merely trying to understand why people are claliming that an
>impoverished sensory cabability necessarily imples an ungrounded state,
>whether statically or dynamically.  For this inquiry, hypothetical
>frustration and distress (as the Borg say) are irrelevant.

Well, I *said* I was wandering.  Still, the original question was why
a computer with a robot's database but no robotic capacity might be 
less than grounded, and if it's dysfunctional in such a situation that
seems to me to be a good answer.

>: [...] picture a computer which has no direct experience with the world,
>: and indeed no very great intelligence, but which does have a telephone with
>: which it can call up a human being, who will answer any questions it
>: may have about the world.  It has access to a grounded system; does that
>: make it grounded?  I don't think so; not that in itself.
>
>I don't see what is missing.  What more is needed?
>
>( Of course, note that to even use this phone and engage in a
>  conversation, the computer must be pre-grounded to a pretty
>  extensive degree. )

No, it has no direct experience with the world at all.  The point is that
where a robot could use its own grounding, this hypothetical system borrows
that of a human.  Suppose the system is being Turing tested.  It's frequently
stumped for an answer; that's when it calls up the human, and with his help
works out passable answers.  I don't think that grounds the computer.

The extreme example is the teletype by which the "control human" in the 
Turing Test communicates with the judges.  Lo, TT-passing answers come out
of the teletype!  Is the teletype grounded?

["grounding is as grounding does":]

>( Imagine me making shooing motions with my hands, as I say... )
>Details, mere details.
>
>But to try to respond:
>        How much "correspondance" is needed?
>        How much detail is necessary; how many errors can be tolerated?
>
>This is, of course, the heart of the matter.  The only real answer I
>have is "about as much as a typical human doing something closely
>analogous, about as many as humans make in similar situations".   
>( How closely analogous?  How typical a human.....
>  Imagine me resuming my shooing motions at this point. )

Well, it's fun to have you rather than me on the defensive.  

Presumably we're talking about correspondences between symbols and reality
in order to allow things other than robots to be grounded.  (Robots are 
grounded in your sense and mine.)  But this very extension greatly 
complicates the notion of correspondence.  According to some folks, such as
Lakoff, the meaning of human words is inextricably tied up with a gestalt-
perception of the referent, modes of physical interaction with the referent
(e.g. a chair is something you *sit* in), and even cultural factors.  How 
exactly do we find correspondences with the human benchmark in computers 
without senses and without physical interaction with the world?  

>      (E.g., is a system that knows everything about
>       blocks and nothing about anything else grounded?)
>
>No (or "not well"), because the symbols wouldn't have as many
>interrelationships with other symbols refering to the world as those
>employed by a "typical" human.  

I agree with you here.

>Note that this notion of using a human as a benchmark-of-convenience is
>lifted straight from Harnad.  The only difference I know of being that
>I'm talking about the properties of the symbol system and its
>realization, and he's talking about the shape of the entity/environment
>boundary also.  (Assuming for the moment that I *understand* Harnad's
>position correctly, of course.)  It's basically going back to the
>original Turning test, but with the formal "contest" style rules left
>vague.

An incomprehensible move, to my mind.  The Turing Test is biased, to put
it mildly, in favor of verbal intelligence.  The standard defense is to say
that you can indirectly question other aspects of intelligence, or other
phenomena; but why one can't simply test these aspects directly, or 
examine the internals of the implementation, I can't imagine.

>: Does it really not matter
>: if there's no causal relationship between the real world and the 
>: internal model?
>
>Yes and no.  And maybe.
>
>On tuesdays, thursdays and saturdays, I think that the causal
>relationship is moot, and the "humongus lookup table", or even "massive
>coincidence" implementations of this "correspondence" indicates
>grounding.  On mondays, wednesdays and fridays, I think that, no, the
>causal link is necessary, otherwise it isn't grounding.  On sundays, I
>try to stop thinking about it so obsessively, because by that time I'm
>usually getting quite dizzy.

Now, now, you must learn to dignify your statements with jargon.  Your
T/Th/Sa position could be called "correspondence-based grounding."  Your 
M/W/F position could be called "causally-based grounding."  And your 
Sunday position is "existential ambivalence."  :)


