From newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!usc!elroy.jpl.nasa.gov!ames!olivea!netsys!pagesat!spssig.spss.com!markrose Mon Nov  9 09:36:39 EST 1992
Article 7501 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!usc!elroy.jpl.nasa.gov!ames!olivea!netsys!pagesat!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Nov3.181425.8089@spss.com>
Date: 3 Nov 92 18:14:25 GMT
References: <markrose.720385670@spssig> <720582638@sheol.UUCP>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
Lines: 63

In article <720582638@sheol.UUCP> throopw@sheol.UUCP (Wayne Throop) writes:
>: From: markrose@spss.com (Mark Rosenfelder)
>: A huge proportion of the human brain is devoted
>: to sense interpretation and motor control, and surely the remainder is
>: so to speak built on top of them.  
>
>True, but the spatial, visual, and other sense-related processing that
>goes on in the human brain goes on even when (say) the eyes are closed.
>Most popularly, REM sleep, but also in terms of certain kinds of memory
>and the sort.  Clearly, the computer can use the "interpretation" part
>without the "sense" or "motor" part just as well as a human can.

It would be interesting to know if the sensory and motor areas of the brain
are really dormant during REM sleep.  Any neurologists out there?

>: If the computer's algorithm is not designed around real-world interaction,
>: I don't see that it can be clearly described as grounded; if it is, it is
>: like the man confined to a sensory deprivation tank: in danger, I would
>: think, of going insane.
>
>This really seems too much like handwaving to me.  It sounds like simply
>trying to think "what it would be like" for oneself to be a computer,
>rather than based on any particular model of the situation.  At least, I
>don't understand this point well enough to respond to my satisfaction,
>though I will say that Hawking doesn't seem to be insane, though I feel
>very uncomfortable trying to imagine what it would "be like" to be in
>his situation. 

But Hawking isn't a man in a sensory deprivation tank.  He retains his
vision, hearing, and I don't know what other senses; it's his motor ability
that's shot.  So his case has nothing to tell us about the sanity of someone 
deprived of all sensorimotor capacity.

"Insane" might have been too strong a word; try "dysfunctional."  Wouldn't
a human or an AI designed for interaction with the real world experience
frustration and distress when deprived of that interaction?  Only if it has 
emotions, of course; but I don't share the common assumption that emotion
is unimportant or even deleterious to cognition.  I can conceive of an AI
without emotions, but its consciousness would be very different from ours,
and it would be ungrounded with respect to large realms of human experience.

But I'm wandering.  Back to the computer, without sensorimotor capacity,
but endowed with the database of a grounded robot.  Is it grounded?  I think
it depends on what it does with that database.  Simple access to the
database isn't *in itself* proof of grounding.  This might be clearer with
an example: picture a computer which has no direct experience with the world,
and indeed no very great intelligence, but which does have a telephone with 
which it can call up a human being, who will answer any questions it 
may have about the world.  It has access to a grounded system; does that 
make it grounded?  I don't think so; not that in itself.
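Purely to make the telephone example concrete, here's a toy sketch (in
Python; everything in it -- the informant, its canned answers, the class
names -- is invented just for illustration).  The point is that the
"computer" holds no world model of its own; whatever correspondence its
answers have to the world lives entirely in the informant on the other
end of the line.

    def human_informant(question: str) -> str:
        # Stands in for the person on the other end of the phone.
        # In the thought experiment, this is the only grounded part.
        canned = {
            "what color is the sky?": "blue",
            "is water wet?": "yes",
        }
        return canned.get(question.lower(), "I don't know.")

    class PhoneOnlyComputer:
        # No sensors, no effectors, no internal model of the world.
        # Every query is relayed to the informant and the answer is
        # passed back verbatim -- access to a grounded system, but no
        # grounding of its own.
        def __init__(self, call):
            self.call = call          # the "telephone": a callable it can invoke

        def answer(self, question: str) -> str:
            return self.call(question)   # no interpretation, just relay

    computer = PhoneOnlyComputer(human_informant)
    print(computer.answer("What color is the sky?"))   # -> "blue"

Nothing about PhoneOnlyComputer organizes symbols in correspondence with
the world; delete the informant and it knows nothing at all.  That's the
sense in which mere access doesn't seem to me to confer grounding.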

>I'm still most attracted to the "grounding is as grounding
>does" sort of definition.  That is, a symbol system is grounded if the
>physical system that realizes it reliably organizes its symbols in ways
>that correspond to "the world".  How it does this is ITs business,
>and irrelevant to the question of whether it IS grounded.

I see problems with this proposal too.  How much "correspondence" is needed?
How much detail is necessary; how many errors can be tolerated?  How broad
must the correspondence be?  (E.g., is a system that knows everything about
blocks and nothing about anything else grounded?)  Does it really not matter
if there's no causal relationship between the real world and the internal
model?


