From newshub.ccs.yorku.ca!torn!cs.utexas.edu!usc!rpi!think.com!spool.mu.edu!olivea!uunet!mcsun!sunic!psinntp!psinntp!dg-rtp!sheol!throopw Mon Nov  9 09:36:20 EST 1992
Article 7470 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!usc!rpi!think.com!spool.mu.edu!olivea!uunet!mcsun!sunic!psinntp!psinntp!dg-rtp!sheol!throopw
From: throopw@sheol.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: grounding and the entity/environment boundary
Summary: proposed slogan: grounding is as grounding does
Message-ID: <720582638@sheol.UUCP>
Date: 1 Nov 92 00:28:08 GMT
References: <markrose.720385670@spssig>
Lines: 93

: From: markrose@spss.com (Mark Rosenfelder)
: Message-ID: <markrose.720385670@spssig>
: I see no problem with one robot acquiring its grounding from another.
: I am not sure I see what it means, however, for a computer to acquire
: grounding from a robot.  A huge proportion of the human brain is devoted
: to sense interpretation and motor control, and surely the remainder is
: so to speak built on top of them.  

True, but the spatial, visual, and other sense-related processing that
goes on in the human brain goes on even when (say) the eyes are closed.
The most popular example is REM sleep, but certain kinds of memory work
this way too.  Clearly, a computer can use the "interpretation" part
without the "sense" or "motor" part just as well as a human can.

: Why should we expect the computer, which 
: interacts with the world not at all or only by teletype, to be able to use
: a mass of knowledge which is designed for robotic real-world experience?

For the same reason that Stephen Hawking is able to use the mass of
knowledge acquired before he was incapacitated by ALS to "ground" his
thinking.  Granted, Hawking is left with his visual sense, but really,
other than that his situation is quite parallel.  And a computer can
be fed images over a LAN about as well as Hawking can view images
that others bring before him.  And while Hawking can move his wheelchair
about in the world, I don't see how this is much different from
a computer "moving about" in the world by soliciting images from
various sources available to it via LAN.

: If the computer's algorithm is not designed around real-world interaction,
: I don't see that it can be clearly described as grounded; if it is, it is
: like the man confined to a sensory deprivation tank: in danger, I would
: think, of going insane.

This really seems too much like handwaving to me.  It sounds like the
result of simply imagining "what it would be like" for oneself to be a
computer, rather than being based on any particular model of the
situation.  At least, I don't understand the point well enough to
respond to my own satisfaction, though I will say that Hawking doesn't
seem to be insane, even though I feel very uncomfortable trying to
imagine what it would "be like" to be in his situation.

: >I can think of ways to make things more objective, like "a symbol
: >system is potentially grounded if there exists a physical system
: >including the symbol system that can be considered to have
: >wide-bandwidth senses which 'directly experience' the world", or some
: >such.  
: Actually that's not bad.  The "potentially" and "can be considered" 
: are a bit waffly though.  

True.  I inserted them to wave my hands at specific problems I see.
The "potentially" is because a robot has all the direct experience
one could wish, and can build fairly arbitrary symbol systems, but
I think no current robots really *are* as grounded as, say, mammals.

The "can be considered" is because I don't think ANY physical system
IS a symbol system... it is just viewed that way.  I might have sounded
less waffly by saying something like "the physical system that
realizes the symbol system" or some such.

I dunno.  I'm still most attracted to the "grounding is as grounding
does" sort of definition.  That is, a symbol system is grounded if the
physical system that realizes it reliably organizes its symbols in ways
that correspond to "the world".  How it does this is ITs business,
and irrelevant to the question of whether it IS grounded.
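For what it's worth, that "reliably organizes" test can be put as a toy
sketch (in Python; the names `is_grounded`, `world_states`, and the
rain/sun example are all my own invention, not anyone's actual proposal).
The point is just that the check looks only across the boundary; nothing
inside the box is inspected:

```python
# Toy illustration of "grounding is as grounding does": treat
# groundedness as a behavioral test -- do the system's symbol states
# reliably track the world's states across many observations?

def is_grounded(world_states, symbol_states, threshold=0.95):
    """Return True if the symbols co-vary with the world often enough.

    world_states, symbol_states: parallel sequences of observations.
    How the system produces its symbols is ITs business; only the
    correspondence across the chosen boundary is checked.
    """
    if len(world_states) != len(symbol_states) or not world_states:
        return False
    matches = sum(1 for w, s in zip(world_states, symbol_states) if w == s)
    return matches / len(world_states) >= threshold

# A system that tracks the world counts as grounded; one that doesn't,
# doesn't -- no matter what mechanism sits inside the box.
world   = ["rain", "sun", "sun", "rain", "sun"]
tracker = ["rain", "sun", "sun", "rain", "sun"]   # reliable tracker
guesser = ["sun", "sun", "rain", "sun", "rain"]   # unreliable guesser

print(is_grounded(world, tracker))  # True
print(is_grounded(world, guesser))  # False
```

Note that moving the entity/environment boundary just changes which two
sequences get compared; the test itself is the same either way, which is
the "forced co-variation" mentioned below.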

: You mean, are our statements about computer vs. robot cognition grounded?  
: Arguably not, since we don't have any experience with real AIs.
: Both of us may be very embarrassed to re-read our remarks in 30 years.

Well, I'm often embarrassed to read my remarks much sooner than that,
but I try not to let it depress me unduly.

But no, I was thinking more of the ambiguous cases, where the same
symbols either are meaningful or not depending on the entity/environment
boundary chosen.  That is, the fuzziness in the entity/environment bound
doesn't seem to co-vary with the fuzziness in the intuitive "is
grounded" property, which makes it hard to treat groundedness as a
property of the symbol system, or of its physical realization.  The
"grounding is as grounding does" definition above does a bit better for
me here, because no matter where you draw the bound, the symbols are
grounded well if they are organized well across that particular bound
(that is, I've sort of cheated and FORCED the two to co-vary...). 

: All I am insisting on, really, is that human intelligence developed for
: specific purposes in the real world, and remains rooted in that world.  If 
: this weren't true of an AI, its intelligence, if any, would be very unhumanlike.
: This is, however, heresy for the hoary but influential philosophical position
: that human reason is transcendent, abstract, and uncontaminated by nature or 
: culture.  

I agree, but don't see how this leads to some of the other points made.
--
Wayne Throop  ...!mcnc!dg-rtp!sheol!throopw


