From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!olivea!pagesat!spssig.spss.com!markrose Tue Nov 24 10:52:46 EST 1992
Article 7708 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!olivea!pagesat!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Nov23.233708.13805@spss.com>
Date: 23 Nov 92 23:37:08 GMT
References: <721879394@sheol.UUCP> <1992Nov17.193945.1527@spss.com> <722143376@sheol.UUCP>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
Lines: 130

In article <722143376@sheol.UUCP> throopw@sheol.UUCP (Wayne Throop) writes:
>I used "borrowed" to refer to "raw" sense information being subjected
>to internal processing, that is, use of sense "organs" not part of the
>entity in question.  Eg: a GIF of a scene being subjected to analysis
>to pick out objects and such, perhaps akin to a person viewing a photograph.
>
>I used "predigested" to refer to the results of such a processing step,
>as in a robot "getting grounded" and then using "undump" to create a
>process with the same static grounding on a computer.  There is no
>human action adequately akin to this, but the notion would be something
>like a memory transplant.
>
>My claim is that computers could plausibly use borrowed and predigested
>grounding to achieve "groundedness levels" that humans can only achieve
>by "direct experience", and that any "akin to" examples therefore lead
>to an inaccurate intuition about "what it would be like" for a computer
>to be grounded in this way.  It seems to me that a computer could have
>a "groundedness level" from borrowed or predigested material on a par
>with the "groundedness level" of a human's direct experience, subject
>only to bandwidth and technology issues.
>
>I expect Mark disagrees with this.
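
(To make Wayne's "undump" notion concrete, here is a minimal sketch,
assuming the digested experience can be serialized wholesale.  The
RobotMind class and everything in it are invented for illustration;
real grounded state would presumably be far less tidy.)

import pickle

class RobotMind:
    """Hypothetical stand-in for a robot's "digested" experience:
    whatever internal state it built up by direct interaction."""
    def __init__(self):
        self.world_model = {}                 # symbols tied to past percepts

    def learn(self, label, percept):
        self.world_model[label] = percept

# On the robot: grounding acquired through its own senses.
robot = RobotMind()
robot.learn("cup", "camera frame of a red cup being grasped")

# "undump": freeze the grounded state into a byte string.
snapshot = pickle.dumps(robot)

# On a sensorless computer: the "memory transplant".
transplant = pickle.loads(snapshot)
print(transplant.world_model["cup"])          # same static grounding, no S/E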

I think I'd maintain that "borrowed" experience could lead to partial 
grounding.  It would lead to *some* grounding in the same way that you can
come to partly know what you're talking about, when speaking of a city you've
never visited, by seeing pictures of it.  It would lead to *partial* grounding
for the same reason that the kittens who could see but not interact with
the world didn't function very well.

I'm more dubious about the "predigested" experience.  A robotic system
with senses and effectors is, let us say, grounded.  I don't see any reason
to assume that the subsystem consisting of its "digested" experience,
and any processing that gets done on it, is also grounded; and if this
subsystem isn't grounded in a robot it certainly isn't in a computer.

To put it another way: if you chop up a grounded system you're eventually
going to have a bunch of ungrounded pieces.  (As a reductio, individual
neurons aren't grounded.)  A given part of the system *might* be grounded;
but this cannot be assumed, it has to be demonstrated.  And if you divide
grounded system R into subsystems X and Y, you can't prove that X is grounded
by showing that Y is not: X and Y might both be ungrounded.

>: I think I could say that an AI's dynamic grounding varies (at least) with
>: the breadth of its sensorimotor capacity and its control over the same; and
>: that its static grounding depends on how integrated its sensorimotor capacity
>: is with its architecture. 
>
>I agree with the first (more or less, with probable disagreement
>lurking in the "control over" part), but disagree with the second.
>This whole thread, being named "grounding and the entity/environment
>boundary" as it is, is an attempt to show that the "integratedness" of
>sensorimotor capacities is not a good indicator of the groundedness
>of symbols, the knowing-what-I'm-talking-about-ness of an entity's
>use of symbols.
>
>Let me give a scenario to review my point.  Consider a room containing a
>sensor/effector cluster (camera, microphone, speaker, manipulators,
>etc), a computer, and a LAN connecting the two.  If we consider the
>"entity" to be the computer, then the symbols it uses to describe the
>room are (in Mark's model) ill-grounded, because the S/E cluster is not
>"integrated with its architecture".  But if we consider the "entity" to
>be the hardware in the room taken as a whole, then the very same symbols
>produced in the very same way are now well-grounded. 

I think we're interpreting "integrated" differently.  I mean it as a 
description of the system's architecture as a whole.  If in the setup you've
described the computer's software is designed around the S/E cluster,
such that its operation doesn't even make sense if it's not connected to it,
then the cluster is well integrated into the system, and in fact I'd be
inclined to describe the system as a robot.

Where you draw boundaries around the "entity" is indeed important; but I
don't see it as completely arbitrary.  You should look at the connections
and interdependencies within and outside the boundary you've drawn.
If the computer is well-integrated with the S/E cluster, then if you consider
the computer as your entity, there will be many strong connections across the 
boundary; lots of things inside the computer can't be explained without 
reference to the S/E cluster-- you've got a subsystem on your hands.
Viewing the system as a whole reduces outside connections to a minimum. 
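
(One crude way to make this boundary criterion concrete: model the
components as a dependency graph and count the connections cut by each
candidate boundary.  The components and weights below are invented
purely for illustration.)

# Hypothetical dependency graph for the room; weights say how strongly
# one component's operation presupposes the other.
edges = {
    ("computer", "s/e cluster"): 10,    # software designed around the cluster
    ("computer", "lan"):          3,
    ("s/e cluster", "lan"):       3,
    ("s/e cluster", "world"):     5,    # sensors/effectors touch the world
}

def cut_size(entity, edges):
    """Total weight of the connections crossing the entity's boundary."""
    return sum(w for (a, b), w in edges.items()
               if (a in entity) != (b in entity))

print(cut_size({"computer"}, edges))                        # 13: loose ends
print(cut_size({"computer", "s/e cluster", "lan"}, edges))  # 5: near-minimal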

I don't want to give the impression that I'm more certain about all this 
than I am.  If we achieve AI and try out some of the strange architectures
that are conceivable, our philosophical preconceptions are bound to be
shattered and rebuilt.  I would hold pretty strongly to the idea that
groundedness requires broad, long-term sensorimotor experience; somewhat less
strongly that the experience must be direct and maintainable.

>Specifically, consider the case of two computers and two S/E clusters
>in a room, all four things on a LAN.  (A mathematician, of course,
>would smash one of the computers and one of the S/E clusters, thus
>reducing the problem to a case already solved... I'm not totally
>sure what the engineer and the physicist would do...)
>
>The two computers can trade off use of either or both of the S/E
>clusters.  In this situation, it becomes clumsy to keep track of which
>entity is which, if you insist that any grounded entity must be
>"integrated" with an S/E.  It seems very strongly attractive to model
>the situation as two grouneded entities (the computers) and two
>non-grounded S/E clusters.  The computers are the "people" here, and
>the S/E clusters just total-prosthesis suits.

Hmm.  My inclination is to say, continuing the assumption that the S/E units
are architecturally well integrated with the computer software, that there 
are indeed two grounded entities, each consisting of one computer plus both 
S/E clusters.  
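
(On the "prosthesis suit" reading, the computer-to-cluster pairing is
just a runtime binding, as in this toy sketch; every name in it is
invented.  On my reading, both clusters belong to each entity's
architecture, and only the scheduling of their use varies.)

class SECluster:
    """A sensor/effector cluster: camera, mic, speaker, manipulators."""
    def __init__(self, name):
        self.name = name
        self.user = None                  # which computer drives it right now

class Computer:
    def __init__(self, name):
        self.name = name

    def acquire(self, cluster):
        """Borrow a "body" over the LAN if nobody else is driving it."""
        if cluster.user is None:
            cluster.user = self.name
            return True
        return False

    def release(self, cluster):
        if cluster.user == self.name:
            cluster.user = None

suit1, suit2 = SECluster("suit-1"), SECluster("suit-2")
a, b = Computer("A"), Computer("B")

a.acquire(suit1); b.acquire(suit2)        # one body each...
a.release(suit1)
b.acquire(suit1)                          # ...and now B drives both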

It would be interesting to ask the computers (assuming they're AIs) what
their self-image was in this situation.  Do they say, "I have two bodies,
but I share them with my brother here", or "I have one body, this one here,
but my brother uses it sometimes, and I can use his", or "I am a disembodied
spirit, but I am able to use whatever body is available to me"?  Perhaps
there is no a priori answer-- it could depend on their programming.

>:: Note in this connection it seems that the current "grounding
>:: deficit" of the computer in this scenario is a "mere" limitation of the
>:: bandwidth and storage capacity of recording technology, not a
>:: fundamental deficit of computers because of their computer-ness.
>: True; but as you improve the technology you're moving in the 
>: direction of roboticity.
>
>I disagree.  Improvement in the sensors/effectors in no way implies
>that they are necessarily "part of the entity".  The difference between
>a robot and a computer is not really (or not only) the bandwidth of
>their interaction with the world, but whether (metaphorically) the
>peripherals are plugged into the backplane, or accessed over SCSI
>cabling (or a LAN).

I don't see why this is an important distinction.  How do you decide
whether a resource is part of the entity or not?
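
(To sharpen that question, here are the two candidate criteria written
as predicates.  The dictionary fields are invented; nothing here
settles which test is the right one.)

# Two candidate tests for "is resource R part of entity E?".

def is_part_backplane(resource):
    # Wayne's metaphor: membership follows the physical attachment.
    return resource["attached_via"] == "backplane"          # not SCSI, not LAN

def is_part_architectural(resource):
    # My criterion: membership follows architectural dependence,
    # however the bytes happen to travel.
    return resource["software_designed_around_it"]

camera = {"attached_via": "lan", "software_designed_around_it": True}
print(is_part_backplane(camera))       # False: it hangs off the LAN
print(is_part_architectural(camera))   # True: the system makes no sense without it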


