From newshub.ccs.yorku.ca!torn!cs.utexas.edu!usc!rpi!psinntp!psinntp!dg-rtp!sheol!throopw Tue Nov 24 10:52:38 EST 1992
Article 7696 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!usc!rpi!psinntp!psinntp!dg-rtp!sheol!throopw
From: throopw@sheol.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: grounding and the entity/environment boundary
Message-ID: <722143376@sheol.UUCP>
Date: 19 Nov 92 00:34:49 GMT
References: <1992Nov13.191936.7308@spss.com> <721879394@sheol.UUCP> <1992Nov17.193945.1527@spss.com>
Lines: 116

:: From: throopw@sheol.UUCP (Wayne Throop)
:: Message-ID: <721879394@sheol.UUCP>
:: Is that a fair summary?
: From: markrose@spss.com (Mark Rosenfelder)
: Message-ID: <1992Nov17.193945.1527@spss.com>
: I'm not sure: I'm not clear on the "borrowed" vs. "predigested" distinction.

I used "borrowed" to refer to "raw" sense information being subjected
to internal processing, that is, use of sense "organs" not part of the
entity in question.  E.g., a GIF of a scene being subjected to analysis
to pick out objects and such, perhaps akin to a person viewing a photograph.

I used "predigested" to refer to the results of such a processing step,
as in a robot "getting grounded" and then using "undump" to create a
process with the same static grounding on a computer.  There is no
human action adequately akin to this, but the notion would be something
like a memory transplant.
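
To make "predigested" a bit more concrete, here's a toy sketch in
Python, purely illustrative; the class, field, and file names are all
mine, not from any real system:

  # Illustrative only: "predigested" grounding as state capture and
  # restore.  All names here are hypothetical.
  import pickle

  class GroundedState:
      """Symbol/percept bindings built up by an entity's own sensing."""
      def __init__(self):
          self.symbol_bindings = {}

      def ground(self, symbol, percept):
          # "Direct experience": bind a symbol to raw sense data.
          self.symbol_bindings[symbol] = percept

  # Robot process: gets grounded via its own sensors, then dumps itself.
  robot = GroundedState()
  robot.ground("cup", {"shape": "cylinder", "weight_g": 310})
  with open("robot.dump", "wb") as f:
      pickle.dump(robot, f)    # the "undump"-style capture

  # A second, never-sensed process would load this dump (simulated
  # here in the same script) and wake up with the same static grounding:
  with open("robot.dump", "rb") as f:
      computer = pickle.load(f)
  assert computer.symbol_bindings == robot.symbol_bindings

On this model the restored process is statically grounded to exactly
the degree the robot was, without ever having sensed anything itself.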

My claim is that computers could plausibly use borrowed and predigested
grounding to achieve "groundedness levels" that humans can only achieve
by "direct experience", and that any "akin to" examples therefore lead
to an inaccurate intuition about "what it would be like" for a computer
to be grounded in this way.  It seems to me that a computer could have
a "groundedness level" from borrowed or predigested material on a par
with the "groundedness level" of a human's direct experience, subject
only to bandwidth and technology issues.

I expect Mark disagrees with this.

: I think I could say that an AI's dynamic grounding varies (at least) with
: the breadth of its sensorimotor capacity and its control over the same; and
: that its static grounding depends on how integrated its sensorimotor capacity
: is with its architecture. 

I agree with the first (more or less, with probable disagreement
lurking in the "control over" part), but disagree with the second.
This whole thread, being named "grounding and the entity/environment
boundary" as it is, is an attempt to show that the "integratedness" of
sensorimotor capacities is not a good indicator of the groundedness
of symbols, the knowing-what-I'm-talking-about-ness of an entity's
use of symbols.

Let me give a scenario to review my point.  Consider a room containing a
sensor/effector cluster (camera, microphone, speaker, manipulators,
etc), a computer, and a LAN connecting the two.  If we consider the
"entity" to be the computer, then the symbols it uses to describe the
room are (in Mark's model) ill-grounded, because the S/E cluster is not
"integrated with its architecture".  But if we consider the "entity" to
be the hardware in the room taken as a whole, then the very same symbols
produced in the very same way are now well-grounded. 
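
In code-sketch form (Python; everything here is hypothetical, and just
names the moving parts of the scenario):

  # The sensing pipeline is identical in both cases; only the entity
  # boundary we draw around it differs.
  def symbols_from_room():
      # camera -> analysis -> symbols, same pipeline either way
      return ["table", "chair", "window"]

  def grounded_by_integration(entity_parts):
      # The criterion at issue: grounded iff the S/E cluster lies
      # inside the entity's own boundary.
      return "s/e cluster" in entity_parts

  symbols = symbols_from_room()          # the very same symbols...
  print(grounded_by_integration({"computer"}))
  # False: entity = the computer alone; symbols count as ill-grounded
  print(grounded_by_integration({"computer", "lan", "s/e cluster"}))
  # True: entity = all the hardware in the room; now well-grounded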

It seems to me that this is a bad feature of a groundedness model.

Now, it's true that the "groundedness" is a property of *an* *entity*'s
use of symbols, and the two cases involve *different* entities, so
the model isn't self-contradictory, or inconsistent or any such thing.
I'm just saying it's not a useful model, and that a more useful model
would have a level-of-groundedness function that tracked the properties
of the symbol system somewhat more closely than the properties of the
entity employing it.

Specifically, consider the case of two computers and two S/E clusters
in a room, all four things on a LAN.  (A mathematician, of course,
would smash one of the computers and one of the S/E clusters, thus
reducing the problem to a case already solved... I'm not totally
sure what the engineer and the physicist would do...)

The two computers can trade off use of either or both of the S/E
clusters.  In this situation, it becomes clumsy to keep track of which
entity is which, if you insist that any grounded entity must be
"integrated" with an S/E.  It seems strongly attractive to model
the situation as two grounded entities (the computers) and two
non-grounded S/E clusters.  The computers are the "people" here, and
the S/E clusters just total-prosthesis suits.
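
A sketch of the bookkeeping, again purely illustrative:

  # If the entity must include whatever S/E it is currently using,
  # "which entity is which" churns on every handoff:
  computers = {"c1": set(), "c2": set()}   # computer -> attached clusters

  def attach(computer, cluster):
      for attached in computers.values():
          attached.discard(cluster)        # one user per cluster at a time
      computers[computer].add(cluster)

  attach("c1", "se1")
  attach("c2", "se2")
  attach("c1", "se2")    # c1 now drives both clusters
  # Integration model: entity c2 just shrank and entity c1 grew.
  # Prosthesis model: c1 and c2 are the stable "people"; the clusters
  # are swappable suits, and nothing about the entities changed.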

  [.. long comparisons involving S/E=telephone, computer=person-on-phone
      and S/E=theatrical-costumes, computer=persons-trying-out-roles
      deleted for reasons of analogy-exposition overdose ..]

:: Note in this connection it seems that the current "grounding
:: deficit" of the computer in this scenario is a "mere" limitation of the
:: bandwidth and storage capacity of recording technology, not a
:: fundamental deficit of computers because of their computer-ness.
: True; but as you improve the technology you're moving in the 
: direction of roboticity.

I disagree.  Improvement in the sensors/effectors in no way implies
that they are necessarily "part of the entity".  The difference between
a robot and a computer is not really (or not only) the bandwidth of
their interaction with the world, but whether (metaphorically) the
peripherals are plugged into the backplane, or accessed over SCSI
cabling (or a LAN).

: To test the hypothesis CI(x) -> G(x)
: (certain internals imply grounding), we can't depend on a determination
: of G(x) that simply reduces to CI(x).  But I don't think we're doing that.
: For instance, we might define grounding as requiring sensory, causal,
: high-bandwidth real-world experience.  Now we can evaluate G(x) by checking
: S(x), C(x), HB(x), and we can test CI(x) -> G(x) without falling into
: logical traps.

But S, C, and HB can all be examined without inspecting internals.
(Perhaps I'm just emphatically agreeing...)
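
For instance, a minimal sketch; the predicate names are from the
quoted text, but the particular checks and the bandwidth threshold
are made up:

  def S(x):  return x["has_sensors"]           # sensory experience
  def C(x):  return x["causally_coupled"]      # causal contact with world
  def HB(x): return x["bandwidth_mbps"] > 10   # high-bandwidth (threshold mine)

  def G(x):
      # Groundedness defined with no reference to CI(x), so the
      # hypothesis CI(x) -> G(x) can be tested without circularity.
      return S(x) and C(x) and HB(x)

  candidate = {"has_sensors": True, "causally_coupled": True,
               "bandwidth_mbps": 100}
  print(G(candidate))   # True, with no inspection of internals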

I can see that examining the S part becomes arbitrarily difficult as we
consider arbitrary or exotic entity/environment boundaries (the
question of just what is a "sense" arises, among other problems).  And
the issue of predigestion in static grounding is one where looking
at internals might help.  But in broad strokes, I think it's still the
case that the only reason for inspecting internals is to rule out
cheating of various kinds.

Anyway, I agree that inspecting S, C, and HB is reasonable, and
doesn't create any logical traps of the sort I was worried about.
--
Wayne Throop  ...!mcnc!dg-rtp!sheol!throopw