Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!yale!gumby!wupost!emory!sol.ctr.columbia.edu!eff!news.oc.com!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Nov17.193945.1527@spss.com>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
References: <1992Nov13.191936.7308@spss.com> <721879394@sheol.UUCP>
Date: Tue, 17 Nov 1992 19:39:45 GMT
Lines: 98

In article <721879394@sheol.UUCP> throopw@sheol.UUCP (Wayne Throop) writes:
>Let me try to express clearly what I think Mark and I *do* agree on,
>and what I think we still disagree on.

If we come to agree on everything, will we cease to exist?

>First, I think we can agree that what we think of as "grounding", both
>static and dynamic, requires causal, sensory experience of the situation
>an entity is said to be grounded in.

Yup.

>I think, because of the ambiguity involved in the entity/environment
>boundary, it is reasonable to suppose that computers can be grounded
>by "borrowed" experience, and statically grounded as the result of
>borrowed and "predigested" experience.
>
>I further think that a standalone computer with horribly narrow
>bandwidth senses (such as keyboard and mouse) and motor skills
>(such as pixels on a screen) can be said to be at least statically
>grounded due to predigested experience of one form or another,
>and can plausibly maintain a minimal dynamic grounding, though
>of greatly degraded quality.
>
>As I understand it, Mark disagrees about the "borrowed" experience,
>partly agrees about the "predigested" experience, and thinks that
>it would be possible to form a spectrum of "extent of groundedness"
>based on how closely the entity has control of the senses that
>supposedly ground it, and finally is skeptical that a "standalone"
>computer could possibly remain grounded, for reasons of
>sensory deprivation impacting its functioning.
>
>Is that a fair summary?

I'm not sure: I'm not clear on the "borrowed" vs. "predigested" distinction.

I think I could say that an AI's dynamic grounding varies (at least) with 
the breadth of its sensorimotor capacity and its control over the same; and 
that its static grounding depends on how integrated its sensorimotor capacity 
is with its architecture.  Both types of grounding can also be assumed to
degrade in a dysfunctional or insane entity.
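
For concreteness, here's that claim as a toy sketch (in Python; every
name, range, and the multiplicative form are my own inventions for
illustration, not a proposed real measure):

  # Toy model only; nothing here is meant as an actual metric.

  def dynamic_grounding(breadth, control):
      # Varies (at least) with the breadth of sensorimotor capacity
      # and the entity's control over it; both taken in [0, 1].
      return breadth * control

  def static_grounding(integration):
      # Depends on how integrated the sensorimotor capacity is with
      # the entity's architecture; integration in [0, 1].
      return integration

  def effective(grounding, sanity):
      # Both kinds degrade in a dysfunctional or insane entity;
      # sanity in [0, 1], with 1.0 meaning fully functional.
      return grounding * sanity

The only point of the sketch is that the two kinds of grounding vary
along different axes, and that dysfunction discounts both.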

>:: My claim is that the computer's download and perusal of GIFs from a
>:: library of grand canyon stills, [...]
>:: is ultimately just as much (though of lower bandwidth and with other
>:: limitations) a physical interaction with the canyon as that human's.
>: It's a physical interaction, yes, but to my mind it doesn't afford a lot of
>: grounding.  I'd consider a human being who's been to the Grand Canyon and
>: walked around in it to "know what he's talking about" (be grounded) much
>: more than someone who's only seen pictures of it.  But the latter person 
>: is much more grounded than someone who's only read about it.  
>
>I agree.  Note in this connection it seems that the current "grounding
>deficit" of the computer in this scenario is a "mere" limitation of the
>bandwidth and storage capacity of recording technology, not a
>fundamental deficit of computers because of their computer-ness. 

True; but as you improve the technology you're moving in the direction
of roboticity.

>: I'm not willing to say
>: that an AI system which simulates a blind paraplegic hermit in a cave
>: "has passed the Turing Test" (simply).  
>
>Ok, ok.  But an AI that has successfully simulated the capabilities
>of a blind paraplegic hermit in a cave with a breath-operated
>teletype would be very impressive, it seems to me.  At least
>potentially, there's plenty of behavioral complexity there to
>be impressed by.

Oh, I have a very healthy respect for any actual accomplishment in AI.
We both denigrated the groundedness of SHRDLU a while back, for instance,
but Terry Winograd stands high in the Valhalla of AI, or would if he were dead.

>:: As far as "examining the internals", well... [...]
>: I thought we were talking about grounding, not intelligence.  But in any
>: case I just don't see why we wouldn't want to investigate the intelligence
>: of any system by inspection of its external behavior alone.  How long
>: would it take to explicate human intelligence if we submitted ourselves
>: to this restriction?
>
>Hmmmmm.  We may be talking past each other here, because my first
>impulse was to say "yes, I agree but [...]", and then (I discovered)
>replace the "[...]" part with exactly what I'd already said.
>
>In other words, I agree that there's no reason to avoid looking
>at the internals.  It's just that the inference of intelligence,
>and even groundedness, can't (yet) depend on the internals,
>since the efficacy of the internals is what we're trying to 
>infer in the first place.  (Hope the rephrase made it clearer.)

OK, I see what you're saying.  To test the hypothesis CI(x) -> G(x)
(certain internals imply grounding), we can't depend on a determination
of G(x) that simply reduces to CI(x).  But I don't think we're doing that.
For instance, we might define grounding as requiring sensory, causal,
high-bandwidth real-world experience.  Now we can evaluate G(x) by checking
S(x), C(x), HB(x), and we can test CI(x) -> G(x) without falling into
logical traps.
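
A minimal sketch of that test structure (hypothetical attributes and
thresholds throughout; only the predicate names S, C, HB, CI, G come
from the paragraph above):

  from dataclasses import dataclass

  @dataclass
  class Entity:
      has_senses: bool         # for S(x)
      causal_contact: bool     # for C(x)
      bandwidth: float         # for HB(x); illustrative units
      certain_internals: bool  # for CI(x)

  HB_CUTOFF = 1e6  # arbitrary stand-in for "high-bandwidth"

  def S(x):  return x.has_senses
  def C(x):  return x.causal_contact
  def HB(x): return x.bandwidth >= HB_CUTOFF

  def G(x):
      # Grounding defined purely in terms of S, C, HB -- no appeal
      # to CI, so testing CI(x) -> G(x) is not circular.
      return S(x) and C(x) and HB(x)

  def hypothesis_holds(x):
      # The material conditional CI(x) -> G(x).
      return (not x.certain_internals) or G(x)

An entity with CI(x) true can then be checked for G(x) by observation
alone, which is all the hypothesis test needs.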


