From newshub.ccs.yorku.ca!torn!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!spool.mu.edu!agate!doc.ic.ac.uk!uknet!mcsun!sunic!psinntp!psinntp!dg-rtp!sheol!throopw Tue Nov 24 10:52:10 EST 1992
Article 7651 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!spool.mu.edu!agate!doc.ic.ac.uk!uknet!mcsun!sunic!psinntp!psinntp!dg-rtp!sheol!throopw
From: throopw@sheol.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: grounding and the entity/environment boundary
Summary: pause for calibration... then more on degree of grounding, and relevance of internals
Message-ID: <721879394@sheol.UUCP>
Date: 16 Nov 92 00:15:27 GMT
References: <1992Nov13.191936.7308@spss.com>
Lines: 87

:: From: throopw@sheol.UUCP (Wayne Throop)
:: But it is my claim that computers *do* have physical interaction
:: with the world, just of a narrower bandwidth than robots.  
: From: markrose@spss.com (Mark Rosenfelder)
: Message-ID: <1992Nov13.191936.7308@spss.com>
: And that's enough to establish a correspondence with the human benchmark,
: and thus groundedness?  If that's all it takes, it would seem that *any*
: computer system, not just an AI, is "grounded."

Now, now.  I didn't say that's *all* it takes, any more than Mark
said (far) earlier in this thread that high bandwidth is *all* it takes.
In addition to the "causal connection" (tm, bletch), some behavioral
and structural complexity (though see below about internals) is
required to match the facts of human interaction with the world.

Let me try to express clearly what I think Mark and I *do* agree on,
and what I think we still disagree on.

First, I think we can agree that what we think of as "grounding",
both static and dynamic, requires causal, sensory experience of the
situation an entity is said to be grounded in.

I think that, because of the ambiguity involved in the
entity/environment boundary, it is reasonable to suppose that
computers can be grounded by "borrowed" experience, and statically
grounded as the result of borrowed and "predigested" experience.

I further think that a standalone computer with horribly narrow
bandwidth senses (such as keyboard and mouse) and motor skills
(such as pixels on a screen) can be said to be at least statically
grounded due to predigested experience of one form or another,
and can plausibly maintain a minimal dynamic grounding, though
of greatly degraded quality.

As I understand it, Mark disagrees about the "borrowed" experience,
partly agrees about the "predigested" experience, and thinks that
it would be possible to form a spectrum of "extent of groundedness"
based on how closely the entity controls the senses that
supposedly ground it, and finally is skeptical that a standalone
computer could possibly remain grounded, for reasons of
sensory deprivation impacting its functioning.

Is that a fair summary?

:: My claim is that the computer's download and perusal of GIFs from a
:: library of grand canyon stills, [...]
:: is ultimately just as much (though of lower bandwidth and with other
:: limitations) a physical interaction with the canyon as that human's.
: It's a physical interaction, yes, but to my mind it doesn't afford a lot of
: grounding.  I'd consider a human being who's been to the Grand Canyon and
: walked around in it to "know what he's talking about" (be grounded) much
: more than someone who's only seen pictures of it.  But the latter person 
: is much more grounded than someone who's only read about it.  

I agree.  Note that in this connection the current "grounding
deficit" of the computer in this scenario seems to be a "mere"
limitation of the bandwidth and storage capacity of recording
technology, not a fundamental deficit of computers because of their
computer-ness.

: I'm not willing to say
: that an AI system which simulates a blind paraplegic hermit in a cave
: "has passed the Turing Test" (simply).  

Ok, ok.  But an AI that has successfully simulated the capabilities
of a blind paraplegic hermit in a cave with a breath-operated
teletype would be very impressive, it seems to me.  At least
potentially, there's plenty of behavioral complexity there to
be impressed by.

:: As far as "examining the internals", well... [...]
: I thought we were talking about grounding, not intelligence.  But in any
: case I just don't see why we wouldn't want to investigate the intelligence
: of any system by inspection of its external behavior alone.  How long
: would it take to explicate human intelligence if we submitted ourselves
: to this restriction?

Hmmmmm.  We may be talking past each other here, because my first
impulse was to say "yes, I agree but [...]", and then (I discovered)
replace the "[...]" part with exactly what I'd already said.

In other words, I agree that there's no reason to avoid looking
at the internals.  It's just that the inference of intelligence,
and even groundedness, can't (yet) depend on the internals,
since the efficacy of the internals is what we're trying to 
infer in the first place.  (Hope the rephrase made it clearer.)
--
Wayne Throop  ...!mcnc!dg-rtp!sheol!throopw