From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!olivea!pagesat!spssig.spss.com!markrose Tue Nov 24 10:52:00 EST 1992
Article 7637 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!olivea!pagesat!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Nov13.191936.7308@spss.com>
Date: 13 Nov 92 19:19:36 GMT
References: <720937346@sheol.UUCP> <1992Nov9.221842.18550@spss.com> <721458252@sheol.UUCP>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
Lines: 62

In article <721458252@sheol.UUCP> throopw@sheol.UUCP (Wayne Throop) writes:
>From: markrose@spss.com (Mark Rosenfelder)
>: How exactly do we find correspondences with the human benchmark in
>: computers without senses and without physical interaction with the
>: world?
>
>But it is my claim that computers *do* have physical interaction
>with the world, just of a narrower bandwidth than robots.  

And that's enough to establish a correspondence with the human benchmark,
and thus groundedness?  If that's all it takes, it would seem that *any*
computer system, not just an AI, is "grounded."

>My claim is that the computer's download and perusal of GIFs from a
>library of Grand Canyon stills, and using that as a basis for a
>discussion with somebody who has visited the actual canyon, is
>ultimately just as much (though of lower bandwidth and with other
>limitations) a physical interaction with the canyon as that human's.

It's a physical interaction, yes, but to my mind it doesn't afford a lot of
grounding.  I'd consider a human being who's been to the Grand Canyon and
walked around in it to "know what he's talking about" (be grounded) much
more than someone who's only seen pictures of it.  But the latter person 
is much more grounded than someone who's only read about it.  

>:: It's basically going back to the original Turing test, but with the
>:: formal "contest" style rules left vague.
>: An incomprehensible move, to my mind.  The Turing Test is biased, to
>: put it mildly, in favor of verbal intelligence.  The standard defense
>: is to say that you can indirectly question other aspects of
>: intelligence, or other phenomena; but why one can't simply test these
>: aspects directly, or examine the internals of the implementation, I
>: can't imagine.
>
>My point in eliminating the "contest" style rules was precisely so that
>"we can simply test these aspects directly".  But if the entity being
>tested doesn't *have* (say) sight or (again say) hands, Harnad's TTT
>then throws up its metaphorical hands and gives an "undecidable"
>answer.  My proposal would simply proceed by adapting the "good as a
>human" test to be "as good as a human with this limitation".  

That's fine, so long as the qualification isn't elided.  The AI without
visual capabilities can only pass a *limited* TTT.  What I don't accept
is the notion that it doesn't matter that the test is only a limited one.  Until
we have a really good theory of cognition in hand, I'm not willing to say
that an AI system which simulates a blind paraplegic hermit in a cave
"has passed the Turing Test" (simply).  

(Not that you're saying so, necessarily; but some people seem to.)

>As far as "examining the internals", well... the whole point is to help
>answer the question of whether these internals are sufficient to
>support the inference of "intelligent" from their behavior.  Thus,
>examining them seems irrelevant (other than to rule out cheating by
>some equivalent of the "phone" from above, what we might call the
>chess-playing-dwarf ploy from the classical case of this).

I thought we were talking about grounding, not intelligence.  But in any
case I just don't see why we would want to investigate the intelligence
of any system by inspection of its external behavior alone.  How long
would it take to explicate human intelligence if we submitted ourselves
to this restriction?


