From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uwm.edu!rpi!scott.skidmore.edu!psinntp!psinntp!dg-rtp!sheol!throopw Tue Nov 24 10:51:28 EST 1992
Article 7590 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uwm.edu!rpi!scott.skidmore.edu!psinntp!psinntp!dg-rtp!sheol!throopw
From: throopw@sheol.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: grounding and the entity/environment boundary
Message-ID: <721458252@sheol.UUCP>
Date: 11 Nov 92 02:11:38 GMT
References: <720582638@sheol.UUCP> <1992Nov3.181425.8089@spss.com> <720937346@sheol.UUCP> <1992Nov9.221842.18550@spss.com>
Lines: 106

: From: markrose@spss.com (Mark Rosenfelder)
: Message-ID: <1992Nov9.221842.18550@spss.com>
:: The point is, the brain is processing away using
:: the sensory and motor processing centers, despite being in nature's
:: sensory deprivation tank.
: I'd be more comfortable with this if we knew exactly what dreams *are*.

Agreed.  The indications are very tentative, but this, coupled with a
few suggestive studies of cognitive strategies, seems to indicate to me
that memorized and imagined scenes are at least partly processed by
"borrowed" sensory processing.  I would be totally unsurprised if this
turned out to be wrong, but I find it a very interesting possibility.

: For instance, one theory is that [dreams] are the brain's attempt 
: to make sense of random neural activity.  

I've heard the notion advanced that the "tunnel of light" near-death
experiences are the brain's attempt to make sense of fading, random
neural activity, the idea having been explored by computer simulations
of what the visual cortex would make of plausible near-failure modes of
neural activity.  Just a little digression...
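
(To stretch the digression a little further: here is a toy of my own,
emphatically *not* the simulations alluded to above.  If you assume the
cortical sheet maps log-polar onto the visual field, then *uniform*
random cortical firing, projected back into field coordinates, piles up
near the fovea: a bright center fading toward the edges.  Every
parameter below is invented for illustration.)

import math
import random

WIDTH, HEIGHT = 41, 21   # characters in the rendered "visual field"
SAMPLES = 4000           # random cortical activations to project

def cortex_to_field(u, v):
    # Inverse log-polar map: cortical (u, v) -> visual field (x, y).
    # Eccentricity grows exponentially with cortical distance u, which
    # is what concentrates uniform cortical noise near the center.
    r = math.exp(u) - 1.0
    theta = v * 2.0 * math.pi
    return r * math.cos(theta), r * math.sin(theta)

field = {}
for _ in range(SAMPLES):
    u = random.uniform(0.0, 3.0)              # uniform "noise" on cortex
    x, y = cortex_to_field(u, random.random())
    cell = (int(round(x)), int(round(y)))
    field[cell] = field.get(cell, 0) + 1

shades = " .:*#@"
top = max(field.values())
for y in range(-(HEIGHT // 2), HEIGHT // 2 + 1):
    row = ""
    for x in range(-(WIDTH // 2), WIDTH // 2 + 1):
        a = field.get((x, y), 0)
        row += shades[min(len(shades) - 1, a * (len(shades) - 1) // top)]
    print(row)   # bright middle, dim surround: a crude tunnel of light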

:: For this inquiry, hypothetical
:: frustration and distress (as the Borg say) are irrelevant.
: Well, I *said* I was wandering.  Still, the original question was why
: a computer with a robot's database but no robotic capacity might be
: less than grounded, and if it's dysfunctional in such a situation that
: seems to me to be a good answer.

Ah, I think I follow better now.  But I find the cases of physical
disability (or inability) and mental dysfunction (or nonfunction)
distinct, and see no reason to suppose the latter on account of the
former.  That is, I *still* don't see that an entity would be
*necessarily* traumatized by the experience.

As you say, somebody will just have to try it...

: The point [..of giving the computer a phone..] is that where a robot
: could use its own grounding, this hypothetical system borrows that of a
: human.  Suppose the system is being Turing tested.  It's frequently
: stumped for an answer; that's when it calls up the human, and with his
: help works out passable answers.  I don't think that grounds the
: computer.

Ah.  I agree.  I thought you were using the phone as a source of
grounding information, not as a simple crib sheet.

: The extreme example is the teletype by which the "control human" in
: the Turing Test communicates with the judges.  Lo, TT-passing answers
: come out of the teletype!  Is the teletype grounded?

Nope, because it's using a crib sheet.  Now, if the teletype could
"store up" information, and then deal with novel situations without
further help from the control human...
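
(A toy contrast, mine and not from the thread, of what "store up" buys
you: a pure relay answers nothing on its own, while a system that keeps
what it was told needs the control human at most once per question.
Caching repeats is of course the *weakest* form of storing up; genuinely
novel situations would require generalizing over that memory, which is
the hard part.)

class Teletype:
    # Pure relay, i.e. a live crib sheet: every answer comes straight
    # from the control human, and nothing is retained.
    def __init__(self, control_human):
        self.human = control_human

    def answer(self, question):
        return self.human(question)

class StoringSystem:
    # Consults the control human at most once per question, keeps the
    # answer, and can keep answering after the human hangs up.
    def __init__(self, control_human):
        self.human = control_human
        self.memory = {}

    def answer(self, question):
        if question not in self.memory:
            self.memory[question] = self.human(question)
        return self.memory[question]

calls = []
def control_human(q):
    calls.append(q)
    return "Blue."

system = StoringSystem(control_human)
system.answer("What color is the sky?")
system.answer("What color is the sky?")
print(len(calls))   # 1: the repeat came from memory, not the human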

: How exactly do we find correspondences with the human benchmark in
: computers without senses and without physical interaction with the
: world?

But it is my claim that computers *do* have physical interaction
with the world, just at a narrower bandwidth than robots.  And even
that bandwidth limitation might not hold (or might even be reversed) if
we consider that the computer's dynamic grounding can be based on
information drawn at very high rates over a LAN or WAN.
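
(Rough numbers, mine and purely illustrative: a modest robot camera
still out-runs a 10 Mbit/s LAN in raw bytes per second, so the reversal
would take faster links, or several sources drawn on in parallel.)

# 512x512 pixels, 8 bits/pixel, 30 frames/s -- a modest robot camera
camera = 512 * 512 * 30          # bytes per second
lan = 10e6 / 8                   # 10 Mbit/s Ethernet, bytes per second
wan = 1.544e6 / 8                # 1.544 Mbit/s T1 link, bytes per second

for name, rate in [("camera", camera), ("LAN", lan), ("T1 WAN", wan)]:
    print(f"{name:8s} {rate * 8 / 1e6:7.2f} Mbit/s")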

My claim is that the computer's download and perusal of GIFs from a
library of Grand Canyon stills, used as a basis for a discussion with
somebody who has visited the actual canyon, is ultimately just as much
a physical interaction with the canyon as that human's visit (though of
lower bandwidth and with other limitations).
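
(The mechanical side of "download and peruse" is mundane enough to
sketch.  This assumes only the published GIF header layout; the sample
bytes stand in for data fetched over the LAN/WAN from some hypothetical
still library, and a real peruser would of course go on to decode the
pixels.)

import struct

def gif_dimensions(data):
    # Width and height occupy bytes 6-9 of a GIF87a/GIF89a file,
    # little-endian, in the "logical screen descriptor".
    if data[:6] not in (b"GIF87a", b"GIF89a"):
        raise ValueError("not a GIF")
    return struct.unpack("<HH", data[6:10])

# Stand-in for bytes fetched over the network from the still library.
header = b"GIF89a" + struct.pack("<HH", 640, 480)
print("still measures", gif_dimensions(header))   # (640, 480)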

:: It's basically going back to the original Turing test, but with the
:: formal "contest" style rules left vague.
: An incomprehensible move, to my mind.  The Turing Test is biased, to
: put it mildly, in favor of verbal intelligence.  The standard defense
: is to say that you can indirectly question other aspects of
: intelligence, or other phenomena; but why one can't simply test these
: aspects directly, or examine the internals of the implementation, I
: can't imagine.

My point in eliminating the "contest" style rules was precisely so that
"we can simply test these aspects directly".  But if the entity being
tested doesn't *have* (say) sight or (again say) hands, Harnad's TTT
then throws up its metaphorical hands and gives an "undecidable"
answer.  My proposal would simply proceed by adapting the "good as a
human" test to be "as good as a human with this limitation".  And if
the computer had a scanner, we could directly test visual
comprehension.  If it had a mouse, we could play "pong" with it.
Whatever.
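
(A toy rendering of the proposal, my own construction with invented
probe names: administer only the probes the entity's peripherals can
support, and benchmark against a human sharing the same limitation,
rather than returning "undecidable".)

PROBES = {
    "teletype": "conversational dialogue, classic Turing style",
    "scanner":  "visual comprehension of scanned images",
    "mouse":    "sensorimotor play, e.g. a game of pong",
}

def adapted_test(peripherals):
    # Harnad's TTT would call anything short of a full robot
    # undecidable; here the test simply shrinks to fit the entity.
    applicable = {p: PROBES[p] for p in peripherals if p in PROBES}
    benchmark = "a human limited to: " + ", ".join(sorted(applicable))
    return applicable, benchmark

probes, benchmark = adapted_test({"teletype", "scanner"})
for device in sorted(probes):
    print(f"{device:8s} -> {probes[device]}")
print("grade against", benchmark)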

Of course, this poses problems, since humans with as severe a
sensory/motor deficit as current computers are (to put it mildly) rare.
But I still find the notion worthwhile despite this.

As far as "examining the internals" goes, well... the whole point is to
help answer the question of whether these internals are sufficient to
support the inference of "intelligent" from their behavior.  Thus,
examining them seems irrelevant (other than to rule out cheating by
some equivalent of the "phone" from above, what we might call the
chess-playing-dwarf ploy, after the classic hoax of the chess-playing
"automaton" with a human hidden inside).

( Of course, classing teleoperation with a physically enclosed
  human is an appeal to the "how did you get the musicians in
  that little box" model of radio... )
--
Wayne Throop  ...!mcnc!dg-rtp!sheol!throopw


