From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uwm.edu!rpi!scott.skidmore.edu!psinntp!psinntp!dg-rtp!sheol!throopw Tue Nov 24 10:51:28 EST 1992
Article 7591 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uwm.edu!rpi!scott.skidmore.edu!psinntp!psinntp!dg-rtp!sheol!throopw
From: throopw@sheol.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: grounding and the entity/environment boundary
Summary: is it live, or is it memorex?
Message-ID: <721458261@sheol.UUCP>
Date: 11 Nov 92 02:11:43 GMT
References: <1992Nov10.022939.94607@Cookie.secapl.com>
Lines: 77

: From: frank@Cookie.secapl.com (Frank Adams)
: Message-ID: <1992Nov10.022939.94607@Cookie.secapl.com>
: (1) There is, in a purely information-theoretic sense, much more
: information in actually tasting marijuana than in anything or
: everything you have been told about it.  [...]
: (2) Humans are not so constructed as to be able to take verbal
: descriptions, however complete and accurate, and translate them into
: the equivalent memory structures generated for sensory experiences.
: So even if somebody could tell you exactly what it tasted like, you
: couldn't fully understand the explanation.  (This *is* a problem with
: your causal properties; one which an AI need not share.)

I largely agree with Frank, but would like to add some detail, and
use that detail to approach this from a different perspective.

What occurs to me is that humans *can* do this to a much greater
degree than they do.  They can circumvent the bandwidth issue in (1)
above by being very precise and verbose in what they say (some
bandwidth problems will ultimately remain, but bear with me).  They
can circumvent the conversion-to-sense-impressions issue in (2) by
turning the descriptions back into sense impressions through
manipulation of their environment.  In essence, I propose an extension
of the "finding out what it's like to be blind by spending many hours
or days with a blindfold on" technique.

For example, when I was discussing a computer downloading a GIF (which
is, from one point of view, a symbolic description of a visual scene),
it occurred to me that a human could do much the same thing.  Let's say
that person A visits the Grand Canyon, and while there, phones person
B.  As A and B talk, B wants to know what A sees.  Well, A could erect
a grid, use a highly directional photometer, and read off to B the
values for pixels in the scene A is watching.  B writes them all down,
and then gets out a paint-by-numbers kit and turns the symbolic
description back into something that can be processed through the
retina and the various visual centers of the brain.

Various human-suited optimizations could improve the bandwidth
situation, such as describing polygons instead of pixels (to better
support the paint-by-numbers paradigm), or even describing the scene
as a series of paint-strokes.
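The phone-call protocol above can be sketched in code.  To be clear, the
particulars here (a 4x4 measurement grid, an 8-paint "kit", and the
function names) are illustrative assumptions of this sketch, not
anything from the original discussion:

```python
# A sketch of the phone-call protocol: person A turns a scene into a
# purely symbolic description (one paint number per grid cell), and
# person B reconstructs a viewable "painting" from those symbols alone.
# The scene, the 4x4 grid, and the 8-level palette are assumptions.

GRID = 4          # A meters the scene as a GRID x GRID array of cells
LEVELS = 8        # B's paint-by-numbers kit has this many paints

def a_reads_off(scene):
    """A measures each grid cell's brightness (0.0-1.0) and dictates
    a flat list of quantized paint numbers over the phone."""
    return [min(int(b * LEVELS), LEVELS - 1) for row in scene for b in row]

def b_paints(numbers):
    """B turns the dictated numbers back into a 2-D grid of
    approximate brightnesses -- something that can be looked at."""
    return [[(n + 0.5) / LEVELS for n in numbers[i * GRID:(i + 1) * GRID]]
            for i in range(GRID)]

# A's view: brightness measurements for a 4x4 grid over the scene.
scene = [[0.0, 0.1, 0.2, 0.3],
         [0.4, 0.5, 0.6, 0.7],
         [0.8, 0.9, 1.0, 0.95],
         [0.2, 0.2, 0.8, 0.8]]

dictated = a_reads_off(scene)    # the symbolic description B writes down
painting = b_paints(dictated)    # B's paint-by-numbers reconstruction

# Quantization loss: B's painting differs from A's scene by at most
# one paint level (1/LEVELS) in any cell.
worst = max(abs(p - s) for prow, srow in zip(painting, scene)
            for p, s in zip(prow, srow))
print(dictated)
print(worst)
```

The bandwidth point is visible directly: the dictated list is GRID*GRID
small integers, and making the reconstruction more faithful means more
cells or more paint levels, i.e. more talking.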

For the marijuana case, at least in theory, the physiological reactions
of A could be recorded, sent to B, and B could reproduce them (maybe
via biofeedback, maybe via neurochemicals, maybe via electrical
stimulation of the brain).

There's also an interesting issue around the phrase "even if somebody
could tell you exactly what it tasted like" (or what the scene looked
like, or what the city sounds like): namely, the "level" of the
description.  Humans are used to the phrase meaning something like
"tell me how you reacted to the scene at a very high level", as in:
did the taste make you feel good or bad, did the scene make you feel
small or cold, did the sound keep you awake or not, and so on.
For humans to actually describe the taste, scene, or sound itself,
instead of describing vague reactions to it, is VERY rare.  (Though
we can find counterexamples, such as musicians discussing a musical
passage, perfumers discussing odor, the "objective" taste jargon used
by taste testers of new food products, or photographers and artists
discussing a scene.  Perhaps, to a certain extent, police descriptions
of suspects.)

The point here is that people don't normally describe things at a low
enough level to allow the recipient to even *imagine* what it would be
like to experience these sensations.  I think such precise descriptions
would allow people to symbolically ground each other far better than
they do now, but it isn't clear to me whether it's practical.  

But practicality aside, it also seems possible for such a scheme to
"backfire", and have this "shared grounding" end up being processed
neurologically by the language centers rather than the visual (or
whatnot) centers.  It might well end up altering the way both parties
to such a communication view the whole notion of groundedness in the
area under discussion.  That is, rather than "sharing groundedness" at
a more fundamental level neurologically, they'll just export that sort
of groundedness into the linguistic realm.
--
Wayne Throop  ...!mcnc!dg-rtp!sheol!throopw


