From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!uunet!news.univie.ac.at!ai-univie!georg Wed Sep 23 16:54:12 EDT 1992
Article 6956 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!uunet!news.univie.ac.at!ai-univie!georg
From: georg@ai.univie.ac.at (Georg Dorffner)
Subject: Re: Grounding
Message-ID: <1992Sep17.172902.28419@ai.univie.ac.at>
Summary: why grounding - attempting to clarify
Sender: Georg Dorffner <georg@ai.univie.ac.at>
Nntp-Posting-Host: chicago.ai.univie.ac.at
Organization: Dept.Medical Cybernetics&Artificial Intelligence,Univ.Vienna,Austria,Europe
References: <20390@plains.NoDak.edu> <1992Sep16.203451.5162@spss.com> <1992Sep17.005009.17985@news.media.mit.edu>
Date: Thu, 17 Sep 1992 17:29:02 GMT
Lines: 132

In article <1992Sep17.005009.17985@news.media.mit.edu> minsky@media.mit.edu (Marvin Minsky) writes:
] In article <1992Sep16.203451.5162@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
] > In article <20390@plains.NoDak.edu> vender@plains.NoDak.edu (Does it matter?) writes:
] >>  In an earlier thread, it was said that a computer based AI could
] >>  not be conscious because its inputs lacked grounding in the real
] >>  world.  The question is, what if we grounded it in a computer
] >>  system (say a UNIX system on the Internet).  Granted it may
] >>  be an incomprehensible intelligence, but would it qualify
] >>  as having its inputs solidly grounded in its environment
] >>  (and thus avoid that argument)?
] >
] >What folks who talk about "grounding in the real world" mean, I believe, is
] >that concepts acquire their meaning by virtue of an immense experience
] >of direct physical interaction with the real world.  This would not be
] >the case for an AI (merely) running under Unix and/or connected to the
] >Internet, so no, such a system wouldn't be grounded.
] 
] It seems to me that this "grounding" term is causing much mischief
] because of confusing several very different kinds of dependencies that
] really need separate terms or names.  One commonsense meaning of
] grounding has an image of a direct dependency, e.g., the reason I can
] stand here (and not sink into the earth) is because the "ground"
] supports me, continuously, from each moment to the next.
]
] Another meaning is that an infant learns about the world through an
] historical process of interaction between sensory inputs (and perhaps
] motor actions, although this is probably not so essential as has been
] rumored) and an internal learning mechanism.  This is *not* a
] continuous causal relation; it may have happened in the past, but
] needs no continuation into the present or recent past.
]
] A third meaning is a more indirect form of causal "inheritance".
] Suppose I could make a (biological or functional, doesn't matter) copy
] of your brain that acts the same.  The copy never had that sort of
] interaction with the sensory world except, perhaps, in the momentary
] sense that it was copied from something in the world.  This is a
] sense of grounding so indirect that the earthy term 'ground' makes
] mischief.  And finally, there is the hypothetical AI designed by a
] committee that engineers it to have an internal model of the world
] based, say, on some heuristically competent abstract theories of
] geometry and physics.  

To me the term "grounding" has always carried a large portion of its meaning
in electrical engineering (perhaps, because I'm not native English, perhaps
because EE was my original background): If an electrical conductor, or the
device it is built in, is not _grounded_ its voltage level is undefined, 
it sort of "floats in the air" (electrical mechanic slang where I come from). 
Thus, when this term is applied to symbols or conceptual structures the image of 
"undefined", therefore meaningless, "floating" little somethings pop up in my
mind immediately. Grounding in an AI system, in this image, means
defining the role (or meaning) of a symbol in an intelligent system
by relating it to the environment the intelligent agent is acting in
(in some sense, the "ground level"). The channels through which this 
happens (corresponding to electrical circuitry through which a voltage 
level is defined) are (in a nutshell) the sensory inputs and mechanisms 
of categorization. Furthermore (in contrast to the electrical metaphor), 
those channels are adaptive and not given, introducing the importance of 
the mentioned adaptive interactions or experiences.

Of course, this analogy is much too weak to convey the importance of
the grounding problem in its entirety. I mention it here to
add another possible (partial) meaning to the list above.

] By the time we're done, there is virtually nothing in common to all
] these.  And this is why the discussions I've seen of "grounding" don't
] make any useful sense to me.
                   
Well, here's my attempt to clarify: The essence of the grounding problem, 
in my opinion, is not that you cannot consider mental states (the symbols) 
without an immediately visible causal relationship to any of the agent's 
experiences at hand. It is rather that you cannot fully _understand_ their 
roles in a particular agent's cognition without taking their grounding 
_in that particular agent_ into account. Of course, when copying someone's 
brain into a functionally equivalent computer program or the like 
(let's suppose for the moment we really could), one could no longer really 
say that this new agent "has had experiences which helped it to ground 
its symbols". That's not the point. What the symbol grounding problem 
suggests is that one cannot _merely_ copy that agent's symbols (or concepts) 
into another agent without losing some or all of their functionality. 
In other words, in order to get the new agent to make use of the symbols 
in _exactly_ the same way as the original one, one also needs to copy 
_all_ of the original sensors plus the pathways between them and the symbols
(with all their imprinted weights, to use connectionist terms) - and I guess
that's what you, too, implicitly suggested above. Experiences or interactions
with the environment don't explain to us how, at any specific moment in time,
sensors are connected to symbols, but they do explain how those pathways 
were or could be established (other than through copying).
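
To make the copying point a bit more concrete, here is a small toy sketch in
Python (my own illustration only - the two-sensor "hot"/"cold" agent and the
crude perceptron-style learning rule are arbitrary assumptions, not anyone's
proposal in this thread): an agent that has imprinted its sensor-to-symbol
weights through "experience" behaves consistently; copying only its symbol
labels into a fresh agent with arbitrary pathways does not preserve that
behaviour, while copying the weights along with the labels does.

    # Toy sketch: a "symbol" is a label whose activation is computed from
    # sensor readings through a learned weight vector (the "pathway").
    import random

    class ToyAgent:
        def __init__(self, n_sensors, symbols, weights=None):
            self.n_sensors = n_sensors
            self.symbols = list(symbols)
            # One weight per (symbol, sensor) pair; random ("floating")
            # unless copied in from another agent.
            self.weights = weights or {
                s: [random.uniform(-1, 1) for _ in range(n_sensors)]
                for s in symbols
            }

        def ground_by_experience(self, episodes, rate=0.1, epochs=50):
            """Crude delta-rule imprinting of sensor-to-symbol pathways."""
            for _ in range(epochs):
                for reading, label in episodes:
                    for s in self.symbols:
                        target = 1.0 if s == label else 0.0
                        error = target - self.activation(s, reading)
                        for i in range(self.n_sensors):
                            self.weights[s][i] += rate * error * reading[i]

        def activation(self, symbol, reading):
            return sum(w * x for w, x in zip(self.weights[symbol], reading))

        def interpret(self, reading):
            return max(self.symbols, key=lambda s: self.activation(s, reading))

    # The original agent grounds "hot"/"cold" in two toy sensor channels.
    episodes = [((0.9, 0.1), "hot"), ((0.8, 0.2), "hot"),
                ((0.1, 0.9), "cold"), ((0.2, 0.8), "cold")]
    original = ToyAgent(2, ["hot", "cold"])
    original.ground_by_experience(episodes)

    # Copying only the symbols: the new agent's pathways are arbitrary,
    # so the same reading need not evoke the same symbol.
    symbols_only = ToyAgent(2, original.symbols)

    # Copying symbols *and* the imprinted weights preserves their roles.
    full_copy = ToyAgent(2, original.symbols,
                         weights={s: w[:] for s, w in original.weights.items()})

    probe = (0.85, 0.15)
    print("original:    ", original.interpret(probe))     # "hot"
    print("symbols only:", symbols_only.interpret(probe)) # "hot" or "cold"
    print("full copy:   ", full_copy.interpret(probe))    # "hot", as original

Of course this caricatures "experience" into a handful of labelled readings;
the only point is that the symbols' roles live in the pathways, not in the
labels themselves.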

So what does that tell us for AI? It does not tell us that we could never
ever achieve a fully artificially intelligent system without caring about
an agent's experiences and the resulting grounding. However, at the moment
I would not know of any method - other than resorting in large part to trial 
and error - for building in detail those pathways between the sensors 
and the symbols. What the story does tell us is that viewing symbols as 
"to be grounded" through mechanisms that result from the agent's 
experiences will get us much farther in designing (or understanding) such
artificial agents. Thus the symbol grounding problem does not point to
a theoretical impossibility of building AI machines without an explicit
account of grounding (implicitly, however, every successful machine will
have to account for it). Instead it gives us directions toward plausible and
efficient design of artificial cognitive systems using symbols. Those
directions tell us that we should concentrate on agents that - like us -
can see, hear and feel the world and act in it, and in this way acquire
concepts and meaningful symbols. And they tell us that we should not approach
symbols and conceptual knowledge divorced from any perceiving and acting agent,
because if we do, we will have the same problems as when trying
to copy (only) the symbols from one agent into another (as suggested above).

]  Too bad that Philosophy, so far as I
] know, has not evolved good terms for the necessary distinctions.
                                          
I'm not a philosopher, so probably I haven't helped in defining terms. But
I hope to have at least pointed out some "useful sense" of grounding.
                                                          
-------------------------------------------------------------------------------
Georg Dorffner                                            georg@ai.univie.ac.at

Dept. of Medical Cybernetics 			Austrian Research Institute for
and Artificial Intelligence			 	Artificial Intelligence
University of Vienna					        Schottengasse 3
Freyung 6/2                                              A-1010 Vienna, Austria
A-1010 Vienna, Austria
-------------------------------------------------------------------------------