From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!olivea!uunet!dtix!darwin.sura.net!wupost!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky Wed Sep 16 21:23:50 EDT 1992
Article 6947 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!olivea!uunet!dtix!darwin.sura.net!wupost!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding
Message-ID: <1992Sep17.005009.17985@news.media.mit.edu>
Date: 17 Sep 92 00:50:09 GMT
References: <BuDr7y.1LA@usenet.ucs.indiana.edu> <20390@plains.NoDak.edu> <1992Sep16.203451.5162@spss.com>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
Lines: 45
Cc: minsky

In article <1992Sep16.203451.5162@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <20390@plains.NoDak.edu> vender@plains.NoDak.edu (Does it matter?) writes:
>>  In an earlier thread, it was said that a computer based AI could
>>  not be conscious because its inputs lacked grounding in the real
>>  world.  The question is, what if we grounded it in a computer
>>  system (say a UNIX system on the Internet).  Granted it may
>>  be an incomprehensible intelligence, but would it qualify
>>  as having its inputs solidly grounded in its environment
>>  (and thus avoid that argument)?
>
>What folks who talk about "grounding in the real world" mean, I believe, is
>that concepts acquire their meaning by virtue of an immense experience
>of direct physical interaction with the real world.  This would not be
>the case for an AI (merely) running under Unix and/or connected to the
>Internet, so no, such a system wouldn't be grounded.

It seems to me that this "grounding" term is causing much mischief
because of confusing several very different kinds of dependencies that
really need separate terms or names.  One commonsense meaning of
grounding has an image of a direct dependency, e.g., the reason I can
stand here (and not sink into the earth) is because the "ground"
supports me, continuously, from each moment to the next.

Another meaning is that an infant learns about the world through an
historical process of interaction between sensory inputs (and perhaps
motor actions, although this is probably not so essential as has been
rumored) and an internal learning mechanism.  This is *not* a
continuous causal relation; it may have happened in the past, but
needs no continuation into the present or recent past.

A third meaning is a more indirect form of causal "inheritance".
Suppose I could make a (biological or functional, doesn't matter) copy
of your brain that acts the same.  The copy never had that sort of
interaction with the sensory world except, perhaps, in the momentary
sense that it was copied from something in the world.  This is a
sense of grounding so indirect that the earthy term 'ground' makes
mischief.  And finally, there is the hypothetical AI designed by a
committee that engineers it to have an internal model of the world
based, say, on some heuristically competent abstract theories of
geometry and physics.

By the time we're done, there is virtually nothing in common among all
of these.  And this is why the discussions I've seen of "grounding" don't
make any useful sense to me.  Too bad that Philosophy, so far as I
know, has not evolved good terms for the necessary distinctions.


