From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!secapl!Cookie!frank Fri Oct 30 15:17:49 EST 1992
Article 7410 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!secapl!Cookie!frank
From: frank@Cookie.secapl.com (Frank Adams)
Subject: Re: Grounding
Message-ID: <1992Oct27.210053.60670@Cookie.secapl.com>
Date: Tue, 27 Oct 1992 21:00:53 GMT
References: <1992Oct5.195433.9320@spss.com> <718611244@sheol.UUCP> <26863@castle.ed.ac.uk>
Organization: Security APL, Inc.
Lines: 45

In article <26863@castle.ed.ac.uk> cam@castle.ed.ac.uk (Chris Malcolm) writes:
>It would be possible for me to build a robot with fixed knowledge
>about the world which was -- to begin with -- quite grounded: it would
>behave appropriately, its internal symbols would properly refer, etc..
>But because its knowledge can't change, yet the world does, and its
>sensors and perceptions inevitably make mistakes sometimes, mismatches
>between its beliefs and the world will gradually build up. It will
>gradually lose touch with reality. It will gradually become less and
>less grounded.
>
>I don't like calling such a creature "grounded". I would rather say
>that it is, by coincidence (an intended coincidence), capable of
>pretending to be grounded for a while. In other words, I would prefer
>not to call "grounded" anything which lacks the capacity to maintain
>its state of groundedness.
>
>That's where history comes in: things which are grounded in the sense
>of being capable of maintaining a state of groundedness are creatures
>designed to have histories. They don't need to have a history; but
>they must be capable of having a history -- of development and
>adaptation. It is probably the case that a creature capable of this kind of
>self-calibrating grounding can far more easily become grounded by
>mucking around in the world than by being fed a ready-grounded
>database and inference engine.

Not necessarily.  If you want to make and sell a self-aware robot model to
perform some task (assuming that this technology has become possible), the
easiest way to do it is probably to get one prototype grounded in this way,
and then copy its knowledge into each of the others.

>If that is the case, it is most likely
>that anything we encounter which is grounded in my self-calibrating
>sense will already have a history, since that would be the easiest way
>for it to become as it is.
>
>My contention is that restricting the concept of "grounded" to having
>this kind of self-calibrating groundedness avoids a lot of the
>paradoxes of a concept of "grounded" which is just a property of a
>system, rather than a combination of property and capacity to maintain
>the property.

One problem with this is that it doesn't give us any good way to talk about
the underlying property on its own.  With the broader usage, on the other
hand, your property is at worst "grounded with the ability to maintain it"
-- which can easily be abbreviated to "dynamically grounded".


