From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!gatech!mcnc!aurs01!cam Tue Jun  9 10:06:59 EDT 1992
Article 6084 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!gatech!mcnc!aurs01!cam
From: cam@castle.ed.ac.uk (Chris Malcolm)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Virtual vs. Real
Message-ID: <60780@aurs01.UUCP>
Date: 4 Jun 92 16:05:56 GMT
References: <22133@castle.ed.ac.uk> <60758@aurs01.UUCP> <22243@castle.ed.ac.uk>
Sender: news@aurs01.UUCP
Lines: 61

>,>>> cam@castle.ed.ac.uk (Chris Malcolm) writes:
>> throop@aurs01.UUCP (Wayne Throop) writes:

>>> In other words, grounding needs a continuous process of tracking, of
>>> adaptation. If you remove that, and simply depend on an initial
>>> calibration, you have a system which simply happens to be grounded
>>> now, and with luck will still be grounded tomorrow, but is destined to
>>> drift gradually out of registration with the world. An unguided
>>> missile, a ballistic rather than guided trajectory.

>>Actually, I like this notion of grounding quite a bit.  But computers
>>are fully grounded in this sense, are they not?

> Yes and no. Computers _are_ fully grounded with respect to those
> operations which they know how to do: [..but..], 
> if you think that the fact that computers are grounded with
> respect to their elementary operations leads fairly straightforwardly
> to the grounding of an AI program running in a computer,

Heavens no, anything BUT straightforward.  Eliza shows that the
"straightforward" path fails, I think.  But Eliza's use of
English symbols is ungrounded, not because computers CANNOT
ground symbols in their basic operations, but because the Eliza
program DOES NOT do so.
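To make the point concrete: an Eliza-like response is pure pattern
substitution over the input tokens, with no link to anything the machine
can sense or do.  A toy sketch of my own (illustrative only, not
Weizenbaum's actual program):

```python
import re

# Toy Eliza-style rules: purely syntactic rewriting of input tokens.
# Words like "mother" are never connected to anything outside the text,
# which is exactly the sense in which the symbols are ungrounded.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why are you {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(line):
    """Return the first matching rewrite, or a stock reply."""
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(*m.groups())
    return "Please go on."

print(respond("I am worried about my job"))
# prints "Why are you worried about my job?"
```

The program "knows" nothing of worry or jobs; it shuffles the tokens it
was given, which is why its symbols stay ungrounded however fluent the
output looks.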

An AI program that grounds its high-level symbols would not be
"straightforward" or "trivial" any more than the process by
which a human does this (e.g., the process of raising the arm
and extending the finger and uttering "By `foo' I mean that
thing right there!").  If anybody thinks the processes involved
in having a human do that are "trivial"... well.

But I don't see anything *inherently* different between a computer
grounding symbols by interacting with the world via keyboard,
Logitech FotoMan (or ScanMan), screen, printer, bell, or
mouse-maze robot, and a human pointing at an object.

> IMHO, however, "groundedness" is not a property which is automatically
> inherited from one level to another of the layers of virtual machinery
> necessary to construct a (pseudo)intelligent device: it depends too
> much on purposes, which are level-specific.

Yes, I very much agree.  But groundedness on level N can be turned into
groundedness on level N+epsilon (because *humans* turn the level of
muscle twitches and cone-and-rod firings into sense at higher levels),
and I see no particular reason to suppose that the processes of doing
so can't be computational.  Clearly, Eliza-like processes of doing so
aren't adequate, but it is far less clear that (say) Eurisko-like
processes are also inadequate.  (By "Eurisko-like" I mean processes
that keep empirical data and extensive inter-symbol relationships
around "with" each symbol; Eurisko itself is nowadays, I think,
thought to be unpromising.)
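A caricature of that "data kept with each symbol" idea, in code (the
names and structure here are my own illustration, not Eurisko's actual
representation):

```python
# Hypothetical sketch: a symbol that carries empirical data and
# inter-symbol relationships "with" it, rather than being a bare token
# as in Eliza.  Illustrative names only; not taken from Eurisko.

class GroundedSymbol:
    def __init__(self, name):
        self.name = name
        self.observations = []   # raw sensor-level evidence for this symbol
        self.relations = {}      # relation label -> set of other symbol names

    def observe(self, datum):
        """Attach a piece of empirical data to this symbol."""
        self.observations.append(datum)

    def relate(self, label, other):
        """Record an inter-symbol relationship, e.g. 'has-color'."""
        self.relations.setdefault(label, set()).add(other.name)

# Unlike a bare token, these symbols accumulate evidence and structure.
red = GroundedSymbol("red")
apple = GroundedSymbol("apple")
red.observe({"sensor": "camera", "rgb": (220, 30, 40)})
apple.relate("has-color", red)
```

Whether keeping such structure around is *sufficient* for grounding is
of course exactly the open question; the sketch only shows what "with"
each symbol might mean mechanically.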

> the groundedness of the atomic operations of a computer does no more than
> guarantee it executing an algorithm properly.

Agreed.  Just as the "groundedness" of the humaniform interface does
no more than guarantee that humans follow the laws of physics,
chemistry, and so on.

Wayne Throop       ...!mcnc!aurgate!throop
