From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!bloom-beacon!eru.mt.luth.se!lunic!sunic2!mcsun!uknet!edcastle!cam Tue Jun  9 10:06:44 EDT 1992
Article 6063 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!bloom-beacon!eru.mt.luth.se!lunic!sunic2!mcsun!uknet!edcastle!cam
From: cam@castle.ed.ac.uk (Chris Malcolm)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Virtual vs. Real
Message-ID: <22243@castle.ed.ac.uk>
Date: 3 Jun 92 18:09:19 GMT
References: <22133@castle.ed.ac.uk> <60758@aurs01.UUCP>
Organization: Edinburgh University
Lines: 60

In article <60758@aurs01.UUCP> throop@aurs01.UUCP (Wayne Throop) writes:
>-> cam@castle.ed.ac.uk (Chris Malcolm)

>-> Grounding is not just an initial calibration. The world changes,
>-> sensors age, robots get fatter and slower (if you see what I mean :-).
>-> In other words, grounding needs a continuous process of tracking, of
>-> adaptation. If you remove that, and simply depend on an initial
>-> calibration, you have a system which simply happens to be grounded
>-> now, and with luck will still be grounded tomorrow, but is destined to
>-> drift gradually out of registration with the world. An unguided
>-> missile, a ballistic rather than guided trajectory.

>Actually, I like this notion of grounding quite a bit.  But computers
>are fully grounded in this sense, are they not?

Yes and no. Computers _are_ fully grounded with respect to those
operations which they know how to do: perform sundry arithmetic and
logical operations on binary numbers, plus some control accessories.
There _is_ perfectly good inherent semantics (rephrase to your taste)
here. Consequently -- in this very limited sense -- there _is_ some
semantics in a computer running an AI program, and in Searle running
the Chinese Room program. 

This is the essence of the "English reply" to the Chinese Room
argument: Searle _does_ understand how to interpret the sub-English
programming language he executes. Given this small but significant
leakage of
semantics into the supposedly meaning-free Chinese Room, the English
reply then suggests that maybe this can _somehow_ be used to bootstrap
up some more semantics. It goes on to suggest that maybe this
"somehow" is no more than designing a good old-fashioned AI program.

Now, if you think that the fact that computers are grounded with
respect to their elementary operations leads fairly straightforwardly
to the grounding of an AI program running in a computer, then you
ipso facto not only accept the English reply but also consider the
"somehow" above trivial. (I mention this to give you an opportunity
to change your mind :-)

IMHO, however, "groundedness" is not a property which is automatically
inherited from one level to another of the layers of virtual machinery
necessary to construct a (pseudo)intelligent device: it depends too
much on purposes, which are level-specific.
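One small concrete illustration of that level-specificity (a sketch
in Python; the particular bit pattern is my own choice, purely for
illustration): the machine level is perfectly grounded in that it
stores and manipulates a bit pattern correctly, yet the very same
bits mean quite different things under different virtual-machine
interpretations, and nothing at the machine level settles which
interpretation is the right one.

```python
import struct

# The machine level is "grounded": this bit pattern is stored and
# manipulated correctly regardless of what it is taken to mean.
bits = struct.pack("<I", 0x40490FDB)  # a fixed 32-bit pattern

# Two higher-level "virtual machine" readings of the same bits:
as_int = struct.unpack("<i", bits)[0]    # read as a signed integer
as_float = struct.unpack("<f", bits)[0]  # read as an IEEE-754 float

print(as_int)    # 1078530011
print(as_float)  # approximately 3.1415927 -- pi, on this reading
```

The hardware's handling of the bits is equally correct on either
reading; which reading is in force depends on the purposes of the
level above, not on anything the bits themselves do.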

For example, let us imagine a completely demented individual, still
capable of feeding himself and crossing the road without getting run
over, but living in a paranoid fantasy world which renders ordinary
communication impossible. I think it quite appropriate to say of such
a person that they are still grounded with respect to the physical
world, but have lost touch -- lost grounding -- with respect to the
social world.

But if groundedness is level-specific in descriptions of the layers of
virtual machinery comprising a creature, then the English reply at
best needs considerable explication of its "somehow", and the
groundedness of the atomic operations of a computer does no more than
guarantee that it executes an algorithm properly.
-- 
Chris Malcolm    cam@uk.ac.ed.aifh          +44 (0)31 650 3085
Department of Artificial Intelligence,    Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205


