Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!pipex!ibmpcug!ibmpcug!slxsys!uknet!edcastle!cam
From: cam@castle.ed.ac.uk (Chris Malcolm)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Virtual vs. Real
Message-ID: <21986@castle.ed.ac.uk>
Date: 28 May 92 21:11:56 GMT
References: <1992May25.214006.29965@Princeton.EDU> <1992May26.022413.14151@mp.cs.niu.edu> <1992May26.031148.27458@news.media.mit.edu>
Organization: Edinburgh University
Lines: 129

In article <1992May26.031148.27458@news.media.mit.edu> minsky@media.mit.edu (Marvin Minsky) writes:

>SO, RATHER THAN EMPHASIZE "GROUNDING IN THE PHYSICAL WORLD" -- OR
>ARGUE WHETHER SIMULATION CAN EVER BE MORE THAN A PALE APPROXIMATION --
>LET'S TAKE THE OPPOSITE VIEW.  THE WORLD IS BASICALLY ILL-STRUCTURED
>DRECK. WHATEVER GROUNDING "IS" (AND I DOUBT THAT THIS CONCEPT HAS
>MUCH VALUE) IT WILL TURN OUT IN THE END TO BE A SECOND RATE WAY TO
>LEARN TO 'UNDERSTAND' THE WORLD.

"Symbol grounding", like "causal powers" :-) is in danger of becoming so
subtle and sophisticated a concept that the problem it was meant to
address becomes obscured by attempts to define the auxiliary concept.

The basic problem which "symbol grounding" addresses is that mentioned
by Newell and Simon in their explanation of the Physical Symbol System
Hypothesis in "Computer Science as Empirical Inquiry": the fact that
symbols _mean_ something, point to something, or, as N&S put it,
_designate_.

    Designation: An expression designates an object if, given the
    expression, the system can either affect the object itself, or
    behave in ways depending on the object.

One can quibble with the adequacy of the definition, but there is no
doubt about what they intend :-) [Of course, N&S were far from the first
to be concerned with this problem, e.g. the philosopher Brentano
(1838-1917).]
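
As a toy illustration (mine, not N&S's code): in program terms, a token
designates an object for a system only if holding the token lets the
system act on, or condition its behaviour on, that object.

# Toy sketch of N&S-style designation (my illustration, not theirs):
# a token designates an object for the agent only if, given the token,
# the agent can affect the object or behave in ways depending on it.

class Referent:
    def __init__(self, name, observe=None, act=None):
        self.name = name
        self.observe = observe      # callable returning the object's state
        self.act = act              # callable that affects the object

class Agent:
    def __init__(self):
        self.bindings = {}          # token -> Referent

    def attach(self, token, referent):
        self.bindings[token] = referent

    def designates(self, token):
        # a bare dictionary entry is not enough; what matters is having
        # some observe/act hook into the thing itself
        r = self.bindings.get(token)
        return r is not None and (r.observe is not None or r.act is not None)

The point of the caricature is that the designation is carried by the
observe/act hooks into the world, not by the token or the lookup table.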

"Symbol grounding" refers to the problem of how it can be arranged that
the symbols get securely tied to their real-world referents (we can knit
up abstract concepts later once this basic grounding is established).
Almost all discussion of this treats it as two problems. The first
problem is how you make the initial attachment, e.g., "see, that
there is a DOG". The second problem is maintaining the concept, which
involves correcting mistakes, refining it, etc. If one contrives an
allegedly AI system in which the first problem is solved by the designer
building it in, and the second by switching the system off before it
drifts too far out of registration with reality, the groundedness and
hence "intelligence" are quite properly rather doubtful.  Very awkward
philosophical problems arise because of the ever-present possibility of
mistakes -- can one properly claim that one's concept of "dog" is
properly grounded just in that moment when one mistakes a bundle of
clothing in the twilight for a sleeping dog? The whole idea threatens to
evaporate in a vain quest for the Best Current Approximation to the One
True Model of the Universe and similar chimeras. These problems arise
because of the state-based viewpoint our traditional symbolic computer
systems encourage.
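
To caricature the two problems (a sketch under my own assumptions, not
anyone's published algorithm): the first attachment is a single event,
but registration with the world has to be maintained by a loop that
never stops.

# Caricature of the two grounding problems (my own sketch, nothing more):
# the constructor does the one-off first attachment; stay_grounded() is
# the ongoing loop that keeps the symbol registered with a drifting world.

class GroundedSymbol:
    def __init__(self, token, exemplar):
        self.token = token
        self.exemplar = exemplar            # problem 1: the first attachment

    def refers_to(self, percept, tolerance=1.0):
        return abs(percept - self.exemplar) < tolerance

    def correct(self, percept):
        # problem 2: maintenance -- pull the concept back towards what the
        # world actually presents, mistakes and all
        self.exemplar = 0.9 * self.exemplar + 0.1 * percept

def stay_grounded(symbol, sense, steps):
    for _ in range(steps):
        percept = sense()                   # continuous monitoring
        if not symbol.refers_to(percept):
            symbol.correct(percept)         # refine / repair the attachment
    # stop running this loop ("switch the system off") and the symbol
    # slowly drifts out of registration with reality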

The important point is that a grounded symbol system, inherent
semantics, intentionality, or whatever one likes to call it, is
exhibited by an agent using its symbols to achieve behaviour related to
the things symbolised. The behaviour of any biological agent complex
enough for us to harbour a suspicion that there might be "someone at
home" is sufficently complex that it is maintained by large numbers of
control systems continuously monitoring the environment and the internal
state of the creature. For example, while typing this, I maintain my
vertical posture partly by watching the walls of the room out of the
corner of my eyes -- all unconsciously -- as can be demonstrated by
arranging to move them, whereupon I might become sufficiently
disorientated to miss the keys I am trying to type; I might even fall
off my chair.

In other words, I'm suggesting that mentalistic terminology can only
appropriately be applied to an agent which -- quite apart from the
internal symbolic machinery -- also has purposes, and is engaged in
interacting with an environment in which it has a history. "Wind-up
toys", as Brooks calls them, don't qualify. Of course I certainly don't
want to imply that this is _all_ there is to having intentionality :-);
I merely wish to suggest that _at_least_ this much is required.

Just as you could not properly call a computer without appropriate
connection to a chess board (or some suitable simulacrum) a chess
player, so you cannot say of anything short of a symbol-using agent, in
an environment, with a history, and purposes, that it has a properly
grounded symbol system. Because it has a history, and purposes, and is
controlling its behaviour and its perceptions, it is, over time,
maintaining appropriate connections between its symbols and _its_ world.
Making mistakes now and then doesn't matter at all; in fact it is one of
the hallmarks of an efficient system that it jumps to the safest
conclusions as fast as possible in order to stay alive, and hang the
occasional retrospectively foolish fear.

For convenience let us call such an agent a "situated agent"; so, mental
terminology (and symbol grounding) only applies to situated agents.

Consequently (by our usual intuitions) the Chinese Room fails, as does a
bottled brain, and -- as Searle asserts -- any (running) computer
program you like. The missing magic ingredient is situatedness:
being historically knitted into its world by interactive processes (that
are still active). Just as death is a complex process, and the precise
line between life and death hard to define, yet this poses no general
problem (pedants excepted) for the concept of "life", so the fact that a
situated (and therefore symbolically grounded) symbol-using agent which
has been excised from its environment will suffer from the progressive
decay of its groundedness (and therefore no secure line can be drawn
between grounded and ungrounded here) offers no threat to the notion.

Whether the world in which the agent is situated is the "real" physical
world, a simulation of it, or something I invented last year from old
beer-cans and logic chips, doesn't matter a damn, nor do the
transducers, the analogue/digital question, etc.: the nub which all
these are reaching for is "situatedness". If we call this being *alive*
(as distinct from being "alive", i.e. one of God's own creatures) then
there is no reason why appropriately constituted and situated computer
viruses could not be *alive*, nor indeed why our own minds could not
host *alive* "parasitic" memetic machines.

[I have deliberately postponed the questions of *alive*="alive"? and
whether "internal symbolic machinery" is an appropriate description (at
one level) of (part of) a mind.]

This leaves AI as something you could install in a (situated) robot, but
never in a computer. AI could exist in _part_ of a computer, however: if
the computer were divided into an appropriately connected world and
agent, then we would (as is our misleading habit) call the agent
"artificially intelligent". Misleading, because strictly speaking
"mind" is not a property of a creature alone, but only of the
dynamic interknitted complex of creature/world. We got into this silly
habit because we take our world for granted just as does a fish the
water. 
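
As a rough sketch of that division (my own toy code, nothing more), the
thing we would be tempted to call "intelligent" is the closed loop
between the two halves, not the agent half on its own:

# Toy sketch (mine) of a computer divided into a "world" half and an
# "agent" half, coupled in a closed loop.  Whatever intelligence there is
# belongs to the loop, not to the agent object considered in isolation.

class World:
    def __init__(self):
        self.temperature = 30.0

    def sense(self):
        return self.temperature

    def apply(self, action):
        # the agent's actions change the very world it then perceives
        self.temperature += -1.0 if action == "cool" else 0.5

class Agent:
    def act(self, percept):
        return "cool" if percept > 25.0 else "idle"

def run(world, agent, steps=100):
    for _ in range(steps):
        percept = world.sense()
        action = agent.act(percept)
        world.apply(action)                 # history accumulates in the coupling

if __name__ == "__main__":
    run(World(), Agent())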

The TTT has this going for it: it recognises that it is impossible to
contrive an isolated examination which can determine whether an
allegedly intelligent system actually understands anything at all. As
students know, one can always fool the examiner :-)
-- 
Chris Malcolm    cam@uk.ac.ed.aifh          +44 (0)31 650 3085
Department of Artificial Intelligence,    Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205


