From newshub.ccs.yorku.ca!torn!utcsri!rpi!think.com!ames!agate!iat.holonet.net!uupsi!psinntp!dg-rtp!sheol!throopw Wed Oct 14 14:58:26 EDT 1992
Article 7192 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!think.com!ames!agate!iat.holonet.net!uupsi!psinntp!dg-rtp!sheol!throopw
From: throopw@sheol.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding
Message-ID: <718611244@sheol.UUCP>
Date: 09 Oct 92 03:20:57 GMT
References: <26604@castle.ed.ac.uk> <1asq47INNr9o@smaug.West.Sun.COM> <1992Oct5.195433.9320@spss.com>
Lines: 214

: From: cam@castle.ed.ac.uk (Chris Malcolm)
: Message-ID: <26604@castle.ed.ac.uk>
: a history of sensory input is not
: enough to acquire and maintain a grounded symbol system. It is crucial
: that the sensory input was affected by effector output, in other
: words, developing the amalgam of sensing and action we call
: "behaviour" is the crucial thing. Because behaviour is an amalgam of
: sensing and action, to discuss grounding as though it were a fait
: accompli (rather than a continuous process), and largely in terms of
: how sensory input (rather than behaviour) is processed, leads to a
: rather fragile concept of grounding

I agree that this grounding stuff is (part of) a continuous process
involving both sense perception and motor action, and not just a static
thing.  The "causal structure" I talked about before IS a static thing,
and is related to "being grounded" in much the same way as an algorithm
is related to a process.  (In fact, it is very much the heart of the
strong AI position that the "causal structure" involved IS representable
as an algorithm, and that "being grounded" IS a computable process.)
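
To be concrete about that algorithm/process distinction (with a toy
of my own, nothing deeper):

    #include <stdio.h>

    /* This function text is static -- an "algorithm", analogous to
     * the "causal structure".  No squaring happens merely because
     * the text exists. */
    int square(int x) { return x * x; }

    int main(void)
    {
        /* Only this call, executed at a particular moment on a
         * particular input, is a "process" -- analogous to "being
         * grounded". */
        printf("%d\n", square(7));   /* prints 49 */
        return 0;
    }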

Now, with that said, I still don't see that the history of how that
algorithm came to be, or even the details of how the process that
realizes the algorithm was instantiated, have anything to do with
whether the process itself "is grounded".

For example, is an aircraft pilot who has trained in a simulator
"grounded" in a real plane? I'd have to say yes (at least, in so far as
the simulated experience matched real experience).  The pilot can fly
the plane, and when the pilot says "look at the airspeed indicator", the
"airspeed indicator" symbol used is grounded. 

Now obviously, that pilot acquired the piloting capability by interaction
of senses and motor acts, so the example is weak.  As far as I know,
there are no strong examples that illustrate what I mean here.  To see
what I mean, one would have to take a pretty far-out thought experiment,
such as assembling a pilot atom by atom (or maybe cell by cell),
building the nervous system (and musculature and everything else...  I'm
not talking brain-in-vat here) to be in the state it would have been in
after pilot training.  I'd still claim that the resulting pilot is
grounded.  (Well, no, that pilot can fly...  or maybe the pilot *should*
be grounded because of not being grounded...  but I digress.)

Now, because of the far-out nature of the thought experiment, one can
argue endlessly over whether the resulting construct "is a pilot" in any
useful sense, or is just a "simulated pilot", or whether it is actually
possible to perform the construction, and on and on and on.  But I see
no credible reason why sufficiently advanced technology could not at
least perform the construction, and the construct would really have
skill equivalent to a "real" pilot's.  So I really think we are left
with the questions of whether this construct is a pilot, and, when the
construct says "hand me that flight plan over there", whether the symbol
"flight plan" is merely squeaks and squawks, or whether the construct
does (or can) mean anything by the utterance.

As far as I'm concerned, the construct IS a pilot, and means the
usual things by the utterance.  The history (remember, this construct
was an unresponsive blob of goo moments ago) is irrelevant.

: From: dab@ism.isc.com (Dave Butterfield)
: Message-ID: <1asq47INNr9o@smaug.West.Sun.COM>
:: From: cam@castle.ed.ac.uk (Chris Malcolm)
:: it is a commonplace illusion among philosophy students that they _are_
:: entirely symbolic in their cognitive functions, e.g., the common
:: belief that it is impossible to think without thinking in (something
:: like) words.
: Why do you say that this is an illusion?  A symbol is anything
: that represents meaning.  What sort of thinking do you suppose
: occurs *without* the use of symbols?

Well, from my perspective, symbols are in the eye of the beholder, so
even with (say) kinesthetic thinking of the sort that might go on as a
basketball player leaps and releases the ball on a trajectory intended
to go through the hoop, one *could* say that symbols are involved, that
certain states of the neuro-muscular system are "symbols" that
"represent" the ball, the hoop, etc.  But certainly, nothing like
"words" are used there. 

On the other hand, I imagine most people would just say that there is no
thinking going on there, and dismiss anything not using "words" as "not
thinking".  If the physical nature of the activity prejudices people
against calling the calculation involved in this example "thinking",
then I can only offer my own subjective experience.  My most productive
thinking occurs in short, sharp bursts of insight.  A lot of my
activity, as it seems to me, is just putting words to these thoughts
after the fact.

So...  what sort of thought goes on without words? I would say thought
related to visual, spatial, musical, kinesthetic, and similar
calculations is wordless.  But I may be prejudiced by my own subjective
experience, or I may not be understanding what Dave is getting at here.
E.g., note that many of these are or can be processed in the "left
hemisphere" regions "devoted to" language (see Sacks' "Seeing Voices",
for example), so just what is meant by "words" or "language" may be
clouding this issue.

: From: markrose@spss.com (Mark Rosenfelder)
: Message-ID: <1992Oct5.195433.9320@spss.com>
: But in all these examples what's happening, at some level, is that the
: computer retrieves a particular value from a particular memory location.

First, I think this underestimates the variety of methods of attaching
peripherals to CPUs.  For example, while on an antique Apple II the
keyboard was a simple read from a memory location, on IBM PCs and
clones the keyboard (I think) delivers a stream of scan codes through a
controller chip.  On an old TRS80, even key debounce had to be done in
software, and don't even TALK to me about the various horrors I've seen
for dealing with N-key rollover in software.  Things were VERY much
more complicated than simply "retrieve a value from a memory location".
Even something normally thought of as cut-and-dried, like serial IO
using UARTs, isn't universal: one of the earliest methods of serial IO
on the Apple II was to watch each signal transition on what was
normally a "game" port and simulate the UART in software.  And finally,
some IO architectures make it very problematic to talk about "memory"
at all...  memory-mapped IO is common, but not universal.
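
To make the Apple II case concrete, here's roughly what "the keyboard
is a memory read" amounts to, sketched in C.  The two addresses are
the real Apple II soft-switch locations; the volatile pointers and the
read_key name are just my way of modeling memory-mapped IO in C, and
of course this only means anything on the actual machine:

    #include <stdint.h>

    /* Apple II soft switches: reading KBD yields the last key, with
     * bit 7 set while a key is waiting; any access to KBDSTRB clears
     * that strobe bit. */
    #define KBD     ((volatile uint8_t *)0xC000)
    #define KBDSTRB ((volatile uint8_t *)0xC010)

    /* Poll until a key arrives; return its 7-bit ASCII code. */
    uint8_t read_key(void)
    {
        uint8_t b;
        while (((b = *KBD) & 0x80) == 0)
            ;                      /* spin: no interrupt, no buffer */
        (void)*KBDSTRB;            /* any access clears the strobe */
        return b & 0x7F;
    }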

Second, even in humans, "all that's happening at some level" is that
some neurons are firing, "all that's going on" is that the brain is
reacting to a neurochemical pulse on some nerve bundle or other. 
(Emphasis on the abstract, symbolic "firing" and "pulse", not on the
concrete "neuron" or "neurochemical".)

: True, the further meaning of that event can vary with the application.

In my view, the very phrase "further meaning" in this context is
misleading.  Deciding that the "primary meaning" is an "integer" in a
"memory location" is arbitrary.  I still claim that this "primary
meaning" is all in your mind, and has nothing to do with the computer as
a physical being, and perhaps nothing to do with the computer's mind's
eye (assuming for a moment that it has a mind). 

: (It seems to be beyond Searle, doesn't it, that a computer is not
: restricted to a single symbolic system.)  

We can assume that Searle means something like the fact that there
IS a preferred symbol system in the computer, namely the one that
describes the physical process embodied by the computer.  But he
does seem to ignore that this "preferred" symbol system is nowhere
near the symbol systems people actually use to interact with the
computer, and that in fact the computer can juggle multiple symbol
systems simultaneously.
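
A trivial illustration of that juggling (a toy of my own, nothing to
do with Searle): the very same 32 bits of machine state read under
three different symbol systems.  The memcpy calls keep the C
well-defined, and the float value assumes the usual IEEE-754
representation:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t bits = 0x42480000;       /* one physical state     */
        float f;
        unsigned char b[4];

        memcpy(&f, &bits, sizeof f);      /* read as IEEE-754 float */
        memcpy(b, &bits, sizeof b);       /* read as raw bytes      */

        printf("as integer: %u\n", bits); /* prints 1112014848      */
        printf("as float:   %g\n", f);    /* prints 50              */
        printf("as bytes:   %02x %02x %02x %02x\n",
               b[0], b[1], b[2], b[3]);   /* memory order varies    */
        return 0;
    }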

Also, I see no strong reason to assume that there exists no symbol system
which describes the physical process embodied by a human.

: The problem is, however, that from
: the event the computer can learn effectively nothing about the keypress, or
: about keys, or about keyboards.

My contention is that the computer's knowledge can be innate (or, more
specifically, that it can acquire knowledge about the objects and
events it can discriminate via its keyboard/scanner/microphone/whatnot
by way of its ethernet port, disk drive, serial port, ROM, and so on).

Compare this to a human: in order to orient via the senses and motor
actions, a primitive learning algorithm pre-exists, "wired in" to the
nervous system.  The difference between the human and the computer, as
I see it, is that the computer must depend on a priori grounding much,
much, much more than a human does.  But again, I view this as a "mere"
quantitative difference, not a category difference. 
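
For whatever it's worth, here's one toy way to picture that
quantitative difference.  Everything in it (the symbol struct, the
observe rule, the numbers) is invented for illustration: a table
seeded a priori, as if from ROM or disk, which the same wired-in
update rule then refines from experience:

    #include <stdio.h>
    #include <string.h>

    struct symbol {
        char   name[16];
        double feature;        /* stand-in for a perceptual model */
    };

    /* "Innate" entries: present before any sensory experience, as
     * if loaded from ROM or disk at startup. */
    static struct symbol table[8] = {
        { "key",      1.0 },
        { "keyboard", 2.0 },
    };
    static int n = 2;

    /* The wired-in learning rule: nudge a stored model toward a new
     * observation, or acquire an unseen symbol outright. */
    static void observe(const char *name, double value)
    {
        for (int i = 0; i < n; i++)
            if (strcmp(table[i].name, name) == 0) {
                table[i].feature += 0.1 * (value - table[i].feature);
                return;
            }
        if (n < 8) {
            strcpy(table[n].name, name);
            table[n].feature = value;
            n++;
        }
    }

    int main(void)
    {
        observe("key", 1.4);    /* refine an innate entry          */
        observe("mouse", 3.0);  /* acquire a new one by experience */
        for (int i = 0; i < n; i++)
            printf("%-10s %.2f\n", table[i].name, table[i].feature);
        return 0;
    }

The human analog would have vastly more acquired entries than seeded
ones; the point is just that the same machinery serves either way.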

: The computer
: isn't grounded partly because its transduction is insufficient (it's not
: rich enough) and partly because its use of it is unsatisfactory (it doesn't
: base its cognition on it).

I think an entity can remain grounded through (pretty much) an arbitrarily
small "bandwidth keyhole".  For example, consider a person who grows up
in some small town.  The person is grounded, and knows the meaning of the
symbols referring to residents, buildings, locations, etc.  The person
moves to Timbuktu, and corresponds with a sibling who remains behind.
I claim that the person in Timbuktu is still grounded, and still means
things by the symbols sent to the stay-at-home sibling.  Further, the
remote sibling even means things by the symbols used to refer to buildings
built after the move to Timbuktu.  The person was grounded, and then
*stays* grounded via a very small "symbolic" keyhole.

If we substitute "programmed with a grounding algorithm" for "grew up in
a small town" and "runs the AI process" for "moved to Timbuktu", (and
make all the other obvious substitutions) I think we still have a
grounded entity. 

Granted, one can argue that the sibling in Timbuktu maintains
grounding by virtue of the high-bandwidth interface to the surroundings
AT Timbuktu, and that the remote grounding is parasitic on that. I agree
that this is possible, but I can envision ways to get the "parasitic"
grounding spoonfed to the computer also, so I am unconvinced that
this is a show-stopper.

: How *can* the entity discriminate objects if it lacks senses and a mass
: of experience with the senses and the objects?  What, besides experience,
: can provide any link between objects (meaning things outside the system)
: and the entity's internal structure?  Coincidence?

Well... yes, actually.  Though I would hesitate to call all the
hard work and such that will inevitably go into building the
initial grounding structure "coincidence".  More like "conspiracy".

In other words, the link between physical events outside the system
and the symbols inside the system is 1) the physical connection of the
lowest level of the computer TO the physical world, and 2) the structure
that relates models of all the entities IN the physical world to
higher and higher level symbols (and to each other).
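
As a cartoon of that two-part link (again my own construction, with
invented names throughout): a raw transducer read at the bottom, and
each layer defining its symbols in terms of the layer below, so the
top-level symbol reaches the world only through the whole chain:

    #include <stdio.h>

    /* Layer 0: the physical connection -- a raw transducer sample.
     * (Stubbed here; on real hardware this is the memory-mapped
     * read.) */
    static int read_transducer(void) { return 65; }

    /* Layer 1: a low-level event, defined directly over layer 0. */
    typedef struct { char ch; } keypress;
    static keypress decode(int raw)
    {
        keypress k = { (char)raw };
        return k;
    }

    /* Layer 2: a higher-level symbol, defined over layer 1. */
    static const char *classify(keypress k)
    {
        return (k.ch >= 'A' && k.ch <= 'Z') ? "letter" : "other";
    }

    int main(void)
    {
        keypress k = decode(read_transducer());
        /* "letter" is linked to the world only through the chain:
         * classify -> keypress -> read_transducer -> physics. */
        printf("'%c' is a %s\n", k.ch, classify(k));
        return 0;
    }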

I will agree that computers have a very, very, very narrow physical
connection to the physical world compared to humans, but I'm not yet
convinced that this is central.  What seems central to me is the
structure of internal models. 

It seems clear to me that on a short timescale, and within reasonable
limits, the bandwidth of the senses doesn't affect the "amount of
groundedness" (e.g., I don't feel I mean less by the words I utter with
my eyes closed than with them open).  And while it's a wild
extrapolation, I don't think a longer timescale or a wider variance of
bandwidth will be a fundamental problem. 
--
Wayne Throop  ...!mcnc!dg-rtp!sheol!throopw


