From newshub.ccs.yorku.ca!torn!utzoo!helios.physics.utoronto.ca!utcsri!rpi!psinntp!psinntp!dg-rtp!sheol!throopw Thu Oct  8 10:10:35 EDT 1992
Article 7057 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utzoo!helios.physics.utoronto.ca!utcsri!rpi!psinntp!psinntp!dg-rtp!sheol!throopw
From: throopw@sheol.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding
Summary: "what grounding means to me"
Message-ID: <717734119@sheol.UUCP>
Date: 29 Sep 92 02:01:04 GMT
References: <1992Sep25.160149.26882@spss.com> <717645108@sheol.UUCP> <1992Sep28.165125.6660@spss.com>

: From: markrose@spss.com (Mark Rosenfelder)
: Message-ID: <1992Sep28.165125.6660@spss.com>
: One would be hard pressed to say that [..a computer..]
: even knows what clicking the keys
: feels like, since its experience has none of the information density of
: human touch.

No, it is the *keys* that are hard pressed.    <ahem>

More seriously, in terms of "information density", consider that my
computer has a scanner.  In fact, it is no longer odd for a computer to
have a scanner, auditory inputs (for voice recognition), and so on.  So
if "information density" of the senses is all that people are saying
computers lack, I'd say it's a mere quantitative difference, easily
resolved.  (I'd even include things like ethernet connectors and such,
but I realize that this would be even more controversial.)

Remember, my point was that computers are every bit as embedded in the
world as humans are (albeit with "narrow" sensory bandwidth typically),
and that the closure of a key contact, or the decay of a CCD pixel in a
scanner is no more "symbolic" than a human skin pressure neuron firing,
or a human rod or cone being triggered.  This point still holds, I think. 

On the other hand, I agree that it is unlikely that any current computer
can reasonably be said to know what *anything* "feels like", but I still
don't see that as a problem of the computer being "ungrounded" in
principle.  I also agree that current computer symbol systems are
ungrounded in fact, but (in one possible model of the mind) that's only
because the structure that connects what we interpret as the computer's
symbols to the computer's experience isn't a part of a typical
computer's functioning. 

So, I don't see that computers (as opposed to robots) *can't* be
grounded, but I certainly agree with the doubt that any current computer
*is* grounded.  Even current *robots* are barely grounded, if I can treat the
notion of grounding as a spectrum for a moment.  Of course, I seem to be
a heretic, in that I consider a robot to be merely a computer with a
certain class of IO devices (that is, all robots *are* computers, but
not all computers are robots). 

Filling in with concrete images for a moment, consider robots ranging
from a Disney animatron, through a robot on an assembly line, through
the most advanced robots used to study, e.g., bipedal gait, general
object manipulation, and so on.  The animatron is less grounded than a
bacterium, the assembly-line robot is perhaps as grounded as a worm or
a very simple insect, and the research robot is perhaps as grounded as
a complicated insect, or a simple dry-land vertebrate.  I see no reason
why, right
now, a computer happening to incorporate a fairly high resolution
scanner could not be as well grounded as the most advanced research
robot now existing, albeit in a very different way in practical terms. 

: How is it that words mean anything, in your view?

In my view, words *don't* mean anything.  The whole notion of words
"having meaning" I find profoundly misleading.  Speakers mean things by
the words they utter, and listeners infer meanings from the words they
hear, but the words are just noises having patterns subject to recognition. 

Meaning, it seems to me, has to do with the internal models of
thinking entities, and very little to do with the words.
--
Wayne Throop  ...!mcnc!dg-rtp!sheol!throopw
