From newshub.ccs.yorku.ca!torn!utcsri!rutgers!psinntp!psinntp!dg-rtp!sheol!throopw Thu Oct  8 10:11:14 EDT 1992
Article 7116 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rutgers!psinntp!psinntp!dg-rtp!sheol!throopw
From: throopw@sheol.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding
Summary: small exploration of the relationship of grounding to the senses
Message-ID: <718221542@sheol.UUCP>
Date: 4 Oct 92 23:39:30 GMT
References: <717645108@sheol.UUCP> <1992Sep28.165125.6660@spss.com> <717734119@sheol.UUCP> <1992Sep29.234928.15758@spss.com>
Lines: 105

: From: markrose@spss.com (Mark Rosenfelder)
: Message-ID: <1992Sep29.234928.15758@spss.com>
::  From: throopw@sheol.UUCP (Wayne Throop) 
::  So if "information density" of the senses is all that people are
::  saying computers lack, I'd say it's a mere quantitative difference,
::  easily resolved.
: It is *not* just a quantitative difference.  You get your knowledge of (say)
: cats and cherries almost entirely through your senses.  The computer
: connected only to keyboard and monitor does not; what knowledge it has
: about them is learned indirectly and symbolically.  

Ah, I wasn't clear.  Specifically, I was going from Harnad's position
(with which I agree) that it doesn't matter in this context how a
capacity is acquired; all that matters is the capacity.  In this case,
I take the capacity to be the ability to "discriminate, categorize,
and discuss objects" (which is where I differ from, and don't
understand, Harnad's position).

So, in terms of the senses, I still think it is a "mere" quantitative
difference.  Note that there is this quantitative difference both in
the sensory richness *and* in the richness of the internal models that
(at least in my view of grounding) ground symbols in senses.  I'm
mainly talking about the former.  Certainly I don't suppose that
the latter is "mere".

: make these things [.. that is, rich sensors and effectors ..]
: and not symbolic input the
: main vehicle for the computer's interaction with the world-- and you have 
: a robot, which can be said to be grounded.

I would contend that one could make the sensors and effectors as
rich as one likes, and *still* lack grounding.  Grounding is in the
(I dislike the term, but nevertheless) causal structure that relates
the symbols to the senses.

: A keypress is not inherently symbolic, but the computer's use of it is.
: It encodes the input of a particular character.  

What?  That certainly doesn't match my experience.  Computers sometimes
(to use a primitive but concrete example) respond to the keypress of "j" 
by (say) inserting the letter "j" into an edit buffer, and sometimes by
moving the cursor left one character in the edit buffer.  Or sometimes
keys are treated as musical notes, or as thrust on a simulated lunar
lander.  Sometimes the press and release events are treated separately
(as in a morse code program).  And sometimes things get *really* strange.
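
(To make this concrete, here's a minimal sketch in C of the sort of
dispatch I mean; the mode names and actions are invented for
illustration, and real programs are of course messier:)

    #include <stdio.h>

    /* The same physical event (pressing or releasing the 'j' key)
       is given a different "meaning" depending on the program's
       mode.  Nothing about the event itself forces any one
       interpretation. */
    enum mode { INSERT_MODE, COMMAND_MODE, PIANO_MODE, MORSE_MODE };

    void handle_key(enum mode m, int key, int is_release)
    {
        switch (m) {
        case INSERT_MODE:      /* key is a character to insert */
            if (!is_release)
                printf("insert '%c' into the edit buffer\n", key);
            break;
        case COMMAND_MODE:     /* key is an editing command */
            if (!is_release && key == 'j')
                printf("move cursor left one character\n");
            break;
        case PIANO_MODE:       /* key is a musical note */
            if (!is_release)
                printf("sound the note bound to '%c'\n", key);
            break;
        case MORSE_MODE:       /* press and release both matter */
            printf("key goes %s now\n", is_release ? "up" : "down");
            break;
        }
    }

    int main(void)
    {
        handle_key(INSERT_MODE, 'j', 0);  /* "j" is a letter        */
        handle_key(COMMAND_MODE, 'j', 0); /* "j" is a cursor motion */
        handle_key(MORSE_MODE, 'j', 0);   /* only the press matters */
        handle_key(MORSE_MODE, 'j', 1);   /* ... and the release    */
        return 0;
    }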

Now, you can claim that "really" the computer treated that keypress as
a letter in all cases, but that in some cases the letter was further
interpreted.  But that doesn't fit the cases where the keypress is
treated separately from the release.  So that's just *one*
interpretation of what went on.  "In reality", the assignment of
meaning and the interpretation of the physical events to which a
computer is subjected is no more inherently symbolic than is the
meaning and interpretation of the physical events to which a human is
subjected.

Symbols are in the minds of beholders, and ONLY there.  (I still claim.)

: The computer only cares
: about the character, and indeed doesn't care that the character came from
: a keyboard rather than a touchscreen or a punch card or a file.

No, the computer DOES care...  it is only certain programs running on
the computer that don't care.  In fact, one of the big advantages
touted by early unix fans was that unix hides these differences, so
that individual programs needn't worry about them.  So, your current
perception that "the computer [...] doesn't care" where characters
come from is a carefully crafted illusion, intended to enhance the
usefulness of computers to humans as symbol-manipulating engines.  It
has nothing to do with the physical reality of the computer, only
with our mental models of the symbols we take its physical events to
represent.
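
(For instance, this trivial C program, which just counts the
characters on its standard input, is a sketch of the standard unix
idiom; call the compiled program "count" for the sake of the example:)

    #include <stdio.h>

    /* Counts characters on standard input, wherever they come from. */
    int main(void)
    {
        int c;
        long n = 0;

        while ((c = getchar()) != EOF)
            n++;
        printf("%ld characters\n", n);
        return 0;
    }

Run it bare and the characters come from the keyboard; run it as
"count < file" and they come from a file; run it as "cat file | count"
and they come from another program.  The program is identical in all
three cases, but only because unix and the surrounding machinery were
built to arrange that indifference; it is not given in the physics.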

: So I don't see that the physicality of the keypress helps the computer
: get itself grounded.

As per above, I don't think that physicality *does* automatically ground
things.  That is, in fact, the position I'm arguing *against* by saying
that 1) humans are grounded, 2) computers currently aren't, and 3)
humans and computers are equally physical beings.  (Sadly, I agree
that I've not always been very clear in pushing this point of view,
sometimes phrasing it as "computers are grounded" when I really meant
something more like "computers are grounded if physicality is all
there is to grounding".)

Grounding, it still seems to me, can't be due to "transduction" or
"non-symbolic-ness" or whatnot, because humans and computers are
equivalent on these grounds.  It is only a persuasive illusion that
computers are "all symbolic" and humans have "non-symbolic" natures. 
The illusion is persuasive because myriads of hard-working and
intelligent hardware and software engineers have labored to perfect 
this illusion. 

: what is it about
: an entity which allows it to mean things rather than manipulate symbols?
: (For me it's the entity's huge mass of direct experience with the world, fully 
: integrated with its symbolic processing.  I can see robots possessing
: this, but I'm not sure about computers.)

For me, it is the structure of the entity's internal models that
allows it to discriminate, categorize, and discuss objects.  Current
computers aren't grounded because they *can't* do so well enough.

Whether the structure which leads to this capability itself can be
modeled as a symbolic structure is (it seems to me) irrelevant.
--
Wayne Throop  ...!mcnc!dg-rtp!sheol!throopw


