From newshub.ccs.yorku.ca!torn!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!sgiblab!a2i!pagesat!spssig.spss.com!markrose Thu Oct  8 10:10:45 EDT 1992
Article 7070 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!sgiblab!a2i!pagesat!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Grounding
Message-ID: <1992Sep29.234928.15758@spss.com>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
References: <717645108@sheol.UUCP> <1992Sep28.165125.6660@spss.com> <717734119@sheol.UUCP>
Date: Tue, 29 Sep 1992 23:49:28 GMT
Lines: 63

In article <717734119@sheol.UUCP> throopw@sheol.UUCP (Wayne Throop) writes:
>: From: markrose@spss.com (Mark Rosenfelder)
>: One would be hard pressed to say that [..a computer..]
>: even knows what clicking the keys
>: feels like, since its experience has none of the information density of
>: human touch.
>
>No, it is the *keys* that are hard pressed.    <ahem>

Grin.

>More seriously, in terms of "information density", consider that my
>computer has a scanner.  In fact, it is no longer odd to have a computer
>with a scanner, auditory inputs (for voice recognition), and so on and
>on.  So if "information density" of the senses is all that people are
>saying computers lack, I'd say it's a mere quantitative difference,
>easily resolved.  (I'd even include things like ethernet connectors and
>such, but I realize that this would be even more controversial.)

It is *not* just a quantitative difference.  You get your knowledge of (say)
cats and cherries almost entirely through your senses.  The computer
connected only to keyboard and monitor does not; what knowledge it has
about them is learned indirectly and symbolically.  Not only is the input 
bandwidth overwhelmingly smaller, but it's not used in the same way.  The 
computer does not use the keyboard to learn about keys.  It does not use
the monitor to learn about monitors.

Once you start talking about scanners and audio input you move a little bit
(not very much) toward robotics.  Keep moving-- add more senses, more
bandwidth, manipulators-- make these things and not symbolic input the
main vehicle for the computer's interaction with the world-- and you have 
a robot, which can be said to be grounded.

>Remember, my point was that computers are every bit as embedded in the
>world as humans are (albeit with "narrow" sensory bandwidth typically),
>and that the closure of a key contact, or the decay of a CCD pixel in a
>scanner is no more "symbolic" than a human skin pressure neuron firing,
>or a human rod or cone being triggered.  This point still holds, I think. 

A keypress is not inherently symbolic, but the computer's use of it is.
It encodes the input of a particular character.  The computer only cares
about the character, and indeed doesn't care that the character came from
a keyboard rather than a touchscreen or a punch card or a file.
So I don't see that the physicality of the keypress helps the computer
get itself grounded.
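That source-independence is easy to demonstrate in code. The sketch below (modern Python, purely illustrative -- nothing in it comes from the article, and the names are made up) shows that a routine consuming character input has no way to tell whether the character originated at a keyboard, a file, or anything else; all it ever sees is the character:

```python
import io

def read_char(stream):
    """Consume one character from any text stream.
    Nothing here can distinguish a keyboard buffer from
    a disk file, a pipe, or a punch-card reader driver:
    the physical source is abstracted away."""
    return stream.read(1)

# Two stand-ins for different "physical" sources of the same keypress:
from_keyboard = io.StringIO("a")   # pretend this is a keyboard buffer
from_file     = io.StringIO("a")   # pretend this is a disk file

# The program's view of the two inputs is identical.
print(read_char(from_keyboard) == read_char(from_file))
```

The program deals only in the encoded character, which is exactly the sense in which the computer's use of the keypress is symbolic rather than physical.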

>: How is it that words mean anything, in your view?
>
>In my view, words *don't* mean anything.  The whole notion of words
>"having meaning" I find profoundly misleading.  Speakers mean things by
>the words they utter, and listeners infer meanings from the words they
>hear, but the words are just noises having patterns subject to recognition. 
>
>
>Meaning, it seems to me, has to do with the internal models of
>thinking entities, and very little to do with the words.

Yes, of course.  Let me state my question more precisely: what is it about
an entity which allows it to mean things rather than manipulate symbols?

(For me it's the entity's huge mass of direct experience with the world, fully 
integrated with its symbolic processing.  I can see robots possessing
this, but I'm not sure about computers.)


