From newshub.ccs.yorku.ca!torn!utcsri!rutgers!gatech!destroyer!caen!kuhub.cc.ukans.edu!spssig.spss.com!markrose Thu Oct  8 10:11:18 EDT 1992
Article 7122 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rutgers!gatech!destroyer!caen!kuhub.cc.ukans.edu!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding
Message-ID: <1992Oct5.195433.9320@spss.com>
Date: 5 Oct 92 19:54:33 GMT
References: <717734119@sheol.UUCP> <1992Sep29.234928.15758@spss.com> <718221542@sheol.UUCP>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
Lines: 115

In article <718221542@sheol.UUCP> throopw@sheol.UUCP (Wayne Throop) writes:
>Ah, I wasn't clear.  Specifically, I was going from Harnad's position
>(with which I agree) that it doesn't matter in this context how a
>capacity is acquired; all that matters is the capacity.  In this case, I
>take the capacity to be the ability to "discriminate, categorize, and
>discuss objects" (which is where I differ with (and don't understand)
>Harnad's position). 
>
>So, in terms of the senses, I still think it is a "mere" quantitative
>difference.  Note there is this quantitative difference in both the
>sensory richness, *and* in the richness of the internal models that
>(at least in my view of grounding) ground symbols in senses.  I'm
>mainly talking about the former.  Certainly I don't suppose that
>the latter is "mere".

OK-- with reservations; see below.

>: make these things [.. that is, rich sensors and effectors ..]
>: and not symbolic input the
>: main vehicle for the computer's interaction with the world-- and you have 
>: a robot, which can be said to be grounded.
>
>I would contend that one could make the sensors and effectors as
>rich as one likes, and *still* lack grounding.  

No problem here either; "rich sensors and effectors" are, to me, necessary
but not sufficient conditions for grounding.

>Grounding is in the 
>(I dislike the term; nevertheless) causal structure that relates the
>symbols to the senses. 

Causal properties!! Aaagh!!!

>: A keypress is not inherently symbolic, but the computer's use of it is.
>: It encodes the input of a particular character.  
>
>What?  That certainly doesn't match my experience.  Computers sometimes
>(to use a primitive but concrete example) respond to the keypress of "j" 
>by (say) inserting the letter "j" into an edit buffer, and sometimes by
>moving the cursor left one character in the edit buffer.  Or sometimes
>keys are treated as musical notes, or as thrust on a simulated lunar
>lander.  Sometimes the press and release events are treated separately
>(as in a morse code program).  And sometimes things get *really* strange.

But in all these examples what's happening, at some level, is that the
computer retrieves a particular value from a particular memory location.
True, the further meaning of that event can vary with the application.
(It seems to be beyond Searle, doesn't it, that a computer is not 
restricted to a single symbolic system.)  The problem is, however, that from 
the event the computer can learn effectively nothing about the keypress, or 
about keys, or about keyboards.  
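
To make that concrete, here is a toy C sketch -- hypothetical code, not
drawn from any real editor -- in which the identical keycode is fetched the
same way in both of the editor cases above, and only the program's current
mode decides whether it becomes text or a cursor motion:

    #include <stdio.h>

    enum mode { INSERT, COMMAND };

    /* The same byte arrives from the same place either way; only the
       current mode decides what it "means". */
    static void handle_key(int key, enum mode m, char *buf, int *len, int *cur)
    {
        if (m == INSERT) {
            buf[(*len)++] = (char)key;   /* insert the character */
            (*cur)++;
        } else if (key == 'j' && *cur > 0) {
            (*cur)--;                    /* same key, treated as "move left" */
        }
    }

    int main(void)
    {
        char buf[80] = {0};
        int len = 0, cur = 0;

        handle_key('j', INSERT, buf, &len, &cur);   /* buf = "j", cursor at 1 */
        handle_key('j', COMMAND, buf, &len, &cur);  /* buf unchanged, cursor 0 */
        printf("buffer=\"%s\" cursor=%d\n", buf, cur);
        return 0;
    }

Either way, all the program ever sees is the value 'j' sitting in a buffer;
everything else is mode.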

>: The computer only cares
>: about the character, and indeed doesn't care that the character came from
>: a keyboard rather than a touchscreen or a punch card or a file.
>
>No, the computer DOES care...  it is only certain programs running on
>the computer that don't care.  In fact, one of the big advantages touted
>by early unix fans is that unix ignores these differences instead of
>other programs needing to worry about them.  So, your current
>perception that "the computer [...] doesn't care" where characters come
>from is a carefully crafted illusion, intended to enhance the usefulness
>of computers to humans as symbol-manipulating engines.  It has nothing
>to do with the physical reality of the computer, only with our mental
>models of the symbols we take them to represent. 

Yes, I went too far here-- I should have stuck with the keyboard.
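
The unix point is easy to illustrate, though.  A toy sketch, assuming
nothing beyond stdio: the program below counts characters on standard input
and has no way of telling whether those characters came from a keyboard, a
pipe, or a file.

    #include <stdio.h>

    /* Count characters on standard input.  The kernel's device drivers
       hide the source behind the same read interface, so the program
       literally cannot tell where the bytes came from. */
    int main(void)
    {
        long n = 0;

        while (getchar() != EOF)
            n++;
        printf("%ld characters\n", n);
        return 0;
    }

Run it as, say, "count <somefile", "who | count", or type at it and end with
^D: from inside the program the cases are indistinguishable, which is
exactly the carefully crafted illusion you're describing.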

>: So I don't see that the physicality of the keypress helps the computer
>: get itself grounded.
>
>As per above, I don't think that physicality *does* automatically ground
>things.  That is, in fact, the position I'm arguing *against* by saying
>that 1) humans are grounded, 2) computers currently aren't, and 3)
>humans and computers are equally physical beings.  (Sadly, I agree that
>I've not always been very clear in pushing this point of view, sometimes
>phrasing it as "computers are grounded" when I really meant something
>more like "computers are grounded if physicality is all there is to
grounding.")

Thanks for the clarification; we are not as far apart as I thought.
I'd agree with all you say here.

>Grounding, it still seems to me, can't be due to "transduction" or
>"non-symbolic-ness" or whatnot, because humans and computers are
>equivalent on these grounds.  It is only a persuasive illusion that
>computers are "all symbolic" and humans have "non-symbolic" natures. 
>The illusion is persuasive because myriads of hard-working and
>intelligent hardware and software engineers have labored to perfect 
>this illusion. 

How about if I put it this way: Transduction by itself is not enough; it 
has to fit into the rest of the system in the right way.  The computer
isn't grounded partly because its transduction is insufficient (it's not
rich enough) and partly because its use of that transduction is unsatisfactory (it doesn't
base its cognition on it).

>: what is it about
>: an entity which allows it to mean things rather than manipulate symbols?
>: (For me it's the entity's huge mass of direct experience with the world, fully 
>: integrated with its symbolic processing.  I can see robots possessing
>: this, but I'm not sure about computers.)
>
>For me, it is the structure of the entity's internal models that allows it
>to discriminate, categorize, and discuss objects.  Current computers
>aren't grounded because they *can't* do so well enough.

I'll grant that the internal structure is necessary, but I see the physical
experience as necessary too.

How *can* the entity discriminate objects if it lacks senses and a mass
of experience with the senses and the objects?  What, besides experience,
can provide any link between objects (meaning things outside the system)
and the entity's internal structure?  Coincidence?


