Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!gatech!news-feed-1.peachnet.edu!news.netins.net!internet.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Grounding Representations: ("Grounding" is the wrong word)
Message-ID: <D7LrKB.76u@spss.com>
Sender: news@spss.com
Organization: SPSS Inc
References: <D5xB4A.544@netcom.com> <3lkl8d$2gm@percy.cs.bham.ac.uk> <3lkrpq$kun@mp.cs.niu.edu> <3nhlk5$i7o@percy.cs.bham.ac.uk>
Date: Tue, 25 Apr 1995 18:04:57 GMT
Lines: 83

In article <3nhlk5$i7o@percy.cs.bham.ac.uk>,
Aaron Sloman <A.Sloman@cs.bham.ac.uk> wrote:
>rickert@cs.niu.edu (Neil Rickert) writes:
>>..I expect that most people take "grounding" as
>> figurative, so I doubt that they assume grounding to require a
>> concrete real world object.
>
>Well, if you are right then most of the people I have heard talking
>about this are more sophisticated than they sounded to me. In my
>experience of discussing this topic (mainly not with professional
>philosophers) people normally regard it as blindingly obvious that
>symbols somehow get their meaning for an agent via the agent's
>sensory contact with the referent. (Such people don't normally think
>about the meaning of words like "and", or "quark", or "eleven".)

Problems with words like "and" do nothing to show that words like "dog" 
or "arm" don't get their meaning via sensory experience.  And words like 
"dog" and "arm" vastly predominate over words like "and".

Nor do words like "quark" really discredit the grounding hypothesis:
quite the opposite.  The hypothesis would lead us to expect that words far 
removed from direct sensory experience are more difficult to grasp, and I 
think this is precisely what we find.  What exactly is a quark?  Is it a 
wave or a particle?  The very question represents an attempt to manhandle 
the concept closer to the more tractable realm of everyday experience.

>When challenged about their ability to talk about unicorns or events
>that occurred before they were born, or scientific unobservables, or
>mathematical abstractions, they tend to claim that somehow all these
>things can be defined in terms of words whose meanings are grounded
>in sensory contact. (As far as I know philosophers who have tried
>such conceptual reductionism in any detail have eventually admitted
>defeat.)

Such examples can defeat only some kind of barefoot experientialist who 
denies the existence of metaphor, synecdoche, abstraction, and other
items in the cognitive arsenal.  Do you seriously think unicorns are some
kind of embarrassment for the grounding hypothesis -- that defining the
meaning of "unicorn" in terms of horns and horses is impossible?
As for mathematical abstractions, Lakoff derives (for example) our
understanding of sets from our experience with actual containers, which
seems reasonable to me.
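(To make the point concrete, here's a toy sketch -- my own illustration, not
anything from Lakoff or anyone else -- of the obvious recursive story: a
concept counts as grounded either directly, through sensory exemplars, or
derivatively, because every concept it's composed of is grounded.)

```python
# Toy model of compositional grounding (illustrative only).
# A Concept is grounded if it has direct percepts, or if it is
# built entirely from concepts that are themselves grounded.

class Concept:
    def __init__(self, name, percepts=None, parts=None):
        self.name = name
        self.percepts = percepts or []   # stand-ins for sensory experiences
        self.parts = parts or []         # concepts it is composed from

    def grounded(self, seen=None):
        seen = set() if seen is None else seen
        if self.name in seen:            # guard against circular definitions
            return False
        seen.add(self.name)
        return bool(self.percepts) or (
            bool(self.parts) and all(p.grounded(seen) for p in self.parts))

horse = Concept("horse", percepts=["sightings of horses"])
horn = Concept("horn", percepts=["sightings of horns"])
unicorn = Concept("unicorn", parts=[horse, horn])  # never observed directly

print(unicorn.grounded())  # True: grounded via its grounded constituents
```

A word like "and", with neither percepts nor a compositional definition,
comes out ungrounded on this toy account -- which is the right result,
since (as noted above) logical vocabulary gets its meaning some other way.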

>The more indirect (and overloaded) the causal links between
>representations and referents, the more the meaning depends on
>structure not causation. In humans I believe structure dominates,
>and causal links serve merely to reduce ambiguity of reference
>(which can never be completely eliminated).
>
>The structure of our internal information states is so rich, and the
>architecture that uses them is so complex that the bulk of human
>meaning comes from the interaction of structure and manipulation.

And why do you think that?  I agree with you about the complexity of 
our information structures; but I think the volume of sensory experience
linked to them is also immense.  We spend many years accumulating (and
of course organizing) that experience; and present-day AIs seem stupid
not due to any failure of reasoning skills or insufficiency of data
structures, but because they lack that wealth of experience.

But even if you were right, and the "structures" overwhelm the "experience"
in complexity, that still would not dispose of grounding.  The word "dog"
may be linked to huge masses of purely conceptual information, from
reading, talking, or reasoning; but it's still linked to actual experiences
with dogs (and with other animals, with fur, with our own bodies, etc.--
the experiences that, for me, lend meaning to words like "llama", although
I've only seen real llamas a few times).

>(b) In fact it may turn out easier to design and implement a
>disembodied (or perhaps I should say "disconnected") mathematician
>whose mind is concerned with nothing but problems in number theory
>(and who enjoys the thrill of discovery and experiences the sorrow
>of refutation) than it is to design and implement a robot with
>properly functioning eyes, ears, arms, legs, etc.

>Anyhow the important thing is not to speculate about what is
>possible, but to get on and do it, or find out exactly why it is
>impossible. So let's have a go at designing the mathematician.

But hasn't that already been done, e.g. with Lenat's EURISKO?  What
(besides the proper function of eyes, ears, arms, etc., which you seem
to dismiss as uninteresting) does the mathematician have that EURISKO 
doesn't?
