From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!decwrl!netsys!pagesat!spssig.spss.com!markrose Tue Nov 24 10:51:19 EST 1992
Article 7576 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!decwrl!netsys!pagesat!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Nov10.223018.13195@spss.com>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
References: <1992Oct30.143242.8130@news.media.mit.edu> <1992Oct30.195251.9573@spss.com> <1992Nov10.022939.94607@Cookie.secapl.com>
Date: Tue, 10 Nov 1992 22:30:18 GMT
Lines: 44

In article <1992Nov10.022939.94607@Cookie.secapl.com> frank@Cookie.secapl.com 
(Frank Adams) writes:
>In article <1992Oct30.195251.9573@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>>I tend to equate grounding with the folk notion of "knowing what you're
>>talking about."  If I talk about something I haven't directly experienced,
>>like marijuana, I could be accused of "not knowing what I'm talking about";
>>in this case my statements could be meaningful, but ungrounded.
>>
>>I assume grounding is most important to those (like Harnad and George Lakoff)
>>who see meaning as derived from direct real-world experience.  I suppose it
>>wouldn't much interest those who identify "meaning" merely with logical sense
>>or reference, or with the magic of causal properties.  On the other hand, I
>>don't know how these folks would explain the dubiousness of my comments
>>about the taste of marijuana, since there is presumably nothing wrong with
>>my sense, reference, or causal properties.
>
>There are two not-unrelated problems:
>
>(1) There is, in a purely information-theoretic sense, much more information
>in actually tasting marijuana than in anything or everything you have been
>told about it.  So you *can't* have grounding comparable to that of
>someone who has tasted it.
>
>(2) Humans are not so constructed as to be able to take verbal descriptions,
>however complete and accurate, and translate them into the equivalent memory
>structures generated for sensory experiences.  So even if somebody could
>tell you exactly what it tasted like, you couldn't fully understand the
>explanation.  (This *is* a problem with your causal properties; one which
>an AI need not share.)

Right you are; both of these points are good reasons why humans need direct 
experience with things to talk about them meaningfully.
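
To put rough numbers on point (1) (every figure below is my own
assumption, picked only to show the scale of the gap), compare the
entropy of a verbal description with the raw data rate of the senses:

    # Back-of-envelope sketch for point (1).  All figures here are
    # assumptions chosen for illustration, not measurements.

    BITS_PER_CHAR = 1.3        # Shannon's upper bound for English text
    DESCRIPTION_CHARS = 500    # a generous verbal description

    verbal_bits = DESCRIPTION_CHARS * BITS_PER_CHAR   # ~650 bits

    # Suppose taste/smell/touch together deliver on the order of
    # 10^5 bits/sec (invented, but modest next to vision) over a
    # 10-second taste.
    sensory_bits = 1e5 * 10                           # ~10^6 bits

    print(verbal_bits, sensory_bits)

Even granting the description every benefit of the doubt, the sensory
episode carries several orders of magnitude more information, which is
the sense in which the two kinds of grounding can't be comparable.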

Neither point is explained, so far as I can see, by theories of meaning in
which the world is neatly divided into objects with properties, and words name
classes of objects sharing a set of properties.  The vast amount of data
we take in through the senses is largely irrelevant under this approach:
the only information relevant to the meaning of the word "dog" is the
properties shared by all dogs and those which distinguish dogs from non-dogs.
Nor is there a difference in meaningfulness, under this approach, between 
statements rooted in experience and those based only on "verbal descriptions", 
or even those merely deduced from other statements.
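
To make the contrast concrete, here is a toy sketch (mine, and nobody's
actual theory) of the "classes of objects sharing properties" view.
The point is that sensory data has nowhere to go: the meaning of "dog"
is exhausted by the checklist, so experience cannot add to it.

    # Toy model of the classical view: a word's meaning is a set of
    # necessary-and-sufficient properties.  All names and properties
    # are invented for illustration.

    DOG = {"animal", "four-legged", "barks", "domesticated"}

    def means_dog(thing):
        # Under this view, the checklist is ALL that "dog" means.
        return DOG <= thing

    # A lifetime of sensory acquaintance with dogs...
    experience = DOG | {"wet-nosed", "tail-wags", "smells-musty-when-wet"}

    # ...changes nothing: the extra detail is invisible to the meaning,
    # and a statement deduced from the checklist counts as just as
    # meaningful as one grounded in the experience itself.
    print(means_dog(DOG))         # True
    print(means_dog(experience))  # True; no difference in meaning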

"Causal properties" is a mantra rather than a theory, so I'll leave it alone.
