From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uwm.edu!spool.mu.edu!uunet!secapl!Cookie!frank Tue Nov 24 10:51:05 EST 1992
Article 7560 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uwm.edu!spool.mu.edu!uunet!secapl!Cookie!frank
From: frank@Cookie.secapl.com (Frank Adams)
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Nov10.022939.94607@Cookie.secapl.com>
Date: Tue, 10 Nov 1992 02:29:39 GMT
References: <markrose.720385670@spssig> <1992Oct30.143242.8130@news.media.mit.edu> <1992Oct30.195251.9573@spss.com>
Organization: Security APL, Inc.
Lines: 28

In article <1992Oct30.195251.9573@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>I tend to equate grounding with the folk notion of "knowing what you're
>talking about."  If I talk about something I haven't directly experienced,
>like marijuana, I could be accused of "not knowing what I'm talking about";
>in this case my statements could be meaningful, but ungrounded.
>
>I assume grounding is most important to those (like Harnad and George Lakoff)
>who see meaning as derived from direct real-world experience.  I suppose it
>wouldn't much interest those who identify "meaning" merely with logical sense
>or reference, or with the magic of causal properties.  On the other hand, I
>don't know how these folks would explain the dubiousness of my comments
>about the taste of marijuana, since there is presumably nothing wrong with
>my sense, reference, or causal properties.

There are two not-unrelated problems:

(1) There is, in a purely information-theoretic sense, much more information
in actually tasting marijuana than in anything or everything you have been
told about it.  So your grounding *can't* be comparable to that of someone
who has tasted it.
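
(To put a crude number on (1), here is a back-of-envelope sketch in Python.
Every figure in it is an assumption chosen for illustration only -- roughly
one bit per character for English text, after Shannon's estimate, and an
order-of-magnitude guess of 10^4 receptor channels for a taste/smell
episode.)

    # Rough, illustrative comparison of the information in a verbal
    # description versus a sensory episode.  All constants below are
    # assumptions made for the sketch, not measurements.

    DESCRIPTION_CHARS = 2000      # a generous written description
    BITS_PER_CHAR = 1.0           # Shannon's estimate for English text

    RECEPTOR_CHANNELS = 10_000    # order of taste buds plus olfactory input
    BITS_PER_CHANNEL = 2.0        # assume a few distinguishable levels each
    SNAPSHOTS = 10                # a few samples over the course of a taste

    verbal_bits = DESCRIPTION_CHARS * BITS_PER_CHAR
    sensory_bits = RECEPTOR_CHANNELS * BITS_PER_CHANNEL * SNAPSHOTS

    print("verbal description: ~%d bits" % verbal_bits)
    print("sensory episode:    ~%d bits" % sensory_bits)
    print("ratio:              ~%dx" % (sensory_bits / verbal_bits))

(Under those assumptions the episode carries about a hundred times the
information of the description; quibble with the constants and the gap
shrinks or grows, but it doesn't close.)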

(2) Humans are not so constructed as to be able to take verbal descriptions,
however complete and accurate, and translate them into the equivalent memory
structures generated by sensory experiences.  So even if somebody could
tell you exactly what it tasted like, you couldn't fully understand the
explanation.  (This *is* a problem with your causal properties; one which
an AI need not share.)
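
(Point (2) is an architectural claim, so a toy sketch may make it concrete.
Nothing below is any real system; the lexicon and feature axes are invented
for illustration.  The point is only that a machine, unlike us, could in
principle translate words into the very same representation its percepts
arrive in.)

    # Toy sketch of a machine that translates verbal descriptions into
    # the *same* feature format it uses for percepts -- the translation
    # step point (2) says humans cannot perform.  Names and data are
    # hypothetical.

    # Assumed feature axes: (sweet, bitter, herbal)
    LEXICON = {
        "sweet":  (1.0, 0.0, 0.0),
        "bitter": (0.0, 1.0, 0.0),
        "herbal": (0.0, 0.0, 1.0),
    }

    def encode_percept(readings):
        # Sensory input arrives natively as a feature tuple.
        return tuple(readings)

    def encode_description(text):
        # Map known words into the same feature space as percepts,
        # averaging when a description mentions several qualities.
        features = [LEXICON[w] for w in text.split() if w in LEXICON]
        n = len(features)
        return tuple(sum(axis) / n for axis in zip(*features))

    tasted = encode_percept((0.7, 0.1, 0.9))
    told = encode_description("sweet herbal")
    print(tasted)   # (0.7, 0.1, 0.9)
    print(told)     # (0.5, 0.0, 0.5) -- same space, directly comparable

(For such a machine, "told" and "tasted" differ only in resolution, not in
kind -- which is exactly the limitation we have and it needn't.)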
