Newsgroups: comp.ai.philosophy,comp.robotics,comp.cog-eng,sci.cognitive
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!gatech!news-feed-1.peachnet.edu!news.netins.net!internet.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Grounding Representations: ("Grounding" is the wrong word)
Message-ID: <D82JwI.Gyt@spss.com>
Sender: news@spss.com
Organization: SPSS Inc
References: <departedD5xB4A.544@netcom.com> <D7pIGq.Knp@gpu.utcc.utoronto.ca> <D7LrKB.76u@spss.com> <3o0f74$l97@percy.cs.bham.ac.uk>
Date: Thu, 4 May 1995 19:38:41 GMT
Lines: 89
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:27573 comp.robotics:20408 comp.cog-eng:3131 sci.cognitive:7504

In article <3o0f74$l97@percy.cs.bham.ac.uk>,
Aaron Sloman <A.Sloman@cs.bham.ac.uk> wrote:

--a long and interesting article, much of which I agreed with.  
A couple of things, however, deserve further comment.

>Note that this is not a claim about the previous causal links or
>future possible links. I am talking about what makes it possible for
>you NOW to think about Julius Caesar, the square root of 1000, what
>you are going to have for dinner tomorrow, whether there will be
>peace on earth by the year 3000, why dinosaurs became extinct, etc.
>etc. My claim is that existing causal links play very little role in
>determining the semantic content of most of the information stored
>in your brain (or mind) right now.
>
>At this moment there's a huge amount of information about your
>immediate environment, about things remote in time and space, about
>generalisations, legal rules, family relationships, the grammar of
>the language(s) spoken in your culture, etc. The causal links
>between the fine detail of all that information and the
>corresponding bits of the environment are either very weak or
>effectively non-existent at the moment, except for those aspects of
>your internal state that relate to your immediately perceived (and
>acted on) current environment.
>
>E.g. most of the other bits of the world could change in some way
>without affecting your representations of those bits, and vice
>versa.
>
>That's why I claim that the mechanisms and the structure of
>representing states are more important than their causal links as a
>basis for your ability (at any given time) to think about their
>referents.

I don't see that you've supported your claim about the relative importance
of structure and "causal links", except by making other claims.  

I completely agree about the complexity and importance of cognitive
structures, and I'm willing to believe that much of meaning relates to
this web of structure and only indirectly to experience.  However, the
sensory and motor information we possess is *also* complex and important,
and I don't think you've shown that it's smaller or less complex than
cognitive structures.

We tend to forget the sheer volume of our sensorimotor knowledge only
because we learned it all in infancy and have forgotten the process (and
because, for the purposes of AI, the problem of creating cognitive
structures is far more tractable).

>However, Tarskian semantics *obviously* cannot identify a *unique*
>model for a given set of representations. For, if any M is a
>(Tarskian) model for S, and M' is isomorphic with M, then M' is also
>a (Tarskian) model for S even if M' is millions of light years away
>in some other galaxy. That's where causal links can come in. If S is
>embedded in a mechanism providing a web of causal links with the
>environment, then that can (sometimes) be a basis for eliminating
>M' as referent.
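
(Aside: the non-uniqueness point here is just the standard isomorphism
lemma of model theory.  To make it concrete, here is a toy sketch in
Python -- my example, not yours -- of two isomorphic "worlds" that a
Tarski-style evaluation cannot tell apart:

    # Each "world" is a domain of objects plus one relation, taller_than.
    paris     = ({"eiffel", "bus"},    {("eiffel", "bus")})
    andromeda = ({"replica", "rover"}, {("replica", "rover")})  # far-off copy

    def satisfies(world):
        """Tarskian truth, in the given world, of the sentence
        'there is an x taller than every other y'."""
        domain, taller_than = world
        return any(all(x == y or (x, y) in taller_than for y in domain)
                   for x in domain)

    print(satisfies(paris))      # True
    print(satisfies(andromeda))  # True: the sentence can't tell them apart

Since the two worlds are isomorphic, any sentence true in one is true in
the other, so piling on further sentences doesn't help.)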

This kind of problem arises, IMHO, from looking at the matter backwards.
Oversimplifying, the communicative problem humans face is not "Given some
statements, how do I make sure they refer to just one thing in the real
world?" but "Given some stuff in the real world, how do I talk about it?"

The paragraph above, which brings in the real world only as a sort of
afterthought to reduce ambiguity, seems to me to fall prey to this
confusion.  Suppose (to use an example from your posting) there is an exact
duplicate of the Eiffel Tower in another galaxy.  Is this any problem for
statements we may make about the Eiffel Tower?  Only if you start with the
statements and only then worry about what they may refer to.  Start with the
Eiffel Tower (the one in Paris) and make statements about it, and there's
simply no problem.  

Searle in particular likes to talk as if intentionality were some kind of
magic, as if it were a marvelous property of brains that they can produce a
transcendental relationship called "reference" between statements and the
world, a relationship denied to computational entities.  No such magic
exists; none is needed.

To put it another way: we create a robot sightseer, with detailed knowledge
of French, Paris, and human history, let it wander about Paris, and then
interview it about the Eiffel Tower.  Like us, its statements about the
Eiffel Tower derive partly from direct experience (it saw it, it took
pictures for its CD-ROM scrapbook, it rode to the top, it plugged itself
into an outlet in the restaurant) and partly from indirect knowledge.  It
has, in your
terms, both "causal links" and "structure".  Like us, it has no way of
knowing whether its statements are coincidentally true of the Eiffel Tower in
Andromeda.  Like us, it isn't bothered by this; it can answer our questions
without communication problems.  What else is it missing?  What can we do
that it cannot?  I haven't heard any good answers to these questions.
