Newsgroups: comp.ai.philosophy,comp.ai,comp.robotics,comp.cog-eng,sci.cognitive,sci.psychology
Path: cantaloupe.srv.cs.cmu.edu!nntp.club.cc.cmu.edu!miner.usbm.gov!rsg1.er.usgs.gov!stc06.ctd.ornl.gov!fnnews.fnal.gov!uwm.edu!news.alpha.net!news.mathworks.com!udel!gatech!howland.reston.ans.net!ix.netcom.com!netcom.com!departed
From: departed@netcom.com (just passing through)
Subject: Re: Grounding Representations: CONFERENCE May 15 London
Message-ID: <departedD5xB4A.544@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <harnad-1503952148320001@sm1.psy.soton.ac.uk> <rwhite.795319853@superior>
Date: Fri, 24 Mar 1995 02:33:46 GMT
Lines: 59
Sender: departed@netcom11.netcom.com
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:26226 comp.ai:28423 comp.robotics:19193 comp.cog-eng:3064 sci.cognitive:6959 sci.psychology:38860

In article <rwhite.795319853@superior>,
Robert White <rwhite@superior.carleton.ca> wrote:
>In <harnad-1503952148320001@sm1.psy.soton.ac.uk> harnad@ecs.soton.ac.uk (Stevan Harnad) writes:
>
>[.]
>>Intelligence is that computer programs use symbols that are arbitrarily
>>interpretable (see Searle, 1980 for the Chinese Room and Harnad, 1990
>>for the symbol grounding problem). We could, for example, use the word
>>"apple" to mean anything from a "common fruit" to a "pig's nose". All
>>the computer knows is the relationship between this symbol and the
>>others that we have given it. 
>
>
>Systems theory provides a grounded approach to solving this problem,
>and I have seen the same 'signification' models used within
>metamodeling as within Semiotics. I was especially surprised to see
>almost exactly the same model used by Roland Barthes in his book
>Mythologies. The structure of the model is tripartite: each signal
>generates both a 'signifier' and a 'signified' semiotic meaning, and
>the signals are always moving from 'signification' to 'signifier' to
>being labeled as 'signified'. I have bastardized the model somewhat,
>but I have essentially given it to you in appropriate form. Moreover,
>the structure of the model is perhaps its most interesting aspect,
>because of the steps in the function of meaning and the slope of the
>meaning, if I can use those descriptors.
> 
>
>ps.... If you want the Information processing ref that I have I'll
>find it. Additionally, I have seen this same model described in
>literature on creativity, mythology, Systems theory, metamodeling,
>cybernetics, and I have even seen it in Pribram's work.

Wouldn't a symbol be 'grounded' if it were embedded in a 'rich' and
'complete' continuum?  That is, a symbol is grounded if it has some relation
to _every other possible_ symbol within its symbolic universe?

To paraphrase, a symbol gains an 'interior' of meaning if it reflects its
entire universe -- there is still nothing inside, but it becomes a virtual
something ... it gets an interior by having a complete exterior.

A symbol is not just an arbitrary token when it crosses the line into
holding its whole world, by reference.

(A token, by contrast, is something that has been purged of almost all
 of its references, leaving only one or a few.)
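To make the proposal concrete, here is one way it might be formalized --
purely my own illustrative sketch, not anything from Harnad's announcement:
treat the symbolic universe as a graph whose nodes are symbols and whose
edges are relations, and call a symbol 'grounded' when it reaches every
other symbol, directly or through intermediaries.  The names (Universe,
is_grounded, is_token) are invented for the sketch.

```python
class Universe:
    """A symbolic universe: a set of symbols plus the relations among them."""

    def __init__(self, symbols):
        self.symbols = set(symbols)
        self.relations = {s: set() for s in self.symbols}

    def relate(self, a, b):
        # Treat relations as symmetric: if a stands in some relation to b,
        # then b stands in the converse relation to a.
        self.relations[a].add(b)
        self.relations[b].add(a)

    def is_grounded(self, symbol):
        """'Grounded' in the sense sketched above: the symbol bears some
        relation, direct or mediated, to every other symbol in the
        universe -- its 'exterior' is complete."""
        seen, frontier = {symbol}, [symbol]
        while frontier:
            for neighbor in self.relations[frontier.pop()]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append(neighbor)
        return seen == self.symbols

    def is_token(self, symbol):
        """A mere token: purged of reference, keeping only one or a few
        direct relations."""
        return len(self.relations[symbol]) <= 1
```

On this reading, "apple" related only to "fruit" stays a token; once the
relational web closes over the whole universe, every symbol in it counts
as grounded -- it holds its whole world by reference.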

Tell me what you think.

>[.]
>   ----------------------------------------- Carleton University ----------
>               Robert G. White               Dept. of Psychology   
>                                             Ottawa, Ontario. CANADA
>   INTERNET ADDRESS ----- rwhite@ccs.carleton.ca ------------------- E-MAIL
>   ------------------------------------------------------------------------

-- Richard Wesson
(departed@netcom.com)

