Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!gatech!usenet.eel.ufl.edu!news.ultranet.com!news.sprintlink.net!news.indirect.com!marty
From: marty@indirect.com (Marty Stoneman)
Subject: Re: Grounding Representations
Message-ID: <D89s5r.Hzs@indirect.com>
Sender: usenet@indirect.com (Internet Direct Admin)
Organization: Internet Direct, indirect.com
Date: Mon, 8 May 1995 17:20:12 GMT
X-Newsreader: TIN [version 1.2 PL2]
Lines: 32

In article <D82JwI.Gyt@spss.com> Mark Rosenfelder wrote:

[snip --- snip]

: To put it another way: we create a robot sightseer, with detailed knowledge
: of French, Paris, and human history, let it wander about Paris, and then
: interview it about the Eiffel Tower.  Like us, its statements about the
: Eiffel Tower derive partly from direct experience (it saw it, it took
: pictures for its CD-ROM scrapbook, it rode to the top, it plugged itself in
: in an outlet in the restaurant) and indirect knowledge.  It has, in your
: terms, both "causal links" and "structure".  Like us, it has no way of
: knowing if its statements are coincidentally true about the Eiffel Tower in
: Andromeda.  Like us, it isn't bothered by this; it can answer our questions
: without communication problems.  What else is it missing?  What can we do
: that it cannot?  I haven't heard any good answers to these questions.

IMHO, unless the "primitives" of the robot's cognitive structures are 
arranged like those of us humans, so they may be combined and re-combined 
in almost limitless ways to represent any real or imaginary world we 
choose, the robot will not be able to converse with us about such things 
as a huge red ball bouncing about on a planet like Mars in some other 
galaxy.  There is nothing we can communicate about that the robot cannot, 
PROVIDED its "metaphysics", its primitives, etc., are just like ours.  
Going further, I believe natural language to be "grounded" upon our 
internal representations (made from such primitives); and IF we build 
our robots similarly, we shall understand each other about as well as 
humans do.
	P.S. Blame evolution for giving us the sorts of primitives USEFUL 
in internally representing out-there reality.

					Marty Stoneman
					marty@indirect.com

