Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!spool.mu.edu!agate!stanford.edu!CSD-NewsHost.Stanford.EDU!scottie.Stanford.EDU!kave
From: kave@scottie.Stanford.EDU (Kave Eshgi)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Virtual vs. Real
Message-ID: <1992May29.010952.6850@CSD-NewsHost.Stanford.EDU>
Date: 29 May 92 01:09:52 GMT
Article-I.D.: CSD-News.1992May29.010952.6850
References: <1992May26.022413.14151@mp.cs.niu.edu> <1992May26.031148.27458@news.media.mit.edu> <21986@castle.ed.ac.uk>
Sender: news@CSD-NewsHost.Stanford.EDU
Organization: Robotics Department, Stanford University, Ca. USA
Lines: 49

In my view, the symbol grounding problem has nothing to do with
intelligence, consciousness, or the Turing test. You can pose the same
problem in a much simpler context.

The question is: what does a symbol, say 'box 4', "mean" when used in
a computer program? The answer is that, as a symbol in a program, it
means nothing. But when the program is loaded into a computer which
runs it, and when this computer is connected to the outside world via
sensors and effectors in a specific way, then the symbol can denote a
specific object in the real world. To make this concrete, imagine the
following scenario:

There is a warehouse containing a lot of boxes, and each box has a
number printed on it, no two boxes having the same number. A robot
roams around this warehouse; it has sufficient visual capability to
recognise boxes and read the numbers on them. It has the ability to
move from one box to the next in a systematic way. It can also lift a
box and carry it outside the warehouse. (Note that all of these
capabilities are possible with today's technology; no science fiction
here.)
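Just to fix ideas, here is a minimal Python sketch of the sensor and
effector interface such a robot would need. All the names
(WarehouseRobot, read_box_number, and so on) are made up for this
post, not any real robot API:

    class WarehouseRobot:
        # Hypothetical sensor/effector interface for the scenario.

        def next_box(self):
            # Effector: move to the next box in a systematic sweep
            # of the warehouse; return False once every box has
            # been visited.
            ...

        def read_box_number(self):
            # Sensor: visually read the number printed on the box
            # the robot is currently facing, returned as an int.
            ...

        def lift_current_box(self):
            # Effector: pick up the box the robot is facing.
            ...

        def carry_outside(self):
            # Effector: carry the lifted box out of the warehouse.
            ...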

The robot is connected to a remote terminal on which you type: "bring
box 4". Given the right software inside the robot, and given the right
sensors and motor abilities, the robot will go and scan all the boxes,
find box number 4, and bring it out of the warehouse. Suppose the
robot does this by parsing your command and adopting the goal
get('box 4'), which it holds internally.
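In the same hypothetical vocabulary as above, the software need be no
more than this; note that inside the program 'box 4' is nothing but a
string matched against sensor readings:

    def execute_get(robot, n):
        # Realise the goal get('box n'): sweep the warehouse until
        # the number read off a box matches n, then bring that box
        # out.  Return False if no box carries the number n.
        while robot.next_box():
            if robot.read_box_number() == n:
                robot.lift_current_box()
                robot.carry_outside()
                return True
        return False

    def handle_command(robot, command):
        # Parse "bring box 4" into the internal goal get('box 4')
        # and execute it.
        words = command.split()            # ['bring', 'box', '4']
        if words[:2] == ['bring', 'box']:
            return execute_get(robot, int(words[2]))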

Now my position is this: if it is the case that whenever the robot
has the goal get('box n') it finds box number n and brings it out, we
can legitimately say that the symbol 'box 4' inside the robot denotes
a specific box in the warehouse. However, this grounding of the symbol
'box 4' is _not_ determined solely by the program running in the
robot; it is determined jointly by the program, the hardware platform
on which it runs, the sensors and effectors and the way they are
connected to the hardware, and the properties of the warehouse itself
(there are boxes of the kind the robot can recognise, and so on).
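To underline this, the same program text grounds 'box 4' in entirely
different things depending on what its sensor and effector calls are
wired to. Continuing the hypothetical sketch:

    class SimulatedWarehouse(WarehouseRobot):
        # next_box() and read_box_number() consult an in-memory
        # table of boxes; here 'box 4' denotes a record in a data
        # structure, a virtual grounding.
        ...

    class PhysicalRobot(WarehouseRobot):
        # The same calls drive real cameras and motors; now
        # 'box 4' denotes a particular physical box in the
        # warehouse, a real grounding.
        ...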

As I said, this has nothing to do with intelligence, consciousness or
the Turing test. This type of symbol grounding is very commonplace for
any situated automaton.

The fact that symbols inside computers can have meaning, given the
right sensory-motor capabilities and in the context of environments
with certain properties, is quite obvious to me, as the above scenario
demonstrates. Formalising this notion of meaning in a satisfactory way
is the challenge.

Kave Eshghi


