From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!eff!news.oc.com!spssig.spss.com!markrose Wed Sep 23 16:54:53 EDT 1992
Article 7018 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!eff!news.oc.com!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Grounding
Message-ID: <1992Sep23.185020.2693@spss.com>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
References: <20522@plains.NoDak.edu>
Date: Wed, 23 Sep 1992 18:50:20 GMT
Lines: 48

In article <20522@plains.NoDak.edu> vender@plains.NoDak.edu (Does it matter?) writes:
>When I asked whether grounding an AI in a UNIX environment would
>  result in making it no longer a symbol manipulator, but a real
>  intelligence this is what I meant to ask:
>
>  Assuming that I have developed an intelligence which is capable
>  of learning from action/reaction feedback (i.e. based on which
>  actions have succeeded in the past it selects the action to
>  fulfill a need/desire/impulse) and reasoning, would its
>  inputs be sufficiently attached to the 'real' world if
>  its inputs were various streams on a computer system?
>
>  The reason I ask this is:  It has been said that a computer cannot
>  be sufficiently grounded in reality because the integrity of its
>  transducers cannot be proven (its inputs could be a simulator or
>  actually connected to the world).
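
The learning scheme described above -- select whichever action has best
fulfilled the need in the past, then update on the feedback -- can be
sketched in a few lines.  This is only an illustration of the idea; the
class and action names are made up here, not taken from anyone's system.

```python
# Minimal sketch of an action/reaction feedback learner: favor the
# action with the best observed success rate, then record the outcome.
# All names (FeedbackAgent, the action strings) are illustrative.

class FeedbackAgent:
    def __init__(self, actions):
        # Start every action at 1/1 so untried actions still get a chance.
        self.successes = {a: 1 for a in actions}
        self.attempts = {a: 1 for a in actions}

    def choose(self):
        # Pick the action with the highest past success rate.
        return max(self.successes,
                   key=lambda a: self.successes[a] / self.attempts[a])

    def observe(self, action, succeeded):
        # Action/reaction feedback: update the record for next time.
        self.attempts[action] += 1
        if succeeded:
            self.successes[action] += 1

agent = FeedbackAgent(["read_stream", "write_stream", "wait"])
for _ in range(100):
    a = agent.choose()
    # Pretend only reading the stream ever satisfies the "need".
    agent.observe(a, succeeded=(a == "read_stream"))
```

Whether feeding such an agent "various streams on a computer system"
grounds it is exactly the question at issue below.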

Well, if I can attempt to summarize a few months of comp.ai.philo, 
someone like Stevan Harnad would maintain that a *computer* per se-- that
is, an entirely computational machine-- can't be grounded at all; only
a robot-with-transducers can be.  Such a robot wouldn't be grounded 
simply by connecting it to a Unix system, because only *physical* inputs
to its transducers, not symbolic data, connect it to the real world.

Plenty of people disagree with Harnad.  Usually they maintain that 
the central portion of the system, excluding the transducers, is grounded;
that nothing important happens simply due to the translation from physical
to symbolic inputs.

>--Brad (who is still confused by the concept of grounding)

It has to do with the basis for meaning.  The word 'cat' doesn't mean 
anything in itself.  It does mean something for humans, because we can
associate it with our real-world experience with cats.  You can think of
grounding as a formalized version of the folk notion that you don't
really know something until you've experienced it yourself.

The question then arises, does the experience really have to be physical
and direct, or is a digitized bitstream ok?  Arguments against the latter,
I think, rely on the idea that manipulation of bytes and symbols is
inherently unable to generate meaning.  So we expand our focus till we've
taken in something (the transducers) which clearly is not an instance of
symbolic manipulation, and hope we've now lassoed meaning in our net.

If you're interested in these issues, I recommend that you ftp some of
Harnad's papers; you can get a better grasp on the subject from them than
from the net, and most of the immediate objections you might want to make
are already addressed in them.


