From newshub.ccs.yorku.ca!torn!cs.utexas.edu!milano!cactus.org!wixer!sparky Wed Sep 23 16:54:44 EDT 1992
Article 7007 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!milano!cactus.org!wixer!sparky
From: sparky@wixer.cactus.org (Timothy Sheridan)
Subject: Re: Grounding
Message-ID: <1992Sep22.091526.5456@wixer.cactus.org>
Organization: Real/Time Communications
References: <20522@plains.NoDak.edu>
Date: Tue, 22 Sep 92 09:15:26 GMT
Lines: 48

In article <20522@plains.NoDak.edu> vender@plains.NoDak.edu (Does it matter?) writes:
>When I asked whether grounding an AI in a UNIX environment would
>  result in making it no longer a symbol manipulator, but a real
>  intelligence this is what I meant to ask:
>
>  Assuming that I have developed an intelligence which is capable
>  of learning from action/reaction feedback (i.e. based on which
>  actions have succeeded in the past it selects the action to
>  fullfill a need/desire/impulse) and reasoning, would its
>  inputs be sufficiently attatched to the 'real' world if
>  its inputs were various streams on a computer system?
>
>  The reason I ask this is:  It has been said that a computer cannot
>  be sufficiently grounded in reality because the integrity of its
>  transducers cannot be proven (its inputs could be a simulator or
>  actually connected to the world).
>
>For those who will attempt to answer this:
>  Yes, I realize that an actual AI will be required to learn from
>  sensory feedback resulting from actions it takes.  This is the
>  basis of real learning.
>
>  I also realize that heuristic algorithms are merely symbol manipulators
>  and thus not 'grounded'.  The theoretical device is assumed to be
>  a neural network or other adaptive program.
>
>--Brad (who is still confused by the concept of grounding)
>
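The action/reaction mechanism Brad describes — pick whichever action has succeeded most often in the past when trying to satisfy a need — can be sketched in a few lines. (This is my own illustration; the class and method names are hypothetical and nothing here comes from Brad's actual program.)

```python
import random

class FeedbackAgent:
    """Toy sketch of action selection from action/reaction feedback.

    Tracks, for each available action, how often it has succeeded,
    and prefers the action with the best observed success rate.
    """

    def __init__(self, actions):
        # action -> [successes, trials]
        self.stats = {a: [0, 0] for a in actions}

    def choose(self):
        # Try any untried action first, then exploit past success rates.
        untried = [a for a, (s, n) in self.stats.items() if n == 0]
        if untried:
            return random.choice(untried)
        return max(self.stats, key=lambda a: self.stats[a][0] / self.stats[a][1])

    def feedback(self, action, succeeded):
        # The environment reports whether the action fulfilled the need.
        s, n = self.stats[action]
        self.stats[action] = [s + int(succeeded), n + 1]
```

Whether the feedback comes from physical transducers or from streams on a UNIX system, the selection loop itself is the same — which is exactly why the grounding question turns on the inputs, not the algorithm.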


Whatever 'world' the system grows up in, it will learn as much as it can from
that world.  So an intelligence that was exposed to stacks, file headers,
input commands and human text files could in principle learn and consciously
know about its experiences.

However, such a system would not have access to the finer details of stubbing
its toe, the physical beauty of the human form, or a geometric model of walking,
to name a few.

It would only know that text had grammar, and whatever facts are
completely entailed within its observable structure.

It would not have as detailed a picture as we do.

It would be less grounded.

Tim


