From newshub.ccs.yorku.ca!torn!cs.utexas.edu!usc!rpi!usenet.coe.montana.edu!news.u.washington.edu!plains!plains.NoDak.edu!vender Wed Sep 23 16:54:40 EDT 1992
Article 7000 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!usc!rpi!usenet.coe.montana.edu!news.u.washington.edu!plains!plains.NoDak.edu!vender
From: vender@plains.NoDak.edu (Does it matter?)
Newsgroups: comp.ai.philosophy
Subject: Grounding
Summary: Explaining my question
Message-ID: <20522@plains.NoDak.edu>
Date: 21 Sep 92 02:19:25 GMT
Sender: Unknown@plains.NoDak.edu
Organization: North Dakota Higher Education Computing Network
Lines: 27
Nntp-Posting-Host: plains.nodak.edu

When I asked whether grounding an AI in a UNIX environment would
  make it no longer a mere symbol manipulator but a real
  intelligence, this is what I meant to ask:

  Assuming that I have developed an intelligence which is capable
  of learning from action/reaction feedback (i.e. based on which
  actions have succeeded in the past, it selects the action to
  fulfill a need/desire/impulse) and of reasoning, would its
  inputs be sufficiently attached to the 'real' world if
  those inputs were various streams on a computer system?
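  The action-selection scheme described above -- pick the action with
  the best past success record -- can be sketched roughly as follows
  (a minimal illustration only; the names Agent, act, and feedback,
  and the simulated success rates, are my own assumptions, not part
  of the hypothetical device described above):

```python
import random

random.seed(0)  # make the run reproducible

class Agent:
    """Tallies which actions have succeeded and prefers the best one."""
    def __init__(self, actions):
        self.successes = {a: 0 for a in actions}
        self.attempts = {a: 0 for a in actions}

    def act(self):
        # Try each action once first; then exploit past success rates.
        untried = [a for a, n in self.attempts.items() if n == 0]
        if untried:
            return random.choice(untried)
        return max(self.attempts,
                   key=lambda a: self.successes[a] / self.attempts[a])

    def feedback(self, action, succeeded):
        # Action/reaction feedback: record whether the action worked.
        self.attempts[action] += 1
        if succeeded:
            self.successes[action] += 1

# In the scenario above the "world" would be UNIX streams; here we
# simulate it with a fixed success probability per action.
world = {"read_stream": 0.8, "write_stream": 0.3, "wait": 0.1}
agent = Agent(list(world))
for _ in range(100):
    a = agent.act()
    agent.feedback(a, random.random() < world[a])
```

  Whether such a loop counts as "grounded" when its feedback comes
  through file descriptors rather than physical transducers is, of
  course, exactly the question being asked.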

  The reason I ask this is:  It has been said that a computer cannot
  be sufficiently grounded in reality because the integrity of its
  transducers cannot be proven (its inputs could come from a simulator
  or be actually connected to the world).

For those who will attempt to answer this:
  Yes, I realize that an actual AI will be required to learn from
  sensory feedback resulting from actions it takes.  This is the
  basis of real learning.

  I also realize that heuristic algorithms are merely symbol
  manipulators and thus not 'grounded'.  The theoretical device is
  assumed to be a neural network or some other adaptive program.

--Brad (who is still confused by the concept of grounding)
