From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!zaphod.mps.ohio-state.edu!caen!kuhub.cc.ukans.edu!spssig.spss.com!markrose Wed Sep 16 21:23:46 EDT 1992
Article 6941 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!zaphod.mps.ohio-state.edu!caen!kuhub.cc.ukans.edu!spssig.spss.com!markrose
Newsgroups: comp.ai.philosophy
Subject: Grounding
Message-ID: <1992Sep16.203451.5162@spss.com>
From: markrose@spss.com (Mark Rosenfelder)
Date: Wed, 16 Sep 1992 20:34:51 GMT
Sender: news@spss.com (Net News Admin)
References: <18eh2uINNt6v@agate.berkeley.edu> <BuDr7y.1LA@usenet.ucs.indiana.edu> <20390@plains.NoDak.edu>
Organization: SPSS Inc.
Lines: 14

In article <20390@plains.NoDak.edu> vender@plains.NoDak.edu (Does it matter?) writes:
>  In an earlier thread, it was said that a computer based AI could
>  not be conscious because its inputs lacked grounding in the real
>  world.  The question is, what if we grounded it in a computer
>  system (say a UNIX system on the Internet).  Granted it may
>  be an incomprehensible intelligence, but would it qualify
>  as having its inputs solidly grounded in its environment
>  (and thus avoid that argument)?

What folks who talk about "grounding in the real world" mean, I believe, is
that concepts acquire their meaning through an immense history of
direct physical interaction with the real world.  This would not be
the case for an AI (merely) running under Unix and/or connected to the
Internet, so no, such a system wouldn't be grounded.
