From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!caen!uflorida!cybernet!news Wed Sep 16 21:23:49 EDT 1992
Article 6946 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!caen!uflorida!cybernet!news
From: justin@cybernet.cse.fau.edu (Justin Davila)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding
Message-ID: <74V8qB1w165w@cybernet.cse.fau.edu>
Date: 16 Sep 92 23:22:29 GMT
References: <1992Sep16.203451.5162@spss.com>
Sender: news@cybernet.cse.fau.edu
Organization: Cybernet BBS, Boca Raton, Florida
Lines: 22

markrose@spss.com (Mark Rosenfelder) writes:

> In article <20390@plains.NoDak.edu> vender@plains.NoDak.edu (Does it matter?) writes:
> >  In an earlier thread, it was said that a computer based AI could
> >  not be conscious because its inputs lacked grounding in the real
> >  world.  The question is, what if we grounded it in a computer
> >  system (say a UNIX system on the Internet).  Granted it may
> >  be an incomprehensible intelligence, but would it qualify
> >  as having its inputs solidly grounded in its environment
> >  (and thus avoid that argument)?
> 
> What folks who talk about "grounding in the real world" mean, I believe, is
> that concepts acquire their meaning by virtue of an immense experience
> of direct physical interaction with the real world.  This would not be
> the case for an AI (merely) running under Unix and/or connected to the
> Internet, so no, such a system wouldn't be grounded.


You're making a lot of dangerous assumptions here, not the least of 
which is your idea of the "real world".  From a philosophical standpoint, 
one could argue that even if there is a "real world" beyond our very 
subjective experiences of it, then our conceptions are


