From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!uunet!usc!wupost!uwm.edu!ogicse!plains!plains.NoDak.edu!vender Wed Sep 16 21:23:41 EDT 1992
Article 6933 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!uunet!usc!wupost!uwm.edu!ogicse!plains!plains.NoDak.edu!vender
From: vender@plains.NoDak.edu (Does it matter?)
Newsgroups: comp.ai.philosophy
Subject: Re: Consciousness
Summary: Consciousness is a trick
Message-ID: <20390@plains.NoDak.edu>
Date: 16 Sep 92 03:34:01 GMT
Article-I.D.: plains.20390
References: <1992Sep6.010048.1@watt.ccs.tuns.ca> <18eh2uINNt6v@agate.berkeley.edu> <BuDr7y.1LA@usenet.ucs.indiana.edu>
Sender: Unknown@plains.NoDak.edu
Organization: NA
Lines: 16
Nntp-Posting-Host: plains.nodak.edu

As human beings, we have sensory inputs from the real world.  This makes
  us aware of our surroundings.  We are self-aware because we have nerve
  endings throughout our bodies.  Consciousness is merely our model
  for separating ourselves from the rest of existence (presumably to
  make it easier to keep track of spatial data or something).

Now that I've made that statement (which I would like feedback on):
  In an earlier thread, it was said that a computer-based AI could
  not be conscious because its inputs lacked grounding in the real
  world.  The question is: what if we grounded it in a computer
  system (say, a UNIX system on the Internet)?  Granted, it might
  be an incomprehensible intelligence, but would it qualify
  as having its inputs solidly grounded in its environment
  (and thus avoid that argument)?

--Brad
