From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!sun-barr!olivea!uunet!seismo!lll-winken!iggy.GW.Vitalink.COM!widener!dsinc!bagate!asi!disc.dla.mil!dsacg3.dsac.dla.mil!nba1836 Sun May 31 19:04:53 EDT 1992
Article 5987 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!sun-barr!olivea!uunet!seismo!lll-winken!iggy.GW.Vitalink.COM!widener!dsinc!bagate!asi!disc.dla.mil!dsacg3.dsac.dla.mil!nba1836
From: nba1836@dsacg3.dsac.dla.mil (Ken Burch)
Newsgroups: comp.ai.philosophy
Subject: lights on, nobody home
Message-ID: <5245@dsacg3.dsac.dla.mil>
Date: 28 May 92 21:41:10 GMT
Organization: yes
Lines: 50


In article <1992May27.043004.28647@Princeton.EDU> Stevan Harnad writes:

> Where is the big step taken? Intellectually, it would be in designing
> the purely symbolic oracle (if that could be done); the step from there
> to building a real robot along the lines specified by the oracle would
> be a much less profound one -- intellectually. But if the question is
> about the properties of the computational oracle vs the real robot, the
> difference would be like night and day (as in the case of the
> computational and real solar system), with nobody home in the
> virtual-TTT (hence actually just TT) computational model and somebody
> home in the real TTT robot.

So in the pursuit of a real robot, or an entity that can pass the TTT 
(or TT, for that matter), we are looking to see if "somebody is home".
If a thing is conscious, there must be somebody "inside" who is being
conscious, experiencing the sensation of consciousness.  We want to be
convinced that someone is home in our AI creations, as we certainly seem
convinced that someone is "home" in our own personal, individual cases.

But how would our approach change if we believed that even in our human
selves there was really nobody home -- that our innermost self, the ego,
or soul, or "I" that says "I have a mind," was actually just a practical
illusion supported by habitual wrong thinking and bad semantics?
Assume for now that most people felt this way.  The purpose of the
TT and TTT might then be to identify AI entities that _feel_ as if they 
are something that possesses consciousness, that someone is home, the
same way that we _feel_ as if someone is home in our own mind/body.
What we might then be content to determine is whether the AI entity
experiences a sensation of selfhood similar to the one we ourselves
experience -- in other words, whether the inner workings of the AI entity
produce the same vacuous sense of ego that our own brains produce.  This
might take some of the emotional edge off the endless debates over
whether someone is home or not, since we wouldn't think that someone is
really home even in our own human case, and our sacred cow of an ego
wouldn't be so sacred.

Now if there is some soul, or ghost in the machine, seated inside our
minds, if there really is someone home, we are going to have to be able
to prove/disprove that fact before we can seriously expect to prove/disprove
whether there is someone home in our AI entities, of course.  In the
meantime, it seems interesting enough to pursue developing an AI entity
that just _feels_ as if it has an individual ego, or feels anything at
all, for that matter.
As it is, I'm afraid that if we don't create an AI entity that displays
all of our own popular delusions and neuroses, we won't recognize it as
being truly conscious, and we might be overlooking something.

Just my $0.02...


Ken