From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!cam Tue Jun  9 10:06:03 EDT 1992
Article 6011 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!cam
From: cam@castle.ed.ac.uk (Chris Malcolm)
Newsgroups: comp.ai.philosophy
Subject: Re: lights on, nobody home
Message-ID: <22134@castle.ed.ac.uk>
Date: 1 Jun 92 16:38:08 GMT
References: <5245@dsacg3.dsac.dla.mil>
Organization: Edinburgh University
Lines: 37

In article <5245@dsacg3.dsac.dla.mil> nba1836@dsacg3.dsac.dla.mil (Ken Burch) writes:

>But how would our approach change if we believed that even in our human
>selves there was really nobody home, that our innermost self -- the ego,
>or soul, or "I" that says "I have a mind" -- was actually just a practical
>illusion supported by habitual wrong thinking and bad semantics.

It could be a lot stronger than this. As Humphrey has suggested (but
not in these terms), we need to be able to use each other: negotiate,
placate, threaten, predict, etc. Unfortunately, someone lost the user
manual for homo sapiens. No problem, however, since, like the
Macintosh, homo sapiens comes equipped with a simple user metaphor
which any fool can understand. In the case of the Mac it's the desktop
metaphor. In the case of homo sap it's the conscious mind, a neat user
illusion that facilitates our understanding of how to use other people
(e.g. Dennett's intentional stance); and which, through internalised
language games, we also use as a way of understanding and using
ourselves (e.g. accomplishing long-term goals like farming). In other
words, the user illusion of the conscious self is such a useful
fiction that we have gone to some lengths to "program" ourselves to
behave in accordance with it. It's not easy, and some people can't
quite manage it, despite the fact that those who fail suffer the
horrible fate of psychotherapy :-)

>Now if there is some soul, or ghost in the machine, seated inside our
>minds, if there really is someone home, we are going to have to be able
>to prove/disprove that fact before we can seriously expect to prove/disprove
>whether there is someone home in our AI entities, of course.

Just imagine introducing a bunch of medieval philosophers to the
delights of word processing on the Mac, and then trying to convince
them there wasn't "really" a little wastepaper basket (trash can)
inside the computer! How on earth could you prove it to them?
-- 
Chris Malcolm    cam@uk.ac.ed.aifh          +44 (0)31 650 3085
Department of Artificial Intelligence,    Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205