From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Jun  9 10:07:13 EDT 1992
Article 6101 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: lights on, nobody home
Organization: Department of Psychology, University of Toronto
References: <5245@dsacg3.dsac.dla.mil> <22134@castle.ed.ac.uk>
Message-ID: <BpDq6t.Lrp@psych.toronto.edu>
Date: Fri, 5 Jun 1992 15:40:03 GMT

In article <22134@castle.ed.ac.uk> cam@castle.ed.ac.uk (Chris Malcolm) writes:

> As Humphrey has suggested (but
>not in these terms) we need to be able to use each other, negotiate,
>placate, threaten, predict, etc.. Unfortunately, someone lost the user
>manual for homo sapiens. No problem, however, since, like the
>Macintosh, homo sapiens comes equipped with a simple user metaphor
>which any fool can understand. In the case of the Mac it's the desktop
>metaphor. In the case of homo sap it's the conscious mind, a neat user
>illusion that facilitates our understanding of how to use other people
 ^^^^^^^^
What "perceives" this illusion if not a "self"?  You can't use terms that
require a subjective agent to claim that subjective agency doesn't
really exist. 

>(e.g Dennett's intentional stance); and which through internalised
>language games we also use as a way of understanding and using
>ourselves (e.g. accomplishing long term goals like farming). In other
>words, the user illusion of the conscious self is such a useful
>fiction that we have gone to some lengths to "program" ourselves to
>behave in accordance with it. It's not easy, and some people can't
>quite manage it, despite the fact that those who fail suffer the
>horrible fate of psychotherapy :-)

As an exercise, try recasting the above, eliminating the terms that
refer to conscious, subjective agents or states (e.g., "we", "understand",
"our").  I don't see how it can be done.

If the claim that you and others who take this tack are making is that
minds are not simple disembodiable substances, but are created by
the combination of small processes, then sure, I'll buy that.  But to
say that the resulting "self" is somehow an illusion seems to me to
be self-refuting.  Illusions don't exist without a perceiver.

- michael

>
>>Now if there is some soul, or ghost in the machine, seated inside our
>>minds, if there really is someone home, we are going to have to be able
>>to prove/disprove that fact before we can seriously expect to prove/disprove
>>whether there is someone home in our AI entities, of course.
>
>Just imagine introducing a bunch of medieval philosophers to the
>delights of word processing on the Mac, and then trying to convince
>them there wasn't "really" a little wastepaper basket (trash can)
>inside the computer! How on earth could you prove it to them?
>-- 
>Chris Malcolm    cam@uk.ac.ed.aifh          +44 (0)31 650 3085
>Department of Artificial Intelligence,    Edinburgh University
>5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205
