From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Feb 11 15:25:47 EST 1992
Article 3588 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Multiple Personality Disorder and Strong AI
Message-ID: <1992Feb7.224324.7502@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Feb6.132111.9087@oracorp.com>
Date: Fri, 7 Feb 1992 22:43:24 GMT

In article <1992Feb6.132111.9087@oracorp.com> daryl@oracorp.com writes:
>Michael Gemar writes: (in response to Drew McDermott)
>
>>> To get back to the puzzle: Consciousness is not a mass phenomenon.  If
>>> the whole network maintains a model of itself as conscious, it is
>>> conscious.
>
>> Um...so what's lookin' at the model?
>
>I took Drew's statement as meaning the network.
>
>> And how does it *know* it's got one?
>
>Do you mean "How does the network know that the model is a model of
>itself?" It doesn't figure it out, any more than you figure out that
>you are conscious; it is constructed so that it knows certain
>primitive facts.

So, how does it *know* it knows these facts?  Is it *conscious* of
these facts? 

>> Honestly, when I see explanations like the above, I want to jump up
>> and down and yell "The Emperor has no clothes!"  Recursion and
>> self-reflection are *not* explanations, Hofstadter to the contrary.
>
>Hofstadter didn't invent the idea that self-reflection is an important
>part of consciousness; it is quite a common belief that the lack of
>self-reflection prevents computers (and perhaps some animals) from
>being conscious. I take it that you don't think it is important?

I don't think it's an explanation.  And I think most animals have
experiences/qualia, which is all I mean by consciousness.


>Anyway, Hofstadter was simply showing that, contrary to what you may
>think at first, it *is* possible, using recursion, to get
>self-reflection in a computer program. It may seem that to get
>self-reflection, a computer program would need a copy of itself
>(which would itself require a copy of a copy, and so on ad infinitum).
>Recursion allows a way out of the infinite regress; the seemingly
>infinite levels of copies within copies, etc. can be encoded in a
>finite algorithm.
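To be fair to the recursion point, the finite encoding daryl describes can be made concrete. Here is my own toy sketch (not daryl's or Hofstadter's code) of a "quine" -- a program whose complete text is encoded in one finite string plus a formatting rule, with no copy-of-a-copy regress:

```python
# The two lines below form a "quine": run them and they print
# themselves, verbatim.  The program's complete text is recovered
# from one finite string plus a formatting rule -- no infinite
# tower of copies is required.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The self-"model" stays finite because %r re-quotes the string rather than nesting another literal inside it. Whether that kind of self-encoding has anything to do with consciousness is, of course, the question at issue.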

Self-reflection ain't consciousness.  If it were, two mirrors properly
aligned would be conscious... 

- michael
