From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Feb 11 15:25:52 EST 1992
Article 3596 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Multiple Personality Disorder and Strong AI
Message-ID: <1992Feb8.202519.13187@psych.toronto.edu>
Keywords: consciousness,functionalism
Organization: Department of Psychology, University of Toronto
References: <1992Feb4.035646.11687@cs.yale.edu> <1992Feb4.214433.9121@psych.toronto.edu> <1992Feb7.162533.4653@cs.yale.edu>
Date: Sat, 8 Feb 1992 20:25:19 GMT

In article <1992Feb7.162533.4653@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
>
>  In article <1992Feb4.214433.9121@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>  >In article <1992Feb4.035646.11687@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
>  >
>  >>To get back to the puzzle: Consciousness is not a mass phenomenon.  If
>  >>the whole network maintains a model of itself as conscious, it is
>  >>conscious.
>  >
>  >Um...so what's lookin' at the model?
>
>No one's looking at the model and "understanding" what it says.
>(Please don't picture a homunculus following instructions on what
>quale to experience next.)  It's like asking, What's looking at the
>information from a frog's eye?  The information is transmitted from
>one module to another.  In general, inferences based on a model of X
>cause a system to behave appropriately with respect to X; inferences
>based on a model of the system itself cause it to behave appropriately
>with respect to itself.

But this is *not* all there is to consciousness.  You are ignoring
the phenomenal aspect, which isn't captured at all...

[stuff deleted]


>Please: The key idea is not that the system has a model of itself, but
>that it has a model of itself *as conscious.*  A PC might have a model
>of the furniture in its environment, in which it models itself as a
>piece of furniture.  It wouldn't be conscious on that account.  Note
>also that there is a subtle shift in the meaning of "self" in the
>middle of the sentence "the system has a model of itself as
>conscious."  What we actually want to say is that
>
>    The system P, as an information-processing system, has a model in
>which there is a symbol "S".
>    In the model, S has the attributes of consciousness (qualia, free
>will, etc.)
>    The object in the world that the symbol "S" tracks most closely is
>P.
>    (So the model is a model of P, owned by P.)

What on earth does it mean to say "In the model, S has the *attributes
of consciousness*"?!  This statement tells us *nothing* about *why*
those attributes should produce their effects while the attributes of being
a chair don't.  This position merely *assumes* that which we are trying
to *explain*.  The Emperor is *still* naked...
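
Just to make the worry concrete, here is a toy sketch (Python; the class
and attribute names are my own invention, purely for illustration) of a
system that satisfies all three conditions above: it owns a model
containing a symbol "S", the symbol carries the "attributes of
consciousness", and the object that "S" tracks most closely is the
system itself.

    # Hypothetical toy; nothing here is McDermott's actual proposal,
    # just the three listed conditions written down literally.
    class SelfModelingSystem:
        def __init__(self):
            # The model: a symbol "S" tagged with "attributes of
            # consciousness" (qualia, free will, etc.).
            self.model = {"S": {"has_qualia": True, "has_free_will": True}}
            # The object the symbol "S" tracks most closely is P itself.
            self.referent_of = {"S": self}

        def models_itself_as_conscious(self):
            s = self.model["S"]
            return (self.referent_of["S"] is self
                    and s["has_qualia"] and s["has_free_will"])

    p = SelfModelingSystem()
    print(p.models_itself_as_conscious())   # prints True

All three conditions are met, yet nothing about running this program
tells us why it should *feel* like anything to be the system, which is
exactly what was supposed to be explained.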
 

- michael




