From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!wupost!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!mcdermott-drew Tue Feb 11 15:25:41 EST 1992
Article 3579 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!wupost!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Newsgroups: comp.ai.philosophy
Subject: Re: Multiple Personality Disorder and Strong AI
Summary: Something thinks, therefore there is something
Keywords: consciousness,functionalism
Message-ID: <1992Feb7.162533.4653@cs.yale.edu>
Date: 7 Feb 92 16:25:33 GMT
References: <kokp5aINNiuu@agate.berkeley.edu> <1992Feb4.035646.11687@cs.yale.edu> <1992Feb4.214433.9121@psych.toronto.edu>
Sender: news@cs.yale.edu (Usenet News)
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
Lines: 69
Nntp-Posting-Host: atlantis.ai.cs.yale.edu


  In article <1992Feb4.214433.9121@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
  >In article <1992Feb4.035646.11687@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
  >
  >>To get back to the puzzle: Consciousness is not a mass phenomenon.  If
  >>the whole network maintains a model of itself as conscious, it is
  >>conscious.
  >
  >Um...so what's lookin' at the model?

No one's looking at the model and "understanding" what it says.
(Please don't picture a homunculus following instructions on what
quale to experience next.)  It's like asking, What's looking at the
information from a frog's eye?  The information is transmitted from
one module to another.  In general, inferences based on a model of X
cause a system to behave appropriately with respect to X; inferences
based on a model of the system itself cause it to behave appropriately
with respect to itself.  There's nothing metacosmic here; a snake
doesn't eat its own tail, because it has a model of where the parts of
its own body are and how to behave toward them.  (Just as it might
have a model of the whereabouts of a mouse it is stalking.)  What's
looking at these models in the case of a snake are the neural circuits
that control predation.
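The point about model-driven behavior can be put as a toy program (my own hedged sketch, not anything from the post; all names are invented): a single inference routine consults the world model the same way whether an entry tracks a mouse or the agent's own tail, so nothing extra has to "look at" the self-directed entries.

```python
# Toy sketch: one inference routine, applied uniformly to a world model.
# An entry that happens to track the agent itself is handled by the same
# machinery that handles entries tracking mice -- no homunculus required.

def choose_action(world_model, target):
    """Behave appropriately toward `target` based on the model alone."""
    entry = world_model[target]
    if entry["kind"] == "prey":
        return "strike at " + target
    if entry["kind"] == "self-part":
        return "do not strike at " + target   # the snake spares its own tail
    return "ignore " + target

snake_model = {
    "mouse": {"kind": "prey", "location": (3, 4)},
    "own-tail": {"kind": "self-part", "location": (0, 1)},
}

print(choose_action(snake_model, "mouse"))     # strike at mouse
print(choose_action(snake_model, "own-tail"))  # do not strike at own-tail
```

The design point is that self-directed and other-directed entries differ only in what they track, not in the machinery that reads them.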

  >And how does it *know* it's got one?

It doesn't have to know.  You in particular don't even believe you
have such a model, much less *know* you have one.

  >Honestly, when I see explanations like the above, I want to jump up and
  >down and yell "The Emperor has no clothes!"  Recursion and self-reflection
  >are *not* explanations, Hofstadter to the contrary.  

Please: The key idea is not that the system has a model of itself, but
that it has a model of itself *as conscious.*  A PC might have a model
of the furniture in its environment, in which it models itself as a
piece of furniture.  It wouldn't be conscious on that account.  Note
also that there is a subtle shift in the meaning of "self" in the
middle of the sentence "the system has a model of itself as
conscious."  What we actually want to say is that

    The system P, as an information-processing system, has a model
in which there is a symbol "S".
    In the model, S has the attributes of consciousness (qualia, free
will, etc.).
    The object in the world that the symbol "S" tracks most closely
is P.
    (So the model is a model of P, owned by P.)

As before, you get a choice:

   S doesn't really exist [sorry, René]
   S is really P
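The conditions above can be rendered as a toy data structure (again a hedged sketch of my own; the class and attribute names are invented, not McDermott's): "S" is just an entry in P's model, its referent is whatever it tracks most closely, and that referent happens to be P itself.

```python
# Toy sketch of a system P whose model contains a symbol "S" that
# (1) exists in the model, (2) carries attributes of consciousness,
# and (3) tracks P itself most closely.

class System:
    def __init__(self, name):
        self.name = name
        self.model = {}  # the model: symbols mapped to attribute records

    def add_symbol(self, symbol, attributes, tracks):
        self.model[symbol] = {"attributes": attributes, "tracks": tracks}

    def referent(self, symbol):
        # The object in the world that the symbol tracks most closely.
        return self.model[symbol]["tracks"]

P = System("P")
P.add_symbol("S", attributes=["qualia", "free will"], tracks=P)

# So the model is a model of P, owned by P:
print(P.referent("S") is P)  # True
```

On the second horn of the choice, the sketch comes out as: S, as a distinct object, does not exist anywhere in the program; the only object is P, which the symbol tracks.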

(By the way, Minsky's paper "Matter, Mind, and Models" from the
sixties anticipates most of this theory.  It's not for nothing that
Dennett cites Minsky so much in his recent book.)

Note that Hofstadteresque recursion and self-reflection need not be
involved, because the model need not mention its own presence.  (In
fact, it won't.)  

However, I acknowledge that the theory does depend on a
"correlationist" theory of reference and meaning.  That is (as we've
hashed out before), it depends on meaning being objectively given by
correlations between model and thing modeled, and not being dependent
on "original intentionality" or the like.

                                             -- Drew McDermott


