Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Subject: Re: Multiple Personality Disorder and Strong AI
Message-ID: <1992Feb4.035646.11687@cs.yale.edu>
Summary: The computational theory of consciousness
Keywords: consciousness,functionalism
Sender: news@cs.yale.edu (Usenet News)
Nntp-Posting-Host: aden.ai.cs.yale.edu
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
References: <kokp5aINNiuu@agate.berkeley.edu>
Date: Tue, 4 Feb 1992 03:56:46 GMT
Lines: 59

  In article <kokp5aINNiuu@agate.berkeley.edu> jvsichi@ocf.berkeley.edu (John Sichi) writes:

  >  However, a strong AI proponent would say this mode of being I'm
  >calling consciousness is an emergent property of the activity of my
  >brain.  
    ....
  >    Here's my problem.
  [Puzzle about conscious neural net with N neurons being thought of
   as N conscious networks with N-1 neurons.]
  >
  >    Here's the catch:  Even if the complete network is not subjected to
  >such a lesion, any subnetwork of N-1 nodes meets C$ at the same time as
  >the entire network does, meaning there should actually be N+1
  >consciousnesses in existence!  (One emerging from the activity of the
  >complete network, and one emerging from each of the partial networks).
  >
  >    I find this conclusion absurd.  Admittedly, if all of these other
  >consciousnesses were around, I would have no way of knowing it (nor
  >would they be aware of what I like to call "me").  But really...
  >
  >    Some possible flaws in the reasoning:
  >   
  >    * I have a misconception about the strong AI position.

Bingo.

There is no one strong-AI position.  One of the least plausible
strong-AI positions is that conjured up by the word "emergent."  The
idea seems to be that at some degree of complexity consciousness will
just happen.  We don't need a model of how it happens; the Turing Test
will show us that it did happen, and that's the end of the story.

It is more in the spirit of the computationalist enterprise to believe
that consciousness is the result of a particular computational
structure.  It seems to me that the correct move (if a little devious)
is that espoused by Dennett: to stop looking for actual qualia in the
system, and to look instead for the kind of self-model a system would
have to have in order to believe that it had mental states with qualia.
If you accept this move, then it isn't hard to invent models of this
kind (a toy sketch follows this paragraph).
The hard part is accepting that the quale of vivid green is
essentially an internal fiction.  You gradually come to accept it by
realizing that (a) there's not going to be anything else in our
universe that it could be; (b) the inescapable feeling that my
sensations *really do* have ineffable qualities is due to the fact
that I can't get out of the story my brain is making up.  Indeed, the
entity "I" is just a character in that story.  I have to experience
qualia for the same reason that Winnie the Pooh has to be a bear.
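
Here, for concreteness, is a toy sketch in Python of what a
self-model that ascribes qualia to itself might amount to.  Every
name in it (SelfModel, register_percept, report) is my own
illustrative assumption, not anybody's actual proposal; the point is
only that a system whose introspective access bottoms out in
primitive tokens will sincerely report its states as ineffable.

    # Toy sketch: a system whose self-model represents its own
    # discriminative states as primitive, unanalyzable qualities.
    # All names here are illustrative assumptions.

    class SelfModel:
        def __init__(self):
            # The story the system tells about itself.
            self.beliefs = {}

        def register_percept(self, label):
            # The discrimination arrives as a bare token; the model
            # has no access to the machinery (wavelength comparisons,
            # opponent channels) that produced it, so it records the
            # state as intrinsic and ineffable.
            self.beliefs[label] = "a vivid, intrinsic, unanalyzable quality"

        def report(self, label):
            # Asked what seeing green is like, the system can only
            # consult its own story, and the story says "quale."
            return "My experience of %s is %s." % (label, self.beliefs[label])

    me = SelfModel()
    me.register_percept("vivid green")
    print(me.report("vivid green"))

On this sketch, the ineffability is not a fact about the state but an
artifact of the model's limited access to its own workings, which is
just the "can't get out of the story" point above.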

To get back to the puzzle: Consciousness is not a mass phenomenon.  If
the whole network maintains a model of itself as conscious, it is
conscious.  There is no problem with it having many models of itself.
(One might see some technical difficulties with neural nets
implementing mental models, but presumably they can be overcome!)
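
To make the counting explicit, here is a hedged toy in Python.  The
node sets and the containment test are my own illustrative
assumptions, not a serious neural model; the point is only that on
this view you count subjects by distinct self-models maintained, not
by how many subnetworks happen to contain one.

    # Toy sketch: the N+1 candidates from the puzzle, counted by the
    # self-models they carry rather than by raw subnetworks.  Node
    # sets and the containment test are illustrative assumptions.

    N = 10
    whole_net = frozenset(range(N))
    self_model = frozenset([0, 1, 2])  # nodes realizing the one self-model

    # The whole network plus the N subnetworks of N-1 nodes each.
    candidates = [whole_net] + [whole_net - frozenset([i]) for i in range(N)]

    # A candidate carries the self-model only if it contains all of its
    # circuitry.  Many candidates do, but they all carry the same
    # single model.
    models_carried = set(self_model for c in candidates if self_model <= c)

    print(len(candidates))      # 11 candidates (N+1)
    print(len(models_carried))  # 1 distinct self-model, hence 1 subject

Redundant carriers of one self-model are no more N+1 minds than N+1
printings of a novel are N+1 novels.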

A consequence of this theory, as I've pointed out before, is that
simple creatures are not conscious at all.  Hence we needn't worry
about thermostats, or even simple animals, experiencing anything.

                                             -- Drew McDermott


