Article 3798 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!spool.mu.edu!olivea!mintaka.lcs.mit.edu!yale!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Newsgroups: comp.ai.philosophy
Subject: Re: Multiple Personality Disorder and Strong AI
Keywords: consciousness,functionalism
Message-ID: <1992Feb17.160108.2337@cs.yale.edu>
Date: 17 Feb 92 16:01:08 GMT
Article-I.D.: cs.1992Feb17.160108.2337
References: <1992Feb4.214433.9121@psych.toronto.edu> <1992Feb7.162533.4653@cs.yale.edu> <1992Feb8.202519.13187@psych.toronto.edu>
Sender: news@cs.yale.edu (Usenet News)
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
Lines: 37
Nntp-Posting-Host: aden.ai.cs.yale.edu


   In article <1992Feb8.202519.13187@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
   >In article <1992Feb7.162533.4653@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:

   >>What we actually want to say is that
   >>
   >>    The system P, as an information-processing system, has a model in
   >>which there is a symbol "S".
   >>    In the model, S has the attributes of consciousness (qualia, free
   >>will, etc.)
   >>    The object in the world that the symbol "S" tracks most closely is
   >>P.
   >>    (So the model is a model of P, owned by P.)
   >
   >What on earth does it mean to say "In the model, S has the *attributes
   >of consciousness*"?!  

I just mean that when P sees an object that sets off its color
detectors in a certain way, in the model it says "S just experienced 
something with a red quale";  when P weighs alternative courses of
action and loads the one with the highest expected utility into its
output buffer, in the model it says, "S just decided to do
such-and-such"; and so forth.
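[To make the architecture I have in mind concrete, here is a minimal
sketch, in Python for brevity. It is an illustration of the general
idea only, not a claim about how P would actually be built; the class
and method names, the wavelength threshold, and the utility tables are
all invented for the example.]

```python
# Sketch of a system P that maintains a self-model containing a
# symbol "S".  When P's perceptual or decision machinery runs, the
# model is updated with a statement about S -- that is the sense in
# which, "in the model, S has the attributes of consciousness."

def expected_utility(action, outcomes):
    """Sum of probability-weighted utilities for one action."""
    return sum(p * u for p, u in outcomes[action])

class SystemP:
    def __init__(self):
        self.model = []  # the self-model: statements about "S"

    def see(self, wavelength_nm):
        # Crude "color detector": long wavelengths fire the red channel.
        if 620 <= wavelength_nm <= 750:
            self.model.append(
                'S just experienced something with a red quale')

    def decide(self, outcomes):
        # Weigh alternative courses of action and load the one with
        # the highest expected utility into the output buffer.
        best = max(outcomes, key=lambda a: expected_utility(a, outcomes))
        self.model.append('S just decided to do ' + best)
        return best

p = SystemP()
p.see(700)                                    # fires the red detector
choice = p.decide({'wait': [(1.0, 0.0)],
                   'act':  [(0.5, 2.0), (0.5, 1.0)]})
# p.model now reads as a running account of what "S" perceived and chose.
```

Note that "S" never appears anywhere except inside `p.model`; the
object in the world that the symbol tracks most closely is `p` itself,
which is the point of the original formulation.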

   >This statement tells us *nothing* about *why*
   >those attributes produce their effects, and the attributes of being
   >a chair don't.  

You misunderstand the position.  Having a model like this does not
"produce" phenomenal consciousness; it *is* phenomenal consciousness.
Chairs don't have such self-models.

   >The Emperor is *still* naked...

Transparent, maybe, but not naked....

                                             -- Drew McDermott


