From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!michael Wed Feb  5 11:55:52 EST 1992
Article 3363 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Multiple Personality Disorder and Strong AI
Message-ID: <1992Feb1.201537.13207@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <kokp5aINNiuu@agate.berkeley.edu>
Date: Sat, 1 Feb 1992 20:15:37 GMT

In article <kokp5aINNiuu@agate.berkeley.edu> jvsichi@ocf.berkeley.edu (John Sichi) writes:

[much deleted for the sake of bandwidth savings]

>    Now, according to a dualist, consciousness is an independent
>property of my being as a whole, even though mind and brain are tightly
>coupled.  However, a strong AI proponent would say this mode of being I'm
>calling consciousness is an emergent property of the activity of my
>brain.  A researcher in this camp, perhaps, hopes to be able to come up
>with a set of criteria which could be applied to a program, or
>artificial neural network, or Turing machine, or some other
>attempted implementation/description of a mind, and which could be used
>to decide whether the consciousness property were present.  I'm going to
>focus on neural networks (with no distinction between artificial and
>biological--we'll disregard Searlean "life" mumbo-jumbo) because
>    (a) the human brain is one
>    (b) strong AI stands or falls on any one of the above mentioned
>        approaches, since there exist transformations between them
>    (c) I haven't figured out how to phrase my argument to deal with
>        anything but NN's.
>
>    Here's my problem.  Suppose the following, a restatement of the
>above, is true.
>
>Proposition:
>
>{    There exists a set of criteria (C$) such that if a neural network
>meets C$, then a unique consciousness associated with that network's
>activity will exist as long as C$ continues to apply. }
>
>    Presumably, these criteria would deal with the overall organization
>and initial state of the network, rather than prescribing exact
>connections, weights, thresholds, transfer functions, etc.  I don't
>think it matters.
>
>    Consider some neural network, composed of N nodes, which meets C$.
>Further, this network is sufficiently robust so that if any one of the
>neural processing units and all of its connections are removed, C$
>continues to apply to the remaining N-1 nodes (perhaps with a slight
>change in the associated consciousness).
>
>    Here's the catch:  Even if the complete network is not subjected to
>such a lesion, any subnetwork of N-1 nodes meets C$ at the same time as
>the entire network does, meaning there should actually be N+1
>consciousnesses in existence!  (One emerging from the activity of the
>complete network, and one emerging from each of the partial networks).
>Of course, if the network were suitably robust, even more
>consciousnesses may coexist (as should be the case with the brain).
>
>    I find this conclusion absurd.  Admittedly, if all of these other
>consciousnesses were around, I would have no way of knowing it (nor
>would they be aware of what I like to call "me").  But really...
>
>    Some possible flaws in the reasoning:
>   
>    * I have a misconception about the strong AI position.
>
>    * My notion of consciousness is hogwash.
>
>    * Consciousnesses can somehow be "superimposed" to form a single
>      subject.
>
>    * Consciousness does not depend on the network as a whole, but some
>      particular structure within it.  Still, I think this falls to a
>      similar axe of absurdity.
>
>    * The connection of the excluded Nth node to the other N-1 changes
>      things somehow, inhibiting a separate consciousness.
>
>    * Others?
>
>    Criticism, constructive or destructive, would be appreciated.
>
>John Sichi

Well, John, this is certainly a novel criticism, and one which, on reflection,
does seem terribly problematic to me.  But then again, I'm biased.  What
do folks in the pro-Strong-AI camp have to say?
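Just to make the counting in John's lesion argument explicit: if we grant his
robustness assumption in a slightly generalized form -- that removing any set of
up to k nodes still leaves a subnetwork satisfying C$ -- then the number of
coexisting consciousnesses his Proposition predicts is a simple binomial sum.
(A quick sketch in Python; the function name and the particular N and k below
are my own illustration, not anything from John's post.)

```python
from math import comb

def consciousness_count(n, k):
    """Count the networks that would satisfy C$ simultaneously, assuming
    an n-node network tolerates the removal of any set of up to k nodes.
    Each choice of j removed nodes (j = 0..k) leaves a distinct
    subnetwork, so the total is the sum of C(n, j) over j = 0..k."""
    return sum(comb(n, j) for j in range(k + 1))

# John's case: robustness to any single lesion (k = 1) already yields
# N + 1 coexisting consciousnesses -- the whole net plus N subnets.
print(consciousness_count(10, 1))  # -> 11

# Tolerating two simultaneous lesions inflates the count much further.
print(consciousness_count(10, 2))  # -> 56
```

So even modest robustness makes the count explode, which is just John's point
that a suitably robust network (like the brain) should host "even more"
consciousnesses -- the absurdity gets worse, not better, as N grows.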

- michael
