Article 3357 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!cs.utexas.edu!swrinde!elroy.jpl.nasa.gov!ames!agate!ocf.berkeley.edu!jvsichi
From: jvsichi@ocf.berkeley.edu (John Sichi)
Newsgroups: comp.ai.philosophy
Subject: Multiple Personality Disorder and Strong AI
Summary: A question for strong AI proponents
Message-ID: <kokp5aINNiuu@agate.berkeley.edu>
Date: 1 Feb 92 09:13:14 GMT
Sender: jvsichi@ocf.berkeley.edu
Organization: U.C. Berkeley Open Computing Facility
Lines: 98
NNTP-Posting-Host: tsunami.berkeley.edu


    This may not be quite the right place to post this, especially since
I'm not at all adept at philosophical analysis, which usually leads to
a communications gap and my retreating under a hail of a fortioris,
synthetic propositions, and transcendental proofs.  However, the current
discussions seem to be dealing with the problem I'm having trouble with
(strong AI and consciousness), so hopefully I can make myself
understandable.

    To begin with, I need to attempt an explanation of what the word
"conscious" means in the rest of this message, so I'll give an example
which should prove adequate.  I can type this sentence in four
different modes, all having the same result at the same speed, and
presumably all being correlated with similar neural behavior:  in the
first mode, I make myself conscious of each character and the
accompanying finger movement; in the second, I am conscious of each word
as I type it; in the third, I am only conscious of phrases as a whole
coming out; in the last, I am only thinking about the meaning and the
rest is happening unconsciously.  Now, perhaps the state of a part of
my brain (the "B" brain?) under these four modes is drastically
different, and perhaps it isn't; the point is, in each case, I exhibit
the same external behavior, but have a decidedly different
experience in terms of the dividing line between conscious and
unconscious activity, so "consciousness" is at least an interesting
subject.  A robot hand executing the same keystrokes would be unable to
enjoy any of these modes (unless the panpsychists are correct).

    Now, according to a dualist, consciousness is an independent
property of my being as a whole, even though mind and brain are tightly
coupled.  However, a strong AI proponent would say this mode of being I'm
calling consciousness is an emergent property of the activity of my
brain.  A researcher in this camp, perhaps, hopes to be able to come up
with a set of criteria which could be applied to a program, or
artificial neural network, or Turing machine, or some other
attempted implementation/description of a mind, and which could be used
to decide whether the consciousness property were present.  I'm going to
focus on neural networks (with no distinction between artificial and
biological--we'll disregard Searlean "life" mumbo-jumbo) because
    (a) the human brain is one
    (b) strong AI stands or falls on any one of the above-mentioned
        approaches, since there exist transformations between them
    (c) I haven't figured out how to phrase my argument to deal with
        anything but NN's.

    Here's my problem.  Suppose the following, a restatement of the
above, is true.

Proposition:

{    There exists a set of criteria (C$) such that if a neural network
meets C$, then a unique consciousness associated with that network's
activity will exist as long as C$ continues to apply. }

    Presumably, these criteria would deal with the overall organization
and initial state of the network, rather than prescribing exact
connections, weights, thresholds, transfer functions, etc.  I don't
think it matters.
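
    To make the proposition slightly more concrete, here is a toy
sketch in Python (purely illustrative--"meets_criteria" and its
three-node test are placeholders of my own invention, since nobody
has actually written down C$):

    # Toy formalization of the proposition: C$ becomes a predicate on
    # a network's overall organization.  The test below is a stand-in,
    # not a serious candidate for real criteria.
    def meets_criteria(nodes, connections):
        # nodes: set of processing-unit identifiers
        # connections: set of (i, j) pairs between units
        return len(nodes) >= 3              # placeholder condition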

    Consider some neural network, composed of N nodes, which meets C$.
Further, this network is sufficiently robust that if any one of the
neural processing units and all of its connections are removed, C$
continues to apply to the remaining N-1 nodes (perhaps with a slight
change in the associated consciousness).

    Here's the catch:  Even if the complete network is not subjected to
such a lesion, any subnetwork of N-1 nodes meets C$ at the same time as
the entire network does, meaning there should actually be N+1
consciousnesses in existence!  (One emerging from the activity of the
complete network, and one emerging from each of the N partial networks).
Of course, if the network were suitably robust, even more
consciousnesses may coexist (as should be the case with the brain).
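
    The counting itself is easy to mechanize.  Here is a sketch using
the same placeholder predicate as above (the ring network and the
function names are, again, my own inventions): enumerate the whole
network and every subnetwork obtained by deleting up to k units, and
count the ones that still meet C$.

    from itertools import combinations

    def meets_criteria(nodes, connections):
        return len(nodes) >= 3              # same placeholder as above

    def count_consciousnesses(nodes, connections, max_removed=1):
        # Count the (sub)networks meeting C$ when up to max_removed
        # units (and all of their connections) are deleted.
        count = 0
        for k in range(max_removed + 1):
            for removed in combinations(sorted(nodes), k):
                kept = set(nodes) - set(removed)
                conns = {(i, j) for (i, j) in connections
                         if i in kept and j in kept}
                if meets_criteria(kept, conns):
                    count += 1
        return count

    # A 10-unit ring: the whole net plus each of the 10 single-lesion
    # subnets meets the placeholder C$, giving N + 1 = 11.  Deeper
    # robustness makes the count balloon: allowing up to k removals
    # yields C(N,0) + C(N,1) + ... + C(N,k) candidate consciousnesses.
    nodes = set(range(10))
    connections = {(i, (i + 1) % 10) for i in range(10)}
    print(count_consciousnesses(nodes, connections))                  # 11
    print(count_consciousnesses(nodes, connections, max_removed=2))   # 56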

    I find this conclusion absurd.  Admittedly, if all of these other
consciousnesses were around, I would have no way of knowing it (nor
would they be aware of what I like to call "me").  But really...

    Some possible flaws in the reasoning:
   
    * I have a misconception about the strong AI position.

    * My notion of consciousness is hogwash.

    * Consciousnesses can somehow be "superimposed" to form a single
      subject.

    * Consciousness does not depend on the network as a whole, but some
      particular structure within it.  Still, I think this falls to a
      similar axe of absurdity.

    * The left-out node's connections to the remaining N-1 change
      things somehow, inhibiting a separate consciousness.

    * Others?

    Criticism, constructive or destructive, would be appreciated.

John Sichi
jvsichi@ocf.berkeley.edu


