Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!yeshua.marcam.com!charnel.ecst.csuchico.edu!olivea!news.hal.COM!decwrl!amd!netcomsv!netcomsv!netcom.com!vlsi_lib
From: vlsi_lib@netcom.com (Gerard Malecki)
Subject: Computation and consciousness
Message-ID: <vlsi_libCxy8nF.GpA@netcom.com>
Organization: VLSI Libraries Incorporated
Date: Thu, 20 Oct 1994 02:09:15 GMT
Lines: 87

Reply-To: shankar@vlibs.com

While I am highly skeptical of Roger Penrose's explanation for the 
origin of consciousness, I am equally unconvinced by the arguments
put forth by proponents of strong AI regarding consciousness. 
To the latter, consciousness is nothing more than what an AI
program outputs in response to its inputs, which in essence
equates consciousness with passing the Turing test.
(Imagine the ethical implications of this: can a person be legally
tried for murder if he 'kills' a super-smart AI program? Or is it
OK to kill a human as long as you clone him (making a back-up copy)?)
Some hard-headed strong AI proponents even seem to deny that
consciousness exists at all. (I am not sure whether they believe
they are themselves unconscious automatons. If so, I pity them,
since they could expect only the kind of ethical treatment that
society accords non-conscious objects.)
 
For those strong AI proponents who do acknowledge the existence
of consciousness, but believe that computation gives rise to it,
there are two important issues that need to be addressed regarding 
1. the temporal location, and 
2. the multiplicity
of consciousness.
 
Q1. Assuming that the execution of an AI program gives rise to 
    consciousness, how uniquely can this trace of consciousness
    be located in time? This in turn raises the question of what
    the term "execution" means. The execution of a deterministic
    computer program (with I/O) is completely characterized by its
    present state and inputs. Now consider a hypothetical computer that
    has its whole state in main memory, and the inputs in a (finite) input
    buffer. Assuming that a program was conscious during its
    (finite) time of execution on this machine, let us save the state
    and input of the machine for every clock cycle to a hard disk (the
    fact that a humongous disk would be required is beside the point).
    Once the program is done, let us load the saved states from the
    hard disk back into main memory, in effect replaying the original
    execution trace. Note that the only difference between the first
    run and the second is that the state transitions were computed
    actively in the first case and merely read back passively in the
    second.
    Strong AI is however not concerned with the physical agent that
    does the state transition (or the way state transition equations
    are implemented in hardware, which tautologically is a hardware issue). 
    The question is, would the second trace be as conscious as the first
    one? (If yes, wait for the next question. If no, score: strong
    AI: 0, weak AI: 1.) Assuming that the answer is yes, it logically
    follows that consciousness arises out of a physical representation
    of states that flow in positive (real?) time. Time for another gedanken 
    experiment. Instead of saving the execution trace of the program
    on a disk, let us beam it up into space in the form of parallel laser 
    beams, one for each memory and input bit. When a bit is high, the laser
    is turned on for that clock period, otherwise it is turned off.
    Now, in any plane perpendicular to the direction of the laser beams, 
    the initial execution is "replayed", if we now consider excitation 
    of atoms by light to represent logic levels. Because of the finite 
    speed of light and the infinitude of such planes, this leads 
    directly to the conclusion that (1) there are an infinite number 
    of conscious entities, and (2) their conscious streams are
    separated in time by d/c, where d is the physical distance between
    the planes and c is the speed of light.
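
The disk-replay half of this experiment can be sketched in a few
lines of code. Everything below is my own hypothetical illustration,
not part of the argument itself: a toy deterministic machine whose
next state is a pure function of (state, input), a first run that
computes each transition actively, and a replay that merely reads
the saved states back.

```python
# Toy sketch of the Q1 replay experiment (all names hypothetical).
# The "machine" is deterministic: its next state is a pure function
# of (state, input), so saving the per-cycle states captures the
# execution completely.

def step(state, inp):
    # Arbitrary deterministic transition standing in for one clock
    # cycle of the hypothetical computer.
    return (state * 31 + inp) % 2**16

def run(initial, inputs):
    """Active execution: compute every state transition."""
    trace = [initial]
    state = initial
    for inp in inputs:
        state = step(state, inp)
        trace.append(state)
    return trace

def replay(trace):
    """Passive execution: read the saved states back, one per cycle."""
    for state in trace:
        yield state  # no transition is computed here

inputs = [3, 1, 4, 1, 5, 9]
active = run(7, inputs)
passive = list(replay(active))
assert active == passive  # the two traces are bit-identical
```

The point of the sketch is that the two traces are indistinguishable
as sequences of states; they differ only in whether the transitions
were computed or merely read back.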
 
Q2. If two machines execute the same AI program with the same inputs,
    clock synchronized, does that correspond to two individual conscious 
    streams or just one? It cannot be two because of the following argument:
    Electrically connect each node of one computer to the equivalent node
    of the other computer. Since both computers undergo the same trace,
    there are no logical conflicts in doing so. The net effect is that
    of doubling the sizes of the transistors. But strong AI makes no
    assumptions about the representation of states (beer cans are fine too), 
    least of all, transistor sizes. Hence, two consciousnesses = one.
    By induction, it follows that any number of execution traces
    running at the same real time can have at most one conscious
    trace. In fact, by devising a suitable gedanken experiment based
    on the preceding paragraph, it should be possible to remove the
    restriction that the machines be synchronized in real time. Then
    we could prove that ANY AI program, if conscious at all, has a
    multiplicity of one, irrespective of how many agents execute it
    and when. 
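
The wiring argument rests on a simple fact about determinism, which
can be checked with another toy sketch (again, every name below is my
own hypothetical invention): two copies of the same deterministic
machine, clock-synchronized on the same input stream, hold the same
value at every node on every cycle, so tying corresponding nodes
together could never create a logic conflict.

```python
# Toy sketch of the Q2 wiring argument (all names hypothetical).
WIDTH = 16  # state register width of the toy machine

def step(state, inp):
    # Deterministic transition: one clock cycle of the toy machine.
    return (state * 31 + inp) % 2**WIDTH

def node_values(state):
    # The individual "nodes": one bit per state register cell.
    return [(state >> i) & 1 for i in range(WIDTH)]

inputs = [3, 1, 4, 1, 5, 9]
state_a = state_b = 7  # identical initial states

for inp in inputs:
    state_a = step(state_a, inp)
    state_b = step(state_b, inp)
    # Corresponding nodes agree on every cycle, so electrically
    # connecting them would introduce no logical conflict.
    assert node_values(state_a) == node_values(state_b)
```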
 
Therefore, consciousness may reside in the programs themselves rather
than in their execution. I think this argument merits serious consideration.
 
 
Shankar Ramakrishnan
shankar@vlibs.com
 
 

