From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!munnari.oz.au!yarra-glen.aaii.oz.au!dnk Mon May 25 14:07:16 EDT 1992
Article 5853 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!munnari.oz.au!yarra-glen.aaii.oz.au!dnk
From: dnk@yarra-glen.aaii.oz.au (David Kinny)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Real vs. Virtual (formerly "on meaning")
Keywords: symbol, analog, Turing Test, robotics
Message-ID: <1992May22.201134.2928@yarra-glen.aaii.oz.au>
Date: 22 May 92 20:11:34 GMT
References: <veq4jINN46u@agate.berkeley.edu>
Organization: Australian Artificial Intelligence Institute
Lines: 54

epfaith@purina.berkeley.edu (Edward Paul Faith) writes:

>Here is a problem which has bothered me for a
>long time: 

>Suppose we succeed in running a truly
>conscious program on a computer made up of
>two computers communicating to each other as
>the right and left lobes of the brain do.  As we
>run the program, we record the messages passed
>back and forth from the left computer to the
>right computer.  Later we reset the computers to
>the initial conditions, but this time we only turn
>on the left computer, and play back the signals
>that we recorded earlier that the right computer
>had sent to the left computer.  We could do this
>if the implementation were perfectly digital,
>since then we could anticipate completely the
>behavior of the left computer in response to the
>prerecorded signals.

>My question is, would there be consciousness?
>Would there be a sort of half-consciousness?  If
>the thought experiment is flawed I invite anyone
>to improve it.

If the left computer is receiving perceptual input from anything
other than the right computer, then it is extremely unlikely that
we could completely anticipate its behaviour.  The prerecorded
signals being sent to it would rapidly become unsynchronised and
inappropriate.  Assuming some sort of robustness in the system
design, the left computer would probably decide that the right
one was failing, and stop listening/talking to it.

If, on the other hand, we assume that we can replay *all* the
relevant input in a perfectly synchronised manner, and the
left computer behaves perfectly deterministically, then its
behaviour is indistinguishable from the original run, and it is
as conscious as it ever was.  And the right computer is
unconscious, as it is turned off.  Perhaps the tape-recorder is
conscious, in its own rigid way :-)
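The replay argument can be made concrete with a toy model: if the
left computer is a purely deterministic machine, then stepping it
from the same initial state with the same recorded message stream
necessarily reproduces its behaviour.  A minimal Python sketch
(the transition function and message values are invented purely
for illustration):

```python
# Toy illustration: a deterministic "left computer" stepped with
# messages recorded from the "right computer".  All details invented.

def left_step(state, message):
    # Purely deterministic transition: the next state and the output
    # depend only on the current state and the incoming message.
    new_state = (state * 31 + message) % 1009
    output = new_state % 7
    return new_state, output

def run(initial_state, messages):
    state = initial_state
    trace = []
    for m in messages:
        state, out = left_step(state, m)
        trace.append(out)
    return trace

# First run: right computer live, its messages recorded on "tape".
recorded = [3, 1, 4, 1, 5, 9, 2, 6]
live_trace = run(42, recorded)

# Second run: right computer off, the tape-recorder replays the messages.
replay_trace = run(42, recorded)

# The left computer's behaviour is indistinguishable between the runs.
assert live_trace == replay_trace
```

Nothing in the left computer's trace distinguishes the live run
from the replay, which is exactly the premise of the thought
experiment.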

It is worth pausing to reflect that it is almost impossible to
achieve this degree of control over any moderately complex
computer system.  Suppose I wanted to restore a small network of
workstations to the same initial state.  Think about it.
Exactly the same values for internal clocks.  Exactly the same
*physical* layout of files on disk.  Start them up in exactly
the same way as previously, so that the disk drives are rotating
at the former velocities and relative phases.  In practice it would
be impossible.  Differences in any of these things may cause
timing changes that lead to the system behaving differently.
Any computer system sufficiently complex to be "conscious" is going
to exhibit apparent non-determinism, even if we embed it in a
completely controlled virtual reality.
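The timing point can itself be illustrated deterministically: in a
toy model of two components whose actions interleave according to
their local clocks, starting one clock a single tick off changes
the interleaving, and hence the system's entire subsequent
behaviour.  All details below are invented for illustration:

```python
# Toy model: system behaviour depends on the interleaving of two
# components' events, which in turn depends on their clock offsets.

def run_system(clock_a, clock_b, ticks=20):
    state = 0
    for _ in range(ticks):
        # Whichever component's clock is behind acts next.
        if clock_a <= clock_b:
            state = state * 2 + 1   # component A's action
            clock_a += 3            # A fires every 3 time units
        else:
            state = state * 2       # component B's action
            clock_b += 5            # B fires every 5 time units
    return state

# Identical initial clocks: one behaviour.
baseline = run_system(0, 0)

# A one-tick skew in one clock produces a different interleaving,
# and so a different final state -- "apparent non-determinism" from
# the outside, though every step here is deterministic.
skewed = run_system(1, 0)
assert baseline != skewed
```

The model is deterministic throughout; the divergence comes solely
from the initial clock difference, which is the kind of condition
that is practically impossible to control in a real network of
workstations.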


