From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!umn.edu!spool.mu.edu!agate!twinkies.berkeley.edu!epfaith Mon May 25 14:07:04 EDT 1992
Article 5835 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!umn.edu!spool.mu.edu!agate!twinkies.berkeley.edu!epfaith
From: epfaith@twinkies.berkeley.edu (Edward Paul Faith)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Real vs. Virtual (formerly "on meaning")
Keywords: symbol, analog, Turing Test, robotics
Message-ID: <vhogcINNfit@agate.berkeley.edu>
Date: 22 May 92 03:06:52 GMT
Article-I.D.: agate.vhogcINNfit
References: <1992May20.221931.20652@news.media.mit.edu> <veq4jINN46u@agate.berkeley.edu> <zlsiida.293@fs1.mcc.ac.uk>
Organization: U.C. Berkeley Math. Department.
Lines: 54
NNTP-Posting-Host: twinkies.berkeley.edu

In article <zlsiida.293@fs1.mcc.ac.uk>, zlsiida@fs1.mcc.ac.uk (dave budd)
writes:

>In article <veq4jINN46u@agate.berkeley.edu> epfaith@purina.berkeley.edu (Edward
>Paul Faith) writes:

>>Here is a problem which has bothered me for a
>>long time:

>>Suppose we succeed in running a truly
>>conscious program on a computer made up of
>>two computers communicating to each other as
>>the right and left lobes of the brain do.  As we
>>run the program, we record the messages passed
>>back and forth from the left computer to the
>>right computer.  Later we reset the computers to
>>the initial conditions, but this time we only turn
>>on the left computer, and play back the signals
>>that we recorded earlier that the right computer
>>had sent to the left computer.  We could do this
>>if the implementation were perfectly digital,
>>since then we could anticipate completely the
>>behavior of the left computer in response to the
>>prerecorded signals.

>>My question is, would there be consciousness?
>>Would there be a sort of half-consciousness?  If
>>the thought experiment is flawed I invite anyone
>>to improve it.

>Your experimental setup implies that a whole brain is required for
>consciousness.  This is not implied by any 'real world' data that I know of.
>And I'm pretty sure there are medical cases around in which consciousness
>carries on fine with various parts of the brain missing, including the
>separation of the lobes.  Unfortunately the only book I have handy is The
>Man Who Mistook His Wife For A Hat, which has lots of wacky info on various
>brain conditions but isn't very useful as a textbook.


My problem does not concern whether partial brains are capable of
being conscious.  It concerns what happens when a partial brain is
not only activated by itself, but is fooled into thinking that it is
still part of the whole brain.  I am attempting not to see how small
a system can contain a viable consciousness, but to chop the
consciousness that occurs in brains into ever smaller pieces.  I
contend for the moment that there is indeed consciousness in the left
computer, whether or not it is connected to the right computer.  It
follows that the human brain contains not one consciousness but
three: the central, the left, and the right, with the left and the
right each forming part of the central.  In this way we can keep
dividing consciousness until we reach the bottom-most level.
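The record-and-replay step the experiment rests on is easy to sketch
for deterministic machines.  A minimal sketch (the toy `Half` machines
and their update rule below are my own invented stand-ins, not a model
of anything neural): run both halves, tape the right-to-left messages,
reset the left half, and feed it the tape.

```python
# Record-and-replay sketch: if both halves are deterministic, the left
# half cannot distinguish a live right half from a recording of one.

class Half:
    """A toy deterministic machine: output depends only on state + input."""
    def __init__(self, seed):
        self.state = seed

    def step(self, incoming):
        # Arbitrary deterministic update rule, purely for illustration.
        self.state = (self.state * 31 + incoming) % 1000
        return self.state

def run_pair(steps):
    """Run left and right together, taping the right->left messages."""
    left, right = Half(1), Half(2)
    left_out, recording, left_trace = 0, [], []
    for _ in range(steps):
        right_msg = right.step(left_out)   # right reacts to left's last output
        recording.append(right_msg)        # the tape of right->left signals
        left_out = left.step(right_msg)
        left_trace.append(left_out)
    return recording, left_trace

def replay_left(recording):
    """Reset the left half to initial conditions and play back the tape."""
    left = Half(1)
    return [left.step(msg) for msg in recording]

tape, live_trace = run_pair(10)
assert replay_left(tape) == live_trace     # left's behavior is identical
```

On a perfectly digital implementation the replayed left half is
step-for-step indistinguishable from the live run, which is exactly
the premise the question turns on.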

I should hope that no one will accept this conclusion.


