Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.edu!news2.near.net!MathWorks.Com!solaris.cc.vt.edu!uunet!tandem!oobii
From: oobii@cpd.Tandem.COM (oobii@cpd.tandem.com(Tim McFadden))
Subject: Re: Strong AI and consciousness
Message-ID: <Cwx6zA.2p4@cpd.tandem.com>
Followup-To: Strong AI
Originator: oobii@runt2
Keywords: AI, distributed, Penrose, Minsky
Sender: news@cpd.tandem.com
Nntp-Posting-Host: runt2
Organization: Tandem Computers, Inc.
Date: Fri, 30 Sep 1994 02:01:57 GMT
Lines: 109
X-Disclaimer: This article is not the opinion of Tandem Computers, Inc.


--------- about 2 pages long     ----------

This post is part of the "strong AI" thread and concerns the algorithmic
computation of consciousness. Many nits and defences may be added if there is interest.

Spose we agree upon the largest single serial processor Y, of some volume V,
which may be described algorithmically.  We agree that there is an
upper limit, C, on the total information channel capacity in and out of Y.
Y must in "real-time" output to N separate, serial, bit-encoded transmission
channels with fixed data rates. These lines will, in a well-defined way,
operate some physical process, P, perhaps to simulate a human intelligence,
run a planet, etc.  In other words, pure "computation" is not the test.
Some sort of ordinary-world interaction, P, is.  If Y cannot run P up to
some agreed-upon standard, then it "fails" the "P-test".  For the process P,
Y must use many sources of input data.

This is the rub: Y must somehow multiplex this data into a serial stream.
Again, the outputs to run P must be demultiplexed from a serial stream.

I argue that the time taken for the mux/demux process puts a fundamental
limit on the effective power of Y to run P. In other words, the
interaction with the process P may become so complex and so high-bandwidth
that Y will fall behind and fail.
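As a minimal sketch of the bottleneck (the channel count N, per-channel
rate, and serial capacity C below are made-up numbers, not anything agreed
upon above): once the aggregate rate of the N channels exceeds C, the
backlog Y must still mux/demux grows without bound, no matter how fast
Y's internal computation is.

    # Hypothetical numbers for illustration only: N channels, each with a
    # fixed data rate, all funneled through one serial link of capacity C.
    N = 1000                      # number of separate serial channels (assumed)
    rate_per_channel = 10_000     # bits/sec each channel demands (assumed)
    C = 8_000_000                 # bits/sec Y's one serial stream can move (assumed)

    aggregate = N * rate_per_channel   # total demand from the process P

    # Simulate the backlog Y accumulates each second while serving P.
    backlog = 0
    for second in range(5):
        backlog += aggregate - C       # bits that arrive but cannot be serialized
        backlog = max(backlog, 0)
        print(f"t={second + 1}s  backlog={backlog:,} bits")

    # If aggregate > C, the backlog grows linearly and Y fails the P-test;
    # no amount of internal processing speed unclogs a saturated serial link.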

I digress to set up the next step of the argument.

Spose that over the next few hundred years we learn enough to describe
the operation of all of the 100-some different neuron types in the human
brain algorithmically.  I assert that this does not mean that we can then
explain the human brain algorithmically, with no further steps. In other
words, knowing the algorithmic behavior of the simple processing elements of
a large, complicated system doesn't mean that you understand the system.
This is just a truism from complexity theory, e.g., cellular automata.
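For instance (my illustration, not part of the original argument): an
elementary cellular automaton such as Wolfram's rule 110 has a local update
rule that fits in a couple of lines, yet its global behavior is rich enough
to be computation-universal. Knowing the rule gives essentially no shortcut
to predicting the pattern.

    # Rule 110: the entire "algorithmic description" of each cell is one
    # 8-entry lookup table, encoded in the bits of the number 110.
    RULE = 110

    def step(cells):
        n = len(cells)
        # Each cell's next state depends only on itself and its two neighbors.
        return [(RULE >> (cells[(i - 1) % n] * 4
                          + cells[i] * 2
                          + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 40 + [1] + [0] * 40   # one live cell in the middle
    for _ in range(20):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)
    # The local rule is trivial; the emergent pattern is not.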

Here is the next step.

If Y fails because of the limitation of mux/demux time, does that imply
that there is a cooperative network of processors, whose total size
(processing power, etc.) is some number K * Y, which can run P successfully
while a single serial processor of size K * Y could not?
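A back-of-envelope reason to suspect the answer is yes (again with made-up
numbers): K cooperating processors each carry their own I/O channels, so
their aggregate capacity scales roughly as K * C, while a single serial
processor of the same total power stays pinned at C by its one mux/demux point.

    # Made-up numbers: compare aggregate I/O of K small cooperating
    # processors against one serial processor of equal total power.
    C = 8_000_000          # bits/sec through one serial mux/demux point (assumed)
    K = 100                # number of cooperating processors (assumed)
    demand = 300_000_000   # bits/sec that the process P requires (assumed)

    serial_capacity = C          # one big processor: still one serial stream
    network_capacity = K * C     # each small processor talks to P directly

    print("serial processor:",
          "passes" if serial_capacity >= demand else "fails", "the P-test")
    print(f"network of {K}:  ",
          "passes" if network_capacity >= demand else "fails", "the P-test")
    # 100 * 8e6 = 8e8 >= 3e8: the network keeps up where one stream cannot.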

Now spose we agree that the process P, when it operates correctly and
up to speed, is "conscious" or exhibits "conscious behavior".
This is independent of what sort of processor is running it.

Hence this argument allows for the case that there may indeed be a
purely algorithmic, single-processor conscious robot. It also
allows for the case that such a robot would not be able to operate
in "real-time", up to the standard set by P. A human brain, however,
by the preceding argument, might be able to.

Thus, it is possible that most useful/real-time conscious systems
must be non-algorithmic, that is, heavily distributed.

In this model, consciousness has no physically strange properties,
a la Penrose. It may be algorithmic or not.

This makes a lot of sense if coupled with Gautama Shakya's 25-hundred-year-old
model of consciousness, that is, a delusion made up of individually
unconscious agents whose interaction produces the delusion of
consciousness. Marvin Minsky, in his "Society of Mind", proposes
a similar model.

So why, after 50-some years of computing, do we have nothing near
a conscious robot?

What does this tell us? No laws of physics are broken, but the
basis for the argument may not end up being classical. The current
"best" way to jam a large number of networked small processors
together is living cells in a brain, which of course evolved
"un-consciously" from the programming of the Big Bang. Nanotechnology may
someday exceed it in sheer density of processors; however, having them
communicate at this density is an unsolved problem.

This leaves room for water/carbon-based life to be living off a sweet spot
which lets the really small cellular parts combine as processors really
well, at high densities. Water/carbon consciousness, delusionary as any
other kind, may be imitated algorithmically, but creating robots
of similar speeds and densities may be very difficult.

Way cool! This is good news. The scientists operating at the burnt-out
end of Western rationalism and crisp logic may not be able to describe
us humans algorithmically and replace us with easier-to-handle robots, or
say "I've got your number - your Goedel number".

Just as only behavioristic psychology could get government funding
for many years, due to a scientific fad and propaganda scheme,
the information/algorithmic point of view is becoming quite
dominant, with the softer alternatives being viewed as unscientific:
"so where's the algorithm? there's got to be an algorithm!"
Just as the stringent propaganda requirements of behaviorism diminished
humans' understanding (and humane treatment) of ourselves, so does the
information/algorithmic model.

It will be really good news for us humans if the information/algorithmic
point of view is tempered and put in context.

As Gautama said in the Heart Sutra,

"Oh, Shariputra, Alokeshishvara doing deep Prajna Paramita looked
down and saw the emptiness of all five skandas.
Form is emptiness - emptiness is form..."

---------------------------------------------------
May the mindflower bloom in eternal spring.
---------------------------------------------------


