Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!kovsky
From: kovsky@netcom.com (Bob Kovsky)
Subject: Re: Is the mind/brain deterministic?
Message-ID: <kovskyCzBLv2.IsI@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <kovskyCz0B4G.Aqr@netcom.com> <HPM.94Nov12041452@cart.frc.ri.cmu.edu> <kovskyCz7q87.I4q@netcom.com> <HPM.94Nov13161842@cart.frc.ri.cmu.edu>
Date: Tue, 15 Nov 1994 17:57:02 GMT
Lines: 211

In article <HPM.94Nov13161842@cart.frc.ri.cmu.edu>,
Hans Moravec <hpm@cs.cmu.edu> wrote:
>
>Thanks to Bob Kovsky for his interesting response and bio.
>
>I agree we are unlikely to formalize the vast majority of human
>thought by hand, and that engineering tinkering, massive automatic
>learning and just plain chance and natural selection are going to be
>essential in building practical AIs.  But this criticism is mostly of
>the straw man of the most simple-minded expert systems. Robotics is a
>poor target for the criticism, since, except for the early
>blocks-world programs in the late 60s and very early 70s, it has used
>all the latter approaches.
>
>And, I must defend my robots against flies.
>
>>a fly's navigational and motion-control systems enable it to execute complex
>>activity both on flat surfaces and in the air.  What a major struggle
>>it is to kill one!  Its brain, operating at a speed equivalent to a
>>thousand instructions per second ...
>
>WRONG! As I explained in an earlier message, my own estimates, based
>on a conversion of neural power to computation, extrapolating from a
>comparison of retinal edge and motion detectors with similar robot
>vision operators, put the fly's computer power somewhere around 100
>MIPS.  Other people, demanding higher fidelity, get higher numbers.
>Existing computers may look powerful when compared to humans'
>ridiculously inefficient abilities to do arithmetic, search data or
>follow long reasoning chains, but they have only begun to creep past
>the fly nervous system stage in overall power.  And you can bet that
>evolution wouldn't allow flies to waste any of their 100 MIPS.
>
>> or so, and of a minute size, also controls bodily functions and
>> exercises control over digestive and reproductive functions.  That you
>> and your associates need to invest such enormous resources in order to
>> accomplish something that does not even perform as well suggests that
>> there is something missing in your approach.
>
>No, it suggests that your calibration between "puny" fly brains and
>"enormous" computers is way off.  In fact, our barely fly-brain robots
>do things that flies can't, such as maintain a pretty complete 3D
>representation of their surroundings, so they can tell exactly where
>they are in a room.  Of course, we expect we will squeeze more
>performance out of our programs when we go commercial, and have
>incentive to optimize, optimize.  Flies have been optimized for
>billions of generations, so they are about as good as possible at what
>they have to do.  But they can be fooled pretty easily out of their
>normal operating range.  Ever notice how stupid flies are about fly
>paper?  Or moths are about lamp lights?  Our current best robots are
>just about as smart and stupid as insects, just in different ways.
>
>Insects still have the edge in miniaturization, but they've reached
>their optimum, for all practical purposes, while robot brain density
>is doubling every year or two.  They'll be leaving insect power behind
>during the next decade.
>
>> 	My sarcastic remark about "world-shaking breakthroughs"
>> stands.  ...there is very little evidence that the model of universal
>> mechanical computation has any validity.  And very little return on
>> the investment.
>
>Universal computation has only created the largest, most
>world-changing industry ever on earth.  Ask IBM and AT&T or Apple and
>Microsoft if they're getting any ROI.  SHEESH!
>
>I also (again) reject as silly Bob's arbitrary exclusion of unstable
>fluids and other high gain systems from the parts bin from which one
>can make machines. The edge of chaos, between boiling randomness and
>frozen immobility is a useful concept, but Bob is trying to make it
>into some kind of religious principle.  Most high-gain systems, for
>instance, a balancing broomstick, can be interpreted as operating
>there.  An especially clear example was in the old days of radio, when
>"super-regenerative" receivers recycled the output signal from an
>amplifying tube to get more gain.  Turn up the "regeneration" too
>high, and you got howling signal-blanketing feedback.  Too low, and you
>could hardly hear anything.  Just right, at the edge of chaos, you
>heard your station nice and loud.  By Bob's definition a
>super-regenerative receiver (and almost any other machine that uses
>amplification, which, deep down, usually depends on some kind of
>chaotic process that allows small inputs to become large effects)
>would be disqualified as a machine.  No thanks.
>
>		-- Hans Moravec   CMU Robotics

	Prof. Moravec and I have been debating the primary topic of this 
thread: my criticism of current AI approaches that ignore biological function, 
and my suggestion that artificial neuronal networks on the "edge of chaos" may 
be a more fruitful approach. 

	Prof. Moravec has been building "a mobile robot control program with 
enough spatial competence to reliably execute tasks like delivery and floor 
care in normal areas, with no special navigational cues..."  He reports a 
series of problems, but believes that "100 to 1000 MIPS of computer power will 
suffice to safely guide a walking-speed robot..."

	My response is that a housefly seems to do quite as well with much 
less brainpower.  (Of course, the fly "fails" on window glass.  Perhaps a 
metaphor for the practitioners of AI -- see below.)

	Prof. Moravec responds:  "WRONG! As I explained in an earlier message, 
my own estimates, based on a conversion of neural power to computation, 
extrapolating from a comparison of retinal edge and motion detectors with 
similar robot vision operators, put the fly's computer power somewhere 
around 100 MIPS.  Other people, demanding higher fidelity, get higher numbers."

	Reply:  If your estimates are based on the actual operations of the 
fly's brain, you are far in advance of any known neurologist or brain 
theorist.  But your estimates are "based on a conversion of neural power to 
computation," and on the hypothetical belief that such a conversion is 
possible.  Even a regular polygon with 100 million sides is not "quite" a 
circle.  My intuitive sense is that the size of the fly's brain, its need to 
govern other functions, and a synaptic discharge rate of under 1000 per 
second together support no such "equivalent IPS."
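	The whole dispute turns on the conversion factor.  A back-of-the-envelope 
sketch shows how the same fly yields both numbers; every figure below is an 
assumed order of magnitude for illustration, not a measurement:

```python
# Back-of-the-envelope sketch of why the two estimates diverge.
# Every figure here is an assumed order of magnitude, not a measurement.

neurons = 1e5          # assumed neuron count for a fly brain
spike_rate_hz = 1e3    # "synaptic discharge rate of under 1000 per second"

# Counting every discharge in every neuron as one "instruction,"
# executed in parallel across the whole brain:
parallel_events_per_sec = neurons * spike_rate_hz
print(f"parallel view: {parallel_events_per_sec / 1e6:.0f} MIPS-equivalent")

# Treating the brain as one serial processor stepping once per discharge:
serial_steps_per_sec = spike_rate_hz
print(f"serial view: {serial_steps_per_sec:.0f} instructions per second")
```

Whether either count deserves the name "instructions" is, of course, exactly 
what is in question.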

	Prof. Moravec continues:  "In fact, our barely fly-brain robots do 
things that flies can't, such as maintain a pretty complete 3D representation 
of their surroundings, so they can tell exactly where they are in a room."

	Indeed, and this is the point of my earlier contention that 
"responsiveness is a more fruitful concept than representation."  A 
representation serves as a mediating structure between perception and action.  
But responsiveness needs no such mediating structure.  When I look for a book, 
I do so at the actual bookshelf, not on a representation of the bookshelf in 
my mind.  I respond to cues to make my search more efficient, including my 
knowledge of my habits and the organization (such as it is) of books on the 
shelf and my recollections of the book's appearance.  These cues are not 
organized into an overall structural representation, nor can they be so 
organized.  Even if my knowledge or recollection is faulty, I will find the 
book eventually.  This requires an exercise of consciousness and (to a 
small degree) freedom.
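	The contrast can be caricatured in a few lines of code.  The shelf, the 
titles, and both routines below are invented for the example; neither is a 
serious model of a brain or a robot:

```python
# A caricature of the two control styles.  The shelf, the titles, and
# both routines are invented for this example; neither is a serious
# model of a brain or a robot.

shelf = ["atlas", "cookbook", "dictionary", "novel", "thesaurus"]

def search_by_representation(target):
    """Build a complete internal model first, then consult the model."""
    model = {title: slot for slot, title in enumerate(shelf)}
    return model[target]

def search_by_responsiveness(target):
    """No internal map: scan the actual shelf, responding to each cue."""
    for slot, title in enumerate(shelf):
        if title == target:       # the cue is the book itself
            return slot
    return None

print(search_by_representation("novel"))   # slot 3
print(search_by_responsiveness("novel"))   # slot 3, and no map was built
```

Both find the book; only one pays for a mediating structure that must be kept 
complete and current.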


	Freedom is the crux of the problem.  We are free and I have detailed 
the phenomena of freedom at length in materials at the ftp site indicated 
below.  This medium does not permit a general discussion of those phenomena or 
of the conceptual foundations of my work.  "Freedom" conflicts with any model 
that universalizes computation and conflicts also with the model of reality 
expressed in the deterministic differential equations used in physics.  
Resolving these conflicts is a difficult problem, but one that I believe I 
have solved.

	In brief:  our "minds" construct images out of formal elements arising 
from neuronal processes.  Because of systemic limitations and errors 
incorporated in the elements, neither the images nor the models they support 
correspond precisely to reality.  Under some circumstances, such as those 
encountered in the laboratory, the images can be made to approximate reality 
as closely as desired, as rigors of cleanliness, constraints, and measurement 
are more perfectly imposed.  But in other areas of human activity, the 
approximations are less satisfactory or fail altogether. 

	Because we are free, a deterministic model does not accurately 
describe the functioning of the brain.  There is good research that indicates 
that at least some portions of the brain operate in the chaotic region.  
(See C. A. Skarda and W. J. Freeman, Behavioral and Brain Sciences 10 (1987): 
161, and W. J. Freeman, "The Physiology of Perception," Scientific American, 
February 1991.)  
There are other reasons that suggest that a chaotic neuronal-type model might 
be fruitful for investigating the actual processes of actual brains and for 
advancing the cause of AI.  This will have to be done (like most engineering) 
by trial-and-error rather than comprehensive a priori design.
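	For readers unfamiliar with the "chaotic region," the logistic map gives 
a minimal numeric picture.  It is not a model of any neuron; it is merely the 
standard textbook system that passes from order through periodicity into 
chaos as one parameter increases:

```python
# The logistic map x -> r*x*(1-x): the textbook minimal system that
# passes from order through periodicity into chaos as r increases.
# It is not a model of any neuron; it only pictures the regimes.

def iterate(r, x=0.2, warmup=500, keep=8):
    """Run past transients, then return the values the map settles into."""
    for _ in range(warmup):
        x = r * x * (1.0 - x)
    settled = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        settled.append(round(x, 4))
    return settled

print("r=2.8 (ordered): ", iterate(2.8))  # one fixed value, repeated
print("r=3.5 (periodic):", iterate(3.5))  # a cycle of four values
print("r=3.9 (chaotic): ", iterate(3.9))  # no repetition
```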

	Prof. Moravec states:  "I also (again) reject as silly 
Bob's arbitrary exclusion of unstable fluids and other high gain systems from 
the parts bin from which one can make machines. The edge of chaos, between 
boiling randomness and frozen immobility is a useful concept, but Bob is 
trying to make it into some kind of religious principle.  Most high-gain 
systems, for instance, a balancing broomstick, can be interpreted as 
operating there.  An especially clear example was in the old days of radio, 
when "super-regenerative" receivers recycled the output signal from an
amplifying tube to get more gain.  Turn up the "regeneration" too
high, and you got howling signal-blanketing feedback.  Too low, and you could 
hardly hear anything.  Just right, at the edge of chaos, you heard your 
station nice and loud.  By Bob's definition a super-regenerative receiver 
(and almost any other machine that uses amplification, which, deep down, 
usually depends on some kind of chaotic process that allows small inputs to 
become large effects) would be disqualified as a machine.  No thanks."
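	The regeneration dial can be made concrete with a toy feedback loop; the 
gains and the cutoff below are invented for illustration, and a real 
super-regenerative receiver is far subtler:

```python
# Toy version of the "regeneration" dial: output fed back with gain g.
# Gains and cutoff are illustrative; a real receiver is far subtler.

def settle(gain, signal=1.0, steps=2000):
    """Steady output of x <- gain*x + signal after many iterations."""
    x = 0.0
    for _ in range(steps):
        x = gain * x + signal
        if x > 1e6:             # runaway feedback: the "howling" regime
            return float("inf")
    return x

print(settle(0.5))   # weak feedback: output stays small
print(settle(0.99))  # near the edge: large but stable amplification
print(settle(1.01))  # past the edge: runaway ("howling")
```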

	A machine incorporating "high gain" or "chaos" is still a machine.  
Indisputable.  But it is a very different proposition to assert that:  there 
exist physical systems operating on the edge of chaos that are <not> 
machines.  I suggest you have one in your head.  Can you seriously contend 
that the freedom that gives you such enjoyment in your work is illusory?

	Nor is this a "religious principle."  To say that there are things we 
do not understand is different from asserting the existence of a super-human 
consciousness that does understand things (which is what I would call "a 
religious principle").  

	I repeat:  in its goal of creating general-purpose machines 
that can competently perform those functions within the reach of even 
insects, AI has failed.  This failure stands notwithstanding the success of 
computers in such limited-domain tasks as keeping records, crunching numbers, 
processing words and facilitating telecommunications.

	Given enough computer power and time, and his own determination, 
Prof. Moravec may well succeed in building a mobile robot control program 
that can perform in a relatively clean indoor environment.  I am more 
skeptical about its ability to navigate on the street where speedy response 
to happenstance events is needed.  And I see little likelihood that a robot 
will ever be able to navigate in a forested wilderness.  It is there that 
a fly functions superbly.  The reasons for its superiority deserve 
consideration.


-- 

*   *    *    *    *    *    *    *    *    *    *    *    *    *    *    *   * 
    Bob Kovsky          |  A Natural Science of Freedom 
    kovsky@netcom.com   |  Materials available by anonymous ftp
                        |  At ftp.netcom.com/pub/freeedom
*   *    *    *    *    *    *    *    *    *    *    *    *    *    *    *   * 
