Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!kovsky
From: kovsky@netcom.com (Bob Kovsky)
Subject: Re: Robot autonomy (was Is the mind/brain deterministic?)
Message-ID: <kovskyCzKpKI.HM3@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <kovskyCzF8D4.Bxv@netcom.com> <jqbCzHKJC.B6L@netcom.com> <kovskyCzIvqt.CHn@netcom.com> <HPM.94Nov19153115@cart.frc.ri.cmu.edu>
Date: Sun, 20 Nov 1994 15:55:30 GMT
Lines: 145

This will be my last post for a while.  Professional duties and personal 
pursuits will take most of my available time.  I will read the conference, 
but do not have the time to compose postings.

Moreover, we seem to have reached the stage of:  "Yes, it is" and "No, it 
isn't."

Please consider this also as a response to Mr. Balter's previous 
posting.  

In a previous posting, I wrote:

>Bob Kovsky:
>> Nonetheless, I stand by my earlier remark that AI (in the sense that
>> it asserts "cognition is computation" etc.) has failed to produce
>> results that evidence the truth of the assertion.  Thus my reference
>> to the few "expert systems" that have proved practical only in limited
>> domains.

Prof. Moravec responds:
>No evidence contradicts "cognition as computation" and much supports
>it.  That computation is still short of human cognition is easily
>justified by noting that computer power is still a million times short
>of human brain power.  If it's still short in 30 or 40 years you may
>have a case, but by then you'll probably be arguing it with an AI.
>

	I have presented my views in full in the materials at the ftp site
below.  In brief, the thesis that "cognition is computation" assumes that
experience is mechanistic and implicitly denies any exercise of freedom. 
But the phenomena of freedom exist, such as our ability to write a verbal
description of a visual image in pursuit of a purpose.  I know of no
mechanical means of accomplishing this task in a general way (i.e. for any
given visual image and any appropriate purpose). 

	Since my last posting, I have read Prof. Moravec's 1988 book 
<Mind Children>:

	Page 2:  "We are very near to the time when virtually no essential
human function, physical or mental, will lack an artificial counterpart." 

	Page 6:  "I believe that robots with human intelligence 
will be common within fifty years."  

	Extracts from pp. 48-49:  "Learning could be greatly enhanced by
the addition of another major module, a general <world simulator>.  ... 
Imagination via simulator is useful only if the simulator makes reasonably
accurate predictions about the real world.  ...  Advanced robots may find
themselves working with other robots and people.  Such an interaction
could be made more effective if the simulators on these machines predict
the behavior of others to some extent.  Part of the prediction might
involve roughly modeling the other's mental state, so that its reactions
to alternative acts could be anticipated."


I wrote:
>> I also stand by my reference to the capacity of a fly to navigate
>> just about as well as robots guided by 100 MIPS computers.  And I
>> stand by my opinion that those who ignore the capacity of biological
>> neuronal systems to function quickly and competently on the basis of
>> small and slow cognitive units are missing something important.
>

Prof. Moravec responds:

>The comparison of the fly with the robot is overwhelming evidence that
>cognition is computation, scaled at 1 million fly neurons of
>cognition being worth 100 million operations/second of computation.
>
>The important point that you ignore is that it takes only a small step
>up in abstraction of description, for instance from 100 to 1000 slow
>but awesomely numerous and spectacularly interconnected neurons to the
>level of local image operations to get an equivalence with
>spectacularly fast but very narrow-path computations
>
>     100 neuron structure  -> \
>                               >  10 edge detections/second
>10,000 computer ops/second -> /        at a single image spot
>
>The cost of computational emulation is linear in the amount of stuff
>being emulated, and the coefficient drops as the level of emulation
>abstraction rises.  Computation can emulate neural functioning well
>enough, using about 100 computations/second per neuron.

	Without checking the numbers, this is essentially the same 
approach Prof. Moravec used in <Mind Children>.  Prof. Moravec's 
approach seems to be the following:  a neuron produces electrical 
"spikes" at an average rate of roughly 100 per second.  
Therefore, since cognition is computation, each such spike corresponds to 
a computer operation.  
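For concreteness, the arithmetic this approach implies can be sketched in a few lines.  This is a check of the scaling as summarized, not an endorsement of it; the human neuron count used below is a common round figure, not one taken from the posting:

```python
# Back-of-the-envelope check of the neuron-to-computation scaling
# summarized above: 1 neuron ~ 100 computer operations/second.

OPS_PER_NEURON_PER_SEC = 100        # ~100 spikes/second, read as ops/second
FLY_NEURONS = 1_000_000             # ~1 million fly neurons (his figure)

fly_ops_per_sec = FLY_NEURONS * OPS_PER_NEURON_PER_SEC
print(f"Fly equivalent: {fly_ops_per_sec / 1e6:.0f} MIPS")

# The same rate applied to a human brain (~1e11 neurons, a common round
# figure) reproduces the 10-teraops estimate from <Mind Children>:
HUMAN_NEURONS = 100_000_000_000
human_ops_per_sec = HUMAN_NEURONS * OPS_PER_NEURON_PER_SEC
print(f"Human equivalent: {human_ops_per_sec / 1e12:.0f} teraops")
```

On these assumptions the fly comes out at the 100 MIPS quoted earlier, and the human brain at the book's 10 teraops.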

	This approach displays an appalling ignorance about neuronal
activity.  In the Winter 1988 <Daedalus> issue devoted to Artificial
Intelligence, George N. Reeke, Jr. and Nobel Laureate and neurobiologist
Gerald M. Edelman criticize the entire AI enterprise from a neurological
point of view.  Page 170:  "In particular, it is quite clear that nervous
systems do not work in anything like the way that has been assumed in the
standard AI paradigm."  Incidentally, Prof. Edelman (both in the <Daedalus>
article and in <Neural Darwinism>) provides support for my assertion that
the images generated by our process of experience are only approximations
to reality, or, in Prof. Edelman's words, "nonveridical."  (It should be
pointed out that Prof. Edelman's approach to general issues is very
different from mine.)

	On page 68 of <Mind Children>, Prof. Moravec applies his
neurological approach and extrapolates from the historical rise in
computation power to conclude:  "If this rate of improvement were to
continue into the next century, the 10 teraops required for a humanlike
computer would be available in a $10 million supercomputer before 2010 and
in a $1,000 personal computer by 2030." 

	<Mind Children> bears a date of 1988.  We have traversed more 
than 1/4 of the distance from 1988 to 2010.  Perhaps Prof. Moravec will 
state the advances that have been made in the last six years toward 
reaching the predicted goal for 2010.  Similar predictions were made by 
Minsky et al. beginning in the 1960s and continuing ceaselessly 
thereafter.  

	In fact, every slight advance demands ever-increasing investments 
of computational power and ever-increasing program size.  The modules of 
programs are tripping over one another and Scientific American ran an 
article a few months ago about the "software crisis."  And, despite 
successes in very limited domains, AI has failed to extend its reach into 
general problems like "verbal description of a visual image" much beyond 
that achieved in Winograd's SHRDLU (c. 1970).  If I am poorly informed, 
please inform me.  (Most of my reading is in my professional practice of law, 
inform me.  (Most of my reading is in my professional practice of law, 
and I spend my leisure in pursuing freedom, both as an intellectual 
discipline and in other activities, so readings in artificial 
intelligence are not a first priority.)

	When I was a grad student in physics, it was sometimes said that 
the proper way to begin a grant proposal was to promise that the project 
would lead to a room temperature superconductor.  Perhaps a similar 
attitude can be found in some AI researchers.  Suddenly, of course, 
superconductors were discovered that, if not room temperature, were orders 
of magnitude ahead of previous developments.  Perhaps something similar 
will happen to AI.  On the other hand, it may be that the assumptions of 
AI practitioners need to be re-examined and that the progress we all 
desire will be the consequence.

-- 

*   *    *    *    *    *    *    *    *    *    *    *    *    *    *    *   * 
    Bob Kovsky          |  A Natural Science of Freedom 
    kovsky@netcom.com   |  Materials available by anonymous ftp
                        |  At ftp.netcom.com/pub/freeedom
*   *    *    *    *    *    *    *    *    *    *    *    *    *    *    *   * 
