Newsgroups: comp.ai,comp.robotics
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!uhog.mit.edu!news.media.mit.edu!mind.mit.edu!user
From: push@mit.edu (Pushpinder Singh)
Subject: Re: Minsky's new article
Message-ID: <push-1611941508400001@mind.mit.edu>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT
References: <39vo5u$3fv@trog.dra.hmg.gb>,<3a0rt0$6dl@cantaloupe.srv.cs.cmu.edu> <3ad3uo$2ji@trog.dra.hmg.gb>
Date: Wed, 16 Nov 1994 20:08:40 GMT
Lines: 111
Xref: glinda.oz.cs.cmu.edu comp.ai:25248 comp.robotics:15395

In article <3ad3uo$2ji@trog.dra.hmg.gb>, wagray@taz.dra.hmg.gb wrote:

> In article <3a0rt0$6dl@cantaloupe.srv.cs.cmu.edu>, hpm@cs.cmu.edu (Hans
> Moravec) writes:
> >
> >wagray@taz.dra.hmg.gb (Walter Gray):
> >>
> >>The AI community has shown no signs of being able to produce a machine 
> >>with the intelligence of an ant. (Not even a stupid ant). This is why the 
> >>brain replacement idea sounds silly.
> >
> >Extrapolating from a comparison of retinal and computer vision
> >operations, "Mind Children" gets a conversion factor between nervous
> >system neuron counts and computer MIPS: roughly, each neuron is worth
> >100 instructions/second.  By that measure, the kinds of computers on
> >your desktop, or in the fanciest self-contained robots, have just in
> >the past few years become a computational match for insect nervous
> >systems.
> 
> Let me get this straight. Are you seriously suggesting that an ordinary
> workstation could control an ant-robot? Let's allow it to be as big as
> you want for the sake of practicality. Could you control all the 
> joint articulations, reflex arcs, homeostasis, social behaviour, senses
> etc etc in real time to emulate all (or most) behaviour of the ant?

Yes, that's exactly what he is suggesting.  My guess is that the only
reason it hasn't been attempted is that we don't understand ants well
enough.  It does seem, though, that much of their apparent complexity
comes from the complexity of their interactions.  But we know that
interactions among large numbers of simple components (even very simple
sorts of interactions) can produce complex behavior at higher levels of
the system, so the complex social behavior of ants doesn't imply much
about the complexity of an individual ant.
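
The point that trivial rules can yield complex global behavior has a
classic toy illustration, Langton's ant: one agent on a grid, two rules,
yet after roughly ten thousand steps it spontaneously builds an unbounded
"highway".  A quick sketch (mine, not from this thread), in Python:

```python
# Langton's ant: flip the cell you stand on, turn right on white,
# left on black, step forward.  Two rules, complex long-run behavior.
def langtons_ant(steps):
    black = set()                 # cells currently colored black
    x, y, dx, dy = 0, 0, 0, 1    # position and heading (y axis up)
    for _ in range(steps):
        if (x, y) in black:
            black.discard((x, y))
            dx, dy = -dy, dx      # turn left on a black cell
        else:
            black.add((x, y))
            dx, dy = dy, -dx      # turn right on a white cell
        x, y = x + dx, y + dy
    return black

trail = langtons_ant(11000)
print(len(trail))                 # the trail keeps growing as steps increase
```

Nothing in the ant's rule mentions highways; that structure is purely a
property of the interaction history, which is the sense in which ant-colony
behavior need not tell us much about a single ant.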

Why do you think we would need something more than a modern workstation
to control an ant?  What is the source of your pessimism?

> And if I were to say wasp or cockroach, would you not suddenly need a
> combinatorially large increase in CPU power? I am happy to accept that
> you can produce some ant-like behaviour, however you like to define that.
> However, I cannot (yet) accept any simple proportionality between
> CPU power and neuron count because they don't seem to be doing the 
> same job.

CPUs and neurons are indeed different.  But I believe Hans is talking here
about a kind of abstract computational equivalence.  The "algorithm"
that brains use to solve some problem, say the shape-from-shading problem,
might be totally different from the ones we use today in machine vision
systems.  Yet both solve the same problem, so you can derive a rough
conversion between CPU power and neuron count.  Hans extrapolated his
numbers from what retinas do compared with the complexity of programs that
do similar things, and I believe he has admitted that they could be
orders of magnitude off.  But that only seems to push 'human-equivalent'
hardware back another 20 years or so.  If his assumptions are correct,
we'll certainly have it before the end of the next century, and probably
much sooner.
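
The arithmetic is easy to check.  A sketch using Hans's 100
instructions/second per neuron and round order-of-magnitude neuron counts
(~10^5 for an insect, ~10^11 for a human; those counts are my assumptions
here, not figures quoted from "Mind Children"):

```python
import math

NEURON_IPS = 100           # Moravec's rough figure: instr/sec per neuron
INSECT_NEURONS = 1e5       # assumed order-of-magnitude neuron counts
HUMAN_NEURONS = 1e11
DOUBLING_YEARS = 1.5       # capacity/dollar doubling time cited above

insect_mips = INSECT_NEURONS * NEURON_IPS / 1e6   # -> 10 MIPS
human_mips = HUMAN_NEURONS * NEURON_IPS / 1e6     # -> 1e7 MIPS

# Years for hardware to climb from insect- to human-equivalent
# at one doubling per 1.5 years:
years = DOUBLING_YEARS * math.log2(human_mips / insect_mips)
print(insect_mips, human_mips, round(years, 1))   # 10.0 10000000.0 29.9

# If the 100 instr/sec figure is wrong by three orders of magnitude,
# the date only slips by about fifteen years:
slip = DOUBLING_YEARS * math.log2(1000)
print(round(slip, 1))                             # 14.9
```

Ten MIPS is roughly a mid-90s workstation, which is the sense in which we
are "just creeping past insect power", and 1994 plus ~30 years lands near
Hans's 2025 figure.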

> >And, by no coincidence in my opinion, their performance is starting to
> >be notable.  This year chess computers have beaten Garry Kasparov, OCR
> >programs are often better than human transcribers, speech recognition
> >is starting to come into practical use, Mathematica is on my laptop,
> >and so on.  These programs don't do what ants do, but conversely
> >no ant can do any of the above.
> >
> >And, in the more ant-like realm of behavior, last year CMU's Navlab II
> >drove itself 150 km from Pittsburgh to the Ohio border at full legal
> >speeds, in traffic, driving on main roads, highways of various widths,
> >taking on and off ramps.  It was controlled overall by a program that
> >used a road map and a GPS system, but it stayed on the roads using a
> >bank of neural nets, each for a different kind of road, each trained
> 
> [del]
> 
> Well done CMU! (We have virtually given up on full scale demonstrators
> for economic reasons.) However, all these examples show what can be
> achieved in artificial tasks in constrained environments. Given the
> vast increases in CPU power since the 1960s, I don't think we have come
> very far from 'blocks world'.

It seems to me that there are an enormous number of problems in cognition
that are still best studied in microworlds or similar constrained
environments.  So not having come very far from 'blocks world' isn't
necessarily a bad thing.

> >"Mind Children" also examined this century's computer progress, and
> >noted that amount of computation per unit cost has doubled every two
> 
> [del]
> 
> >capacity per dollar every 1.5 years.  At that rate, human-brain
> >equivalent computational power will be here by 2025!
> >
> >That doesn't change the fact that, today, we're just creeping past
> >insect power.  And the performance that is being achieved belies your
> >pessimism.
> 
> So you say that we have the CPU power ("we have the technology!"). Umm, 
> doesn't that remove many of the excuses for our lack of progress?
> Clearly something is missing from the process. Perhaps some sort of
> new insight or breakthrough. (But don't ask me what.)
> 
> I think I'll hang on to my pessimism for a *little* while longer. 

There are a good many people in the field who do believe it is essentially
a software problem at this point, or will be soon.  New insights and
breakthroughs are needed, but my guess is that many of them will be in
figuring out how to get past barriers of complexity that have prevented
people from drawing together _existing_ solutions in order to build
intelligences.  I don't think any simple, unified architecture will do.

-push
