Newsgroups: comp.robotics
Path: brunix!uunet!munnari.oz.au!bruce.cs.monash.edu.au!monu6!awfuchs
From: awfuchs@monu6.cc.monash.edu.au (A.W. Fuchs)
Subject: Re: MIT Insect Robots
Message-ID: <1992Jun8.095241.9328@monu6.cc.monash.edu.au>
Organization: Monash University, Melb., Australia.
References: <1992Jun3.213155.24343@seer.gentoo.com> <1992Jun4.172854.19251@elroy.jpl.nasa.gov> <48844@dime.cs.umass.edu> <1992Jun8.001739.21290@seer.gentoo.com>
Date: Mon, 8 Jun 1992 09:52:41 GMT
Lines: 90

In <1992Jun8.001739.21290@seer.gentoo.com> tomk@seer.gentoo.com (Tom Kunich) writes:

>In article <48844@dime.cs.umass.edu> connolly@rabbit.cs.umass.edu (Christopher Ian Connolly) writes:

[stuff deleted]

>>I want to qualify this by saying that I think the "robotic insect"
>>concept will probably be a fruitful venture, but it does have its
>>obvious limits.  After all, we will eventually want to progress to
>>robotic mammals, won't we?
>>
>Hah! Good view of the problem. And I think that sort of sums the
>whole thing up in a nutshell. People were striving for 'robotic
>mammals' (people if you will). This turned out to be virtually
>impossible so someone comes up with the 'robotic insect' idea.

Not necessarily impossible; it's just that it makes more sense
to start on something simpler. Goddard, for example, was not
trying to build the Space Shuttle.

But this is somewhat beside the point. The entire approach is
different to the rationalistic AI approaches, which try to model
the operation of the human mind, making (large!) assumptions
about what that is -- often based on experimental data about how
humans appear to solve problems or remember things.

The incremental approach of building simple creatures before
attempting more complex ones seems eminently sound to me, and
involves a different set of principles as well. It is based
largely on the notion that for a life form (or whatever) to
operate successfully in the world, it must grow from a simpler
one which also operated successfully in the world; the "new
and improved" model can then add functionality or improve
behaviour, relative to "last year's model", without needing
to be recreated.
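To make the idea concrete, here is a minimal sketch (my own
illustration, not code from Brooks's work) of such an incremental
design: a working "creature" is a stack of simple behaviour layers,
and a new layer is added on top of an already-working one rather
than rebuilding the whole thing. The layer names and sensor keys
are invented for the example.

```python
class AvoidLayer:
    """Basic competence: back away from anything too close.

    Highest priority -- the survival reflex of "last year's model".
    """
    def act(self, sensors):
        if sensors.get("obstacle_distance", float("inf")) < 1.0:
            return "reverse"
        return None  # no opinion; defer to the layer beneath


class WanderLayer:
    """Added later, on top of avoidance: move forward when safe."""
    def act(self, sensors):
        return "forward"


class Creature:
    """Layers listed highest-priority first; a layer returning None
    defers downward, so the older, simpler behaviour keeps working
    unchanged when a new layer is bolted on top."""
    def __init__(self, layers):
        self.layers = layers

    def act(self, sensors):
        for layer in self.layers:
            command = layer.act(sensors)
            if command is not None:
                return command
        return "stop"  # fallback if no layer has an opinion


creature = Creature([AvoidLayer(), WanderLayer()])
```

The point of the sketch is that adding `WanderLayer` required no
change to `AvoidLayer`: the avoidance behaviour that already
"operated successfully in the world" is preserved intact, and the
new functionality is layered around it.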

Brooks's statement that nature took 3 billion years to develop
from unicellular life to vertebrates, then about 450 million
years to develop primates, then 120 million years for man,
then 2.5 million years to right now, is worth considering
seriously. The idea is that the capability to live and react
in the world is the hard part, and given such capabilities,
more "intelligent" life begins to develop.

There are valid doubts about the feasibility of these
artifacts being able to physically evolve, thereby maintaining
their (presumably) successful coupling with the environment.
It may be that human -- or other -- assistants can aid the
evolution of these "creatures" in a way which is as workable
as Nature's, but this is not at all certain.

>Now, granted, there are some attractive advantages in this method.
>But basically it is a fall-back position from the real 'machine
>intelligence'that was the original target.

I don't believe this is the case. Is "intelligence" synonymous
with "humanoid"? (My concise Oxford includes such synonyms as
"understanding" and "intellect", and allows it to be applied
to animals as well as people.)

>I don't believe that any of this is impossible per se' but the 
>practicality is certainly in question. And not just by me. If there
>was a valid reason to replace a man with an intelligent machine and
>it was economically attractive, there would be smart machines
>everywhere by now.

Why "by now"? Does this also mean that if it were a desirable
and economically attractive thing for cancer to be cured, then
a cure would already have been found?

To summarize, I find the MIT "insect" approach very refreshing
and possibly, in the long run, the *only* way to approach the
problem of artificial intelligence/life in the real world (as
opposed to in an information space, which may be just as valid,
but much less Frankenstein-satisfying, n'est-ce pas?)

Reference:

Intelligence without representation, Rodney A. Brooks
Artificial Intelligence 47 (Jan 1991) pp. 139-159

This was actually received in 1987; was it too heterodox to
be published then? It's a wonderful read and I recommend it.


Andrew W. Fuchs
Faculty of Computing & Information Technology
Monash University, Melbourne, Australia

--- awfuchs@monu6.cc.monash.edu.au ---
