From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!apple!netcomsv!nagle Tue Jan 21 09:26:33 EST 1992
Article 2820 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!apple!netcomsv!nagle
From: nagle@netcom.COM (John Nagle)
Newsgroups: comp.ai.philosophy
Subject: Re: Building Artificial Animals (was Re: Cargo Cult Science)
Message-ID: <1992Jan17.064740.17375nagle@netcom.COM>
Date: 17 Jan 92 06:47:40 GMT
References: <92Jan15.175909est.14446@neat.cs.toronto.edu> 	<1992Jan16.061242.21335@news.media.mit.edu> 	<1992Jan16.190930.14079nagle@netcom.COM> <YAMAUCHI.92Jan16220910@heron.cs.rochester.edu>
Organization: Netcom - Online Communication Services  (408 241-9760 guest)
Lines: 83

yamauchi@cs.rochester.edu (Brian Yamauchi) writes:

>In article <1992Jan16.190930.14079nagle@netcom.COM> nagle@netcom.COM (John Nagle) writes:
>>	For a start, how about a discussion about the behavioral
>>capabilities that mice have that today's robots lack?

>	But, in my opinion, the most interesting of the mouse's
>capabilities is the behavioral/motivational control structure that is
>sufficiently robust and adaptive to allow it to survive and prosper in
>an open field or a steel and concrete skyscraper.  When we can build a
>robot that can also successfully forage for food (i.e. electrical
>outlets, batteries, etc.) in a wide variety of environments, then I
>think we will have made significant progress towards basic
>animal-level intelligence.

      As Winograd once said, the important thing is deciding what to do
in the next ten seconds.  The amount of decision-making required to get
through the next few seconds without falling down, running into anything,
injuring oneself, and making some progress on the task at hand tends
to be grossly underestimated.  And fundamentally, all higher-level
activity ends up being adjustments to the goals of the systems that
handle what to do in the next few seconds.  (This is what Brooks
means when he talks about "subsumption architecture"; I sometimes
call it the "back-seat driving metaphor".)  A solid understanding of
how to build systems that handle the next few seconds is useful in
its own right for building robots.  But it also may well provide the
building blocks for constructing systems that solve longer-range problems.

      A major advantage of working on the problems of the next ten
seconds is that it tends to be obvious how well a given approach
works.  This leads to continued progress.
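The layered arbitration idea above can be sketched in a few lines.  This is only an illustrative sketch of subsumption-style priority arbitration; the behavior names, sensor keys, and thresholds are assumptions, not Brooks's actual implementation:

```python
# Subsumption-style arbitration sketch: higher layers, when they have an
# opinion about the next few seconds, subsume (override) lower layers.

def avoid_obstacle(sensors):
    """Highest priority: veer away if something is too close (hypothetical)."""
    if sensors["range"] < 0.5:
        return {"turn": 1.0, "speed": 0.1}
    return None  # no opinion; defer to lower layers

def wander(sensors):
    """Lowest priority: default forward motion."""
    return {"turn": 0.0, "speed": 0.5}

# Ordered highest priority first.
LAYERS = [avoid_obstacle, wander]

def next_command(sensors):
    """Each tick, the first layer with an opinion wins."""
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command
```

Higher-level activity would then adjust the goals or parameters of these layers (the "back-seat driver") rather than issuing motor commands directly.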

>	By the way, the JHU Beast research of decades ago seemed like
>a good step in this direction, but I have been unable to find any
>technical papers describing this work.  

        The machine itself is on exhibit at the Computer Museum in
Boston.  I'd suggest starting there and finding out if they have
technical data on the Beast.  The Hopkins Beast was a major advance
in the AI field, and one that is not well recorded in the literature.
It would be a worthwhile project for someone in the Boston area to
work with the Computer Museum and make its technology more widely known.
I'd like to do it, but I'm 3,000 miles away.

>Does anyone know what
>sensing was used and how robust this system was?  Was this something
>that would only work in the lab or would it work in any building with
>level floors and accessible outlets?

        It searched out outlets with a template-matching vision system,
and looked for standard outlet boxes installed at a specific height.
I think it used ultrasonics to center itself in hallways.  It was 
intended to work in a specific building with hallways of known properties,
but it had to share those halls with other traffic.
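The hallway-centering behavior reduces to a simple feedback loop on two side-facing range readings.  A minimal sketch, assuming a proportional controller and made-up gain (the Beast's actual control law is not documented here):

```python
# Center a robot in a hallway from left/right ultrasonic range readings.

K_P = 0.8  # proportional gain (assumed, would be tuned on the robot)

def centering_turn(left_range, right_range):
    """Return a turn rate steering toward the hallway centerline.

    When the robot drifts toward the left wall (left_range < right_range),
    the error is positive and the output turns it right, away from the wall.
    """
    error = right_range - left_range  # zero when centered
    return K_P * error
```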

>	Another issue is whether we need to build real robots, or
>whether simulation alone is enough.  In my opinion, a sufficiently
>complex and realistic simulation can produce interesting results, but
>the trick is deciding which simplifications you can make, and which
>you can't -- without completely changing the problem.

         For an insect-level intelligence, a reasonably simple 2 1/2D
simulation is enough to, say, demonstrate a control system for 
six-legged walking.  A much more elaborate simulation will be needed
to provide a good environment for a mouse.  Mice can do two-handed
coordinated grasps using multifingered hands.  This will take a 
simulator well beyond anything existing today.  But there's no
conceptual obstacle to developing such a simulator, and some of the
advanced animation people are moving in the right direction.  The
simulator will take a big engine to make it go, but big engines exist.
Only in the last few years has there been enough compute power 
available to even consider such a simulator, but now it's within
reach.  A good place to start in reading up on the state of the
art is "Making Them Move", by Zeltzer.  (The video that goes with
it is worth ordering.  MIT Press.)
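The six-legged walking mentioned above is usually driven by an alternating tripod gait, which a 2 1/2D simulation can demonstrate with almost no machinery.  A sketch, with leg naming and phase layout as illustrative assumptions:

```python
# Alternating tripod gait: at any step, three legs forming a stable
# tripod are in stance while the mirror set swings forward.

# L/R = left/right side; 1/2/3 = front/middle/rear legs.
TRIPOD_A = ["L1", "R2", "L3"]
TRIPOD_B = ["R1", "L2", "R3"]

def stance_legs(step):
    """Legs on the ground at a given step; the other tripod swings."""
    return TRIPOD_A if step % 2 == 0 else TRIPOD_B
```

Because three legs are always down, the body stays statically stable, which is why even a simple simulation suffices to validate this level of control.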

>	Finally, could you elaborate on the NSF program?  
  
      Dr. Ken Laws, who used to head NSF's robotics effort,
suggested to me that developing a good simulator along 
the lines indicated above would fit in with current NSF funding plans
in the robotics area.  So I suggest that those in academia working in
this direction pursue it further.

					John Nagle


