From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!elroy.jpl.nasa.gov!ames!apple!netcomsv!nagle Thu Jan 16 17:22:29 EST 1992
Article 2793 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!elroy.jpl.nasa.gov!ames!apple!netcomsv!nagle
From: nagle@netcom.COM (John Nagle)
Newsgroups: comp.ai.philosophy
Subject: Re: Cargo Cult Science
Message-ID: <1992Jan16.190930.14079nagle@netcom.COM>
Date: 16 Jan 92 19:09:30 GMT
References: <92Jan15.175909est.14446@neat.cs.toronto.edu> <1992Jan16.061242.21335@news.media.mit.edu>
Organization: Netcom - Online Communication Services  (408 241-9760 guest)
Lines: 46


     The idea that a theory should be "refutable" refers to refutability by
experiment.  To be useful, a theory must make predictions.  If the predicted
phenomena don't occur, the theory is wrong.  That's what "refutability"
is all about.

     This is well-understood in physics, where it is expected that a
major theory will make some unexpected prediction which can then be 
confirmed or refuted by experiment.  Special relativity, quantum
mechanics, and the Standard Model all made unexpected predictions,
and the predictions were verified by experiment, giving a strong boost
to the theory.  Yet one solid experimental result could refute any of
the great theories of physics.  When a few decades go by, and attempts
to construct an experiment to refute a theory consistently fail, the
theory is generally accepted.  That's how refutability works.

      The physicists are presently facing a philosophical problem with 
superstring theory.  Superstring theory describes events at so small
a scale and so high an energy level that no one can conceive of any
way to test its predictions experimentally.  Some physicists question
whether superstring theory is even physics for that reason.

      The problem with AI is not that refutability is impossible.
It's that the state of the art is lousy.

      Thought for today: If we can build a low-end mammal robot, say
a mouse-level AI, with the coordination, dexterity, and vision of a
mouse, we will probably be most of the way to a human-level AI, based
on how long evolution took, how similar the anatomy of the human and
mouse cortices is, and how little the DNA of mouse and human differs.
Moravec argues that an ant-level creature should take about 10 MIPS to
implement, and indeed, ant-level creatures from Brooks, Beer, and the
SimAnt people all require roughly that order of compute power.  Moravec's
numbers scale up with brain mass; a mouse should take around 10,000 MIPS,
and a human around 10,000,000 MIPS.  This latter number is still out
of reach (although Danny Hillis would probably be willing to quote
a price), but the 10,000 MIPS figure is within the range of existing
supercomputers.  A "slow" mouse (running at 1/10 real time in a simulated
VR-like world) should take only 1000 MIPS.  Computational power like
that is available around places like MIT.
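      The arithmetic behind those figures can be sketched in a few lines.
(This is an illustration of the scaling argument only: the 10 MIPS ant
baseline and linear scaling with brain mass are Moravec's assumptions as
quoted above, and the mass ratios below are chosen to reproduce the
post's round numbers, not measured values.)

```python
# Back-of-envelope sketch of the Moravec-style MIPS estimates above.
# Baseline: an ant-level creature at ~10 MIPS; requirements scale
# linearly with brain mass (assumption, per the post).

ANT_MIPS = 10  # Moravec's estimate for an ant-level creature

# Rough brain-mass ratios relative to an ant, order-of-magnitude
# values picked to match the figures in the post (illustrative only).
MASS_RATIO = {
    "ant": 1,
    "mouse": 1_000,
    "human": 1_000_000,
}

def mips_needed(creature, slowdown=1):
    """MIPS needed for a creature-level AI.  Running the simulation
    slower than real time cuts the requirement proportionally:
    a 10x slowdown needs 1/10 the compute."""
    return ANT_MIPS * MASS_RATIO[creature] / slowdown

print(mips_needed("mouse"))               # mouse-level AI
print(mips_needed("human"))               # human-level AI
print(mips_needed("mouse", slowdown=10))  # "slow mouse" at 1/10 real time
```

Under these assumptions the script reproduces the post's numbers: 10,000
MIPS for a mouse, 10,000,000 for a human, and 1,000 for a mouse run at
one-tenth real time.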

      It's time to build a "slow mouse".  We have the compute power.
We have a reasonable fraction of the techniques necessary.  NSF has
some money available this year.  So let's get started.

					John Nagle


