Newsgroups: comp.ai.philosophy,comp.ai,comp.robotics
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!gyro
From: gyro@netcom.com (Scott L. Burson)
Subject: Minsky's new article
Message-ID: <gyroCysG7u.8Hs@netcom.com>
Sender: Gyro@zeta-soft.com
Organization: ZETA-SOFT, Ltd.
References: <39d8g2$dlm@coli-gate.coli.uni-sb.de> <39eaqk$nn9@cantaloupe.srv.cs.cmu.edu>
Date: Sat, 5 Nov 1994 09:40:41 GMT
Lines: 83
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:21689 comp.ai:24980 comp.robotics:15052

In article <39eaqk$nn9@cantaloupe.srv.cs.cmu.edu> Hans Moravec (hpm@cs.cmu.edu) writes:
>
>Sean Matthews vents his bile:

First off, I think that's not a fair characterization of his critique, which
raised some entirely valid concerns.

>From chapter 5, summarizing responses to Hermann Oberth's 1923 book
>"The Rocket into Interplanetary Space" (originally in German):
>... well reputed astronomers ... simply killed the idea ... by stating
>that these things are very nice and interesting but lacking in
>foundation since everybody knows there can be no recoil in
>interplanetary space ...  Another critic ... added the idea of manned
>rockets was preposterous for all time to come because people, as soon
>as they left the atmosphere of earth (impossible anyway) would be
>subjected to the gravity of the sun which is powerful enough to squash
>their bodies. ... an aviation expert ... could not understand why the
>exhaust gases should follow the rocket if the latter, after some time,
>surpassed its own exhaust velocity. ... a physicist ... said the
>rocket, of necessity, could not surpass the velocity of its exhaust
>gases because its efficiency would surpass 100% ... obviously
>impossible. ... a mathematician and physicist published that even the
>most powerful explosive known could not lift its own weight to a
>greater height than 400 km ...

But Sean's claim was that Newton would have seen that Goddard's project was
sound.  Newton certainly had the intellectual tools (and, I suspect, the
insight) to refute all of these arguments, some of which are quite ridiculous
("the gravity of the sun would squash their bodies"?  Newton knew that gravity
obeys an inverse-square law).  He may never have had occasion to think about
rocket propulsion, but as we all know, his laws of motion are entirely
adequate to the task.
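(Indeed, the "efficiency would surpass 100%" argument falls apart with nothing
more than conservation of momentum.  As a quick sketch -- using the standard
Tsiolkovsky form of the rocket equation, which follows directly from Newton's
laws, with made-up illustrative masses -- the velocity gained is not capped at
the exhaust velocity at all:

```python
import math

def delta_v(exhaust_velocity, initial_mass, final_mass):
    """Ideal rocket equation: dv = ve * ln(m0 / m1).

    Derived purely from momentum conservation (Newton's laws); nothing
    in it limits the final speed to the exhaust velocity."""
    return exhaust_velocity * math.log(initial_mass / final_mass)

ve = 2500.0  # exhaust velocity in m/s (illustrative value)
# With a mass ratio of 10 (greater than e), the rocket ends up moving
# faster than its own exhaust: dv = ve * ln(10), roughly 2.3 * ve.
dv = delta_v(ve, initial_mass=10000.0, final_mass=1000.0)
print(dv > ve)  # → True
```

Once the rocket outruns its exhaust, the exhaust simply trails along behind it
in the same direction -- momentum is still conserved, and no efficiency limit
is violated.)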

>Dr. Vannevar Bush, Senate testimony, 1945: 
>In my opinion such a thing is impossible ... People have been talking
>about a 3,000 mile high-angle rocket shot from one continent to
>another, carrying an atomic bomb and so directed as to be a precise
>weapon ... I think we can leave that out of our thinking.

Yes, we can produce any number of such stories.  Yet Sean's point stands: this
quote is about a supposed technological impossibility, not a physical one
(physics being the relevant science in this case).

>These seem silly in hindsight.  AI critics of 1994 will seem equally
>silly.  A future Matthews (while spleening on some future proposal)
>will note how critics of AI were just not paying attention in school,
>when it was obvious in 1994 that machines could think.  Why, by then,
>machines could read written text, understand speech, reason about
>complex subjects, navigate through the world, beat nearly everyone in
>intellectual games, not to mention accomplishing mathematical feats
>impossible for humans.

"Read"?  "Understand"?  "Reason"?  In each case what computers are doing is a
far cry from the usual meaning of the verb you are attaching to it.  I needn't
go down the list; you are well aware of this.

I think the opposite point is actually much easier to make.  Anyone who uses
computers in 1994 simply has to be impressed with their profound, infuriating
stupidity.

>			 And they were improving on all fronts at
>break-neck speed--each year some new barrier fell.  And anyway, it was
>obvious by then that intelligent mechanisms were possible, since the
>biologists had shown conclusively that humans themselves were
>mechanisms cobbled together by the trials and errors of Darwinian
>evolution.  Only fools denied it, or else they simply meant it
>couldn't be done that year, in which case they could be viewed as
>right.  But that new proposal --  why, that's strictly Analog stuff!

The irony here is that I think that the potential for AI to make machines more
useful, responsive, and easy to use has hardly begun to be tapped (partly
because the hardware we have had to tap it with has been so limited).  Unlike
a lot of people, I am "bullish", as the businessfolk say, on AI.  I think the
next decade may well see some massive breakthroughs, as hardware we hardly
dared dream of ten years ago (in the '80s AI feeding frenzy) becomes available
at prices I'll wager we simply *didn't* dare dream of ten years ago.

But replacing the brain with a piece of artificial hardware?  No.  Not soon,
not ever, and I think it *does* sound silly.  Whether articles such as
Marvin's really have negative consequences for AI funding, I have no idea;
but I think it's a valid concern for Sean to raise.

-- Scott Burson
