From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!wupost!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky Thu Dec 26 23:58:02 EST 1991
Article 2358 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!wupost!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Subject: Re: Are we scaled-up slug-brains or not? (was "In the news...")
Message-ID: <1991Dec22.010502.26831@news.media.mit.edu>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
References: <45307@mimsy.umd.edu> <45328@mimsy.umd.edu>
Date: Sun, 22 Dec 1991 01:05:02 GMT
Lines: 52

In article <45328@mimsy.umd.edu> kohout@cs.umd.edu (Robert Kohout) writes:

>Let me be the first of what may be a horde of posters to correct this
>blunder. You may have a distorted sense of godfathering, but Minsky
>& Papert's "Perceptrons" did not introduce the buggers in any sense,
>and did a good deal towards killing off research in the field for 20
>years. (In all fairness, it's most likely that the lack of adequate
>computational resources had the field at the edge of a cliff, and
>all it took was a little push from "Perceptrons".)
>
>Bob Kohout

In all fairness, maybe you should read Perceptrons. We did complain in
the book that there was no efficient algorithm with guaranteed
convergence for multilayer nets, and I think this is still true.  It
is false, although most people seem to assume it, that the theorems in
the book hold only for the 3-layer nets we discussed (not 1-layer
nets, as most also seem to assume); they hold for multilayer loop-free
nets as well.  The book was about loop-free nets, so if you didn't
invent Hopfield nets 20 years ago, that's your fault, not mine.  By
the way, I *did* apparently invent reinforcement-based neural net
learning machines, and long before my high school classmate Rosenblatt
did; they're in my 1954 Ph.D. thesis.
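
For concreteness, here is a minimal sketch of the single-layer
perceptron rule (modern Python, illustrative only, not from the book).
The perceptron convergence theorem guarantees it halts on linearly
separable data such as AND, but on XOR/parity, which no single-layer
net can represent, it keeps making mistakes forever; for multilayer
nets there is no analogous guarantee, which was exactly our complaint.

def train_perceptron(samples, max_epochs=100):
    # Classic mistake-driven perceptron rule on 2-input Boolean data.
    w = [0.0, 0.0]
    b = 0.0
    for epoch in range(max_epochs):
        errors = 0
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            if out != target:
                delta = target - out
                w[0] += delta * x[0]
                w[1] += delta * x[1]
                b += delta
                errors += 1
        if errors == 0:
            return epoch    # converged: a separating line exists
    return None             # never converged within max_epochs

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print("AND converged at epoch:", train_perceptron(AND))  # small integer
print("XOR converged at epoch:", train_perceptron(XOR))  # None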

*Flame on*

By the way, the main point of Perceptrons was to argue that it is not
enough to present anecdotal examples of successes; one should also
try to understand which problems cannot be solved with loop-free nets.
Most of our "order-limited" theorems apply to deeper nets, except that
sometimes you have to replace the N's in the theorems with fractional
powers of N for more layers.  The point, then, is that when someone
reports that Problem X was solved in time N by an NN with K inner
units, you want to know how K varies with N (see the parity sketch
below).  If K doesn't vary much, then you can conclude that the
problem was of relatively "low order" after all.  If you thought it
was a higher-order problem, then you were wrong and you learned
something.  It doesn't necessarily mean that Papert and I were wrong.
Why do I have to explain this again?  And the rumors that such NNs
can learn parity, etc., efficiently appear to be false, despite the
published reports claiming that they can.

*flame off*
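
To make the K-versus-N test concrete, here is an illustrative sketch
(modern Python, not from the book) of one standard construction: a
loop-free threshold net that computes N-bit parity using exactly N
hidden units.  K grows linearly with N here, which is the signature
of a problem that is not of "low order".

from itertools import product

def parity_net(bits):
    # Hidden unit j (j = 1..N) fires iff at least j inputs are on
    # (all input weights 1, threshold j); the output unit adds the
    # hidden units with alternating signs, leaving (sum of inputs) mod 2.
    n = len(bits)
    s = sum(bits)
    hidden = [1 if s >= j else 0 for j in range(1, n + 1)]  # K = N units
    out = sum(h if j % 2 == 1 else -h for j, h in enumerate(hidden, 1))
    return 1 if out >= 1 else 0

# Verify the construction exhaustively for N = 1..8.
for n in range(1, 9):
    assert all(parity_net(list(x)) == sum(x) % 2
               for x in product([0, 1], repeat=n))
print("verified: N-bit parity with K = N hidden units, for N = 1..8")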

But I agree, as Chalmers pointed out the other day, that the remarks
in Chapter 13 should have been better qualified.

So please read Perceptrons, my thesis, and Society of Mind.  And
please stop reading newspapers!  They're for the public, not for
super-specialized experts like the readers of this newsgroup.

P.S.  Maybe you'd better not read my thesis.  I'd guess that there are
only a few ideas in it that are not already out there.


