From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!wupost!darwin.sura.net!jvnc.net!nuscc!hilbert!smoliar Tue Nov 26 12:31:06 EST 1991
Article 1458 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai:1446 comp.ai.philosophy:1458
Newsgroups: comp.ai,comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!wupost!darwin.sura.net!jvnc.net!nuscc!hilbert!smoliar
From: smoliar@hilbert.iss.nus.sg (stephen smoliar)
Subject: Re: An open letter to Marvin Minsky
Message-ID: <1991Nov21.090006.8493@nuscc.nus.sg>
Keywords: Minsky neural network
Sender: usenet@nuscc.nus.sg
Organization: Institute of Systems Science, NUS, Singapore
References: <1991Nov21.041117.6563@unlinfo.unl.edu> <1991Nov21.052028.19231@cs.umn.edu>
Distribution: na
Date: Thu, 21 Nov 1991 09:00:06 GMT

First of all, I am cross-posting this to comp.ai.philosophy.  I have seen
Minsky post there, but I have never seen him post on comp.ai.  If you really
want him to respond, it is best to post where he seems to be reading.  Now to
the matter at hand:

In article <1991Nov21.052028.19231@cs.umn.edu> lsmith@cs.umn.edu (Lance
"Squiddie" Smith) writes:
>In article <1991Nov21.041117.6563@unlinfo.unl.edu> wdye@cse.unl.edu (William
>Dye) writes:
>
>>From others interested in AI, I heard a story regarding 
>>you and early neural net research.  As the story goes, you 
>>published a critical paper on neural nets, back in the early 
>>days of neural net research.  The paper essentially proved 
>>that a network with very few nodes was incapable of performing 
>>simple arithmetic functions.  
>
>The book is PERCEPTRONS by Minsky and Seymour Papert [1969].
>
>Perceptrons are essentially single-layer neural nets.  The book showed the
>limitations of these nets: mainly that perceptrons couldn't determine
>whether a figure was connected or not.  (Close?)

Minsky ought to make the final call, but I would certainly say "close."  The
basic result is that a single-layer net can learn to distinguish only those
patterns which are linearly separable.  As I recall, Minsky and Papert
observed that adding a layer increased the power of the net, but they also
observed that they had not been able to come up with a suitable technique
for training a net with more than one layer.  That training problem remained
unsolved until back-propagation came along.  As to whether or not the
publication of PERCEPTRONS was the cause of the lack of funding for neural
net research at that time, I shall let Minsky offer his own comments.
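
The linear-separability point is easy to demonstrate with the classic
counterexample: XOR.  Here is a minimal sketch (in modern Python, purely
illustrative) of the Rosenblatt perceptron learning rule; it converges on
the linearly separable AND function but never converges on XOR, no matter
how many epochs you allow.  The function and variable names are my own,
not anything from the book.

```python
# Rosenblatt perceptron learning rule: on a misclassification,
#   w <- w + lr * (target - prediction) * x   (with x0 = 1 for the bias).
# A single-layer perceptron can only realize linearly separable
# functions, so it learns AND but can never learn XOR.

def train_perceptron(samples, epochs=100, lr=0.1):
    w = [0.0, 0.0, 0.0]  # bias weight, then one weight per input
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            pred = 1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else 0
            if pred != target:
                errors += 1
                delta = lr * (target - pred)
                w[0] += delta          # bias input is always 1
                w[1] += delta * x1
                w[2] += delta * x2
        if errors == 0:
            return True   # converged: all four patterns classified
    return False           # ran out of epochs without converging

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(train_perceptron(AND))  # True  -- linearly separable
print(train_perceptron(XOR))  # False -- not linearly separable
```

The perceptron convergence theorem guarantees the first result; the
impossibility of drawing a single line that separates XOR's outputs
guarantees the second, which is exactly the kind of limitation the
book established.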
-- 
Stephen W. Smoliar; Institute of Systems Science
National University of Singapore; Heng Mui Keng Terrace
Kent Ridge, SINGAPORE 0511
Internet:  smoliar@iss.nus.sg
