Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!news.duke.edu!concert!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Statistics vs NN
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Cywryq.JuM@unx.sas.com>
Date: Mon, 7 Nov 1994 17:44:50 GMT
Distribution: na
References: <39b9in$of0@omnifest.uwm.edu> <39k43i$3q0@Radon.Stanford.EDU> <39kf8c$mkp@maui.cs.ucla.edu>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Lines: 47


In article <39kf8c$mkp@maui.cs.ucla.edu>, edwin@maui.cs.ucla.edu (E. Robert Tisdale) writes:
|> ...
|> There is nothing new or magical about neural networks.

Certainly true wrt magicalness.

|> Artificial neural
|> networks are sometimes used as ``black box models'' with lots of parameters
|> which can be adjusted to approximate the response of any system provided
|> that there are sufficient data to estimate the parameters accurately.
|> The methods used to adjust these parameters have barely changed at all
|> since they were invented by Karl Friedrich Gauss about 200 years ago.

Well, the basic ideas are pretty much the same, but there have been some
major improvements in implementation. E.g., the Levenberg-Marquardt
class of algorithms is far more reliable than plain Gauss-Newton. There
have also been theoretical advances in areas such as maximum likelihood
estimation and robust estimation, although the NN literature has largely
ignored such statistical theory.
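To make the Levenberg-Marquardt point concrete, here is a minimal sketch
(in modern NumPy, with a made-up exponential model y = a*exp(b*x) chosen
purely for illustration) of a damped Gauss-Newton loop: when a step
fails to reduce the sum of squared residuals, the damping is increased,
which is what makes the method more reliable than plain Gauss-Newton far
from the optimum.

```python
import numpy as np

# Illustrative target: fit y = a * exp(b * x), true values a=2, b=-1.
x = np.linspace(0.0, 2.0, 50)
y = 2.0 * np.exp(-1.0 * x)

def residuals(p):
    a, b = p
    return a * np.exp(b * x) - y

def jacobian(p):
    a, b = p
    e = np.exp(b * x)
    return np.column_stack([e, a * x * e])

def levenberg_marquardt(p, lam=1e-3, iters=100):
    for _ in range(iters):
        r, J = residuals(p), jacobian(p)
        # Damped normal equations: (J'J + lam*I) step = -J'r.
        # lam -> 0 gives the Gauss-Newton step; large lam gives a
        # short, gradient-descent-like step.
        A = J.T @ J + lam * np.eye(len(p))
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(residuals(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                    # reject step, add damping
    return p

a_hat, b_hat = levenberg_marquardt(np.array([1.0, 0.0]))
```

Plain Gauss-Newton is the lam=0 special case; the accept/reject test on
the residual sum of squares is the "reliability" improvement referred to
above.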

|> Neural network research seems to have contributed almost nothing at all
|> save for jargon and confusion.  Hope this helps, Bob Tisdale.

Jargon and confusion are among the major contributions of the NN
literature, but there are a few things in it that are useful:

 * Multilayer perceptrons are a very useful class of models that were
   apparently never considered in the statistical literature. MLPs are
   especially useful for prediction with high-dimensional inputs and
   are in my opinion a distinct improvement over projection pursuit
   regression because of the ease of computing predictions once the
   network is trained.

 * Recurrent nets appear to add some novel abilities in the area of
   time-series forecasting, although this is not an area that I know
   much about.

 * Stopped training may be useful. At least, it is worth serious
   investigation, as shown in the article I posted on the subject
   some time in the last month or two.
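On the stopped-training point, a minimal sketch (the linear model,
gradient-descent loop, and patience rule here are my own illustrative
choices, not anything from the article referred to above): iterate on a
training set, monitor error on a held-out set, and keep the weights
from the iteration where the held-out error was lowest.

```python
import numpy as np

# Toy regression problem split into training and validation halves.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + rng.normal(0.0, 0.5, 200)
Xtr, ytr, Xva, yva = X[:100], y[:100], X[100:], y[100:]

w = np.zeros(10)
best_w, best_err, patience = w.copy(), np.inf, 0
for step in range(500):
    grad = Xtr.T @ (Xtr @ w - ytr) / len(ytr)   # squared-error gradient
    w -= 0.05 * grad
    val_err = np.mean((Xva @ w - yva) ** 2)      # held-out error
    if val_err < best_err:
        best_w, best_err, patience = w.copy(), val_err, 0
    else:
        patience += 1
        if patience >= 10:   # held-out error stopped improving: stop
            break
```

The returned best_w is the iterate with the lowest held-out error, which
is the sense in which training is "stopped" before the training error
itself bottoms out.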
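As for the ease of computing MLP predictions mentioned in the first
point above: once the weights are estimated, a prediction is just two
affine maps and a squashing function. A sketch with one hidden layer
(the weights here are random stand-ins for trained values, and the
dimensions are arbitrary):

```python
import numpy as np

def mlp_predict(X, W1, b1, W2, b2):
    hidden = np.tanh(X @ W1 + b1)   # hidden-layer activations
    return hidden @ W2 + b2         # linear output layer

# Toy dimensions: 5 inputs, 3 hidden units, 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
yhat = mlp_predict(rng.normal(size=(10, 5)), W1, b1, W2, b2)
```

Contrast this with projection pursuit regression, where each ridge
function is a nonparametric smooth that must be stored and evaluated to
score new data.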

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
