Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!fas-news.harvard.edu!newspump.wustl.edu!trinews.sbc.com!news.mid.net!news.ksu.ksu.edu!vixen.cso.uiuc.edu!howland.reston.ans.net!news.sprintlink.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: NN Vs Stats......
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <D2zErG.Hqx@unx.sas.com>
Date: Wed, 25 Jan 1995 22:06:52 GMT
References: <1995Jan11.145719.1@ulkyvx.louisville.edu> <3fi9ec$jus@maui.cs.ucla.edu> <3fk4rc$pud@nyx10.cs.du.edu> <xg4Y5Lg.predictor@delphi.com> <3g2g2e$6p0@linuxguy.pku.edu.cn> <3g602q$hv5@vixen.cso.uiuc.edu>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Lines: 67


In article <3g602q$hv5@vixen.cso.uiuc.edu>, srjg7930@uxa.cso.uiuc.edu (johnson s) writes:
|> Wait a minute.   Statistical analyses cannot recover a pattern which has
|> missing parts or which is geometrically distorted (inverted, rotated, etc.)
|> while neural networks can.

Can you provide an example of a neural net that can do the above and is
not a statistical model?

|> I think this should indicate that though statistical
|> methods and neural net methods overlap, they do form distinct approaches.

Neural nets that are not used for data analysis are obviously not
statistical methods. But most neural nets that do supervised or
unsupervised learning for the purpose of data analysis _are_ statistical
models, as is discussed in various articles for which I have posted
references in this thread. The exceptions would mostly be methods that
work only with noise-free data. For example, the many variations on ART
do have some similarity to clustering algorithms, but since they do not
yield statistically consistent estimates with noisy data, no
statistician would be offended if they were to be declared
non-statistical.

|> Additionally, aren't most of the examples given in this thread all BP
|> type networks?  I mean, Kohonen and ART networks (and others) don't
|> use mean error gradient type learning, do they?  And doesn't that
|> exempt them from being classified as similar to statistical regression?

Whether something uses a gradient descent form of learning is utterly
irrelevant to whether it is a statistical method. While neural nets
are often defined in terms of their algorithms or implementations,
statistical methods are defined in terms of their results. The
arithmetic mean, for example, can be computed by a (very simple)
backprop net, by applying the usual formula SUM(x_i)/n, or by
various other methods. What you get is still an arithmetic mean
regardless of how you got there.
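To make that concrete, here is a small sketch of my own (not from any of
the cited articles): the same statistic arrived at two ways, the usual
formula and gradient descent on squared error, which is the update a
one-weight "backprop net" would perform.

```python
# Hypothetical illustration: the arithmetic mean computed two ways.
x = [2.0, 4.0, 6.0, 8.0]

# The usual formula: SUM(x_i)/n
mean_formula = sum(x) / len(x)

# Gradient descent on E(w) = 0.5 * SUM (x_i - w)^2, the error a
# one-weight linear net with a constant input would minimize
w = 0.0
lr = 0.01
for _ in range(5000):
    grad = sum(w - xi for xi in x)   # dE/dw
    w -= lr * grad

print(mean_formula)   # 5.0
print(round(w, 6))    # gradient descent converges to the same 5.0
```

Either way, the result is the arithmetic mean; only the algorithm that
produced it differs.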

Basic Kohonen networks (not LVQ or SOM) are very similar to k-means
clustering. If the activation of the output nodes is determined by
Euclidean distance rather than the more usual scalar product, then a
Kohonen network is really an alternative algorithm for k-means
clustering. 
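As a hypothetical illustration (the data and variable names are mine,
not from any published comparison), a winner-take-all net with
Euclidean-distance activation and the classic 1/n update is exactly
incremental k-means:

```python
# Sketch of a basic Kohonen-style winner-take-all net whose winner is
# chosen by Euclidean distance rather than scalar product.  With a 1/n
# learning rate this is an online form of k-means clustering.
import random

random.seed(0)
# Two well-separated 1-D "clusters" of data points
data = [0.0, 0.2, -0.2, 5.0, 5.2, 4.8]

w = [1.0, 4.0]      # codebook vectors (cluster centers)
counts = [0, 0]     # how many inputs each node has won

for epoch in range(50):
    random.shuffle(data)
    for x in data:
        # Winner = node nearest in Euclidean distance
        j = min(range(len(w)), key=lambda k: (x - w[k]) ** 2)
        counts[j] += 1
        # Move the winner toward the input with a 1/n learning rate;
        # each weight becomes the running mean of the inputs it has won
        w[j] += (x - w[j]) / counts[j]

print(sorted(w))  # close to the cluster means, 0.0 and 5.0
```

The 1/n step size makes each weight the exact running mean of its
cluster, which is why the result matches k-means rather than merely
resembling it.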

A simulation study comparing various Kohonen networks (NeuralWorks
Professional II) with a k-means clustering program (FASTCLUS in the
SAS/STAT product) was done by Balakrishnan, P.V., Cooper, M.C., Jacob,
V.S., and Lewis, P.A. (1994) "A study of the classification capabilities
of neural networks using unsupervised learning: A comparison with
k-means clustering", Psychometrika, 59, 509-525.  I recommend that
neural net researchers read this article as an example of how to do
simulations. The study varied the number of clusters, the number of
inputs, and the noise level. The clustering program produced fewer
classification errors than any of the Kohonen nets for all combinations
of the experimental factors. The error rate for the clustering program
was sensitive to the amount of noise, as one would expect, but only
slightly sensitive to the number of inputs (the more, the better) and
not sensitive to the number of clusters. The error rates for the Kohonen
nets were most sensitive to the number of clusters, with considerably
less sensitivity to the number of inputs or the noise level (very
strange!). Overall, the Kohonen nets made almost 10 times as many
errors as the clustering program.

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
