Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!news.sprintlink.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Results of NN Challenge:  NN versus Multiple Reg
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <D8L3F7.4Jo@unx.sas.com>
Date: Sun, 14 May 1995 19:57:07 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References:  <3p3d3n$p90@newsbf02.news.aol.com>
Organization: SAS Institute Inc.
Lines: 59


In article <3p3d3n$p90@newsbf02.news.aol.com>, dcacda@aol.com (DCACDA) writes:
|> ...
|> Third Column:  Neural Net Normalized Effect

What is this?

|> QUESTION 2:  How well can we predict an employee's commitment knowing
|> his/her scores on the predictor variables?
|>
|> The amount of variance explained was:
|>
|>                 R-square
|>
|> Regression:      .34716
|> Neural network   .42639
|>
|> It appears that the neural network identified a better combination of
|> weights to predict commitment than the regression approach.  The NN
|> approach explains 23% more variance than a regression approach, which is
|> quite a difference.
|>
|> Looks like the neural net wins!!

Now hold on a minute. There's a lot of crucial information that's been
left out. How are these R^2s computed? Are they based on the training
data, test data, bootstrapping, ...?  Were any regularization methods
used?

Using the usual least-squares training methods, a neural net will
typically get a higher R^2 than linear regression simply because the
net has more weights than the linear regression. There is also the
question of how many hidden units were used. If you tried several
different nets with different numbers of hidden units and picked the
one with highest R^2, that would also inflate the R^2 value.
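The inflation is easy to demonstrate on synthetic data. A minimal sketch
(assuming numpy; the added pure-noise columns stand in for the extra
weights that hidden units contribute -- not an actual net):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 100 cases, 5 genuine predictors, noisy linear target.
n, p = 100, 5
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(scale=1.0, size=n)

# Independent test sample from the same process.
Xtest = rng.normal(size=(n, p))
ytest = Xtest @ beta + rng.normal(scale=1.0, size=n)

def r_square(Xtr, ytr, Xte, yte):
    """Fit least squares on the training set; return train and test R^2."""
    A = np.column_stack([np.ones(len(Xtr)), Xtr])
    w, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    def r2(M, t):
        resid = t - np.column_stack([np.ones(len(M)), M]) @ w
        return 1 - resid @ resid / ((t - t.mean()) @ (t - t.mean()))
    return r2(Xtr, ytr), r2(Xte, yte)

# Plain linear regression on the true predictors.
tr0, te0 = r_square(X, y, Xtest, ytest)

# Add 30 pure-noise predictors: more weights, just like more hidden units.
junk, junk_test = rng.normal(size=(n, 30)), rng.normal(size=(n, 30))
tr1, te1 = r_square(np.hstack([X, junk]), y,
                    np.hstack([Xtest, junk_test]), ytest)

print(f"true model:     train R^2 = {tr0:.3f}, test R^2 = {te0:.3f}")
print(f"+30 junk cols:  train R^2 = {tr1:.3f}, test R^2 = {te1:.3f}")
# Training R^2 can only rise as weights are added; test R^2 need not.
```

Adding columns can never increase the training residual sum of squares,
so training R^2 rises mechanically with model size. Only a test-set or
cross-validated R^2 says anything about predictive value.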

|> Based on this comparison, I've tentatively concluded that both approaches
|> can find the salient variables equally well, but neural nets do a better
|> job of assigning weights to the variables (i.e. a more predictive linear
|> equation).  Is this statement fair?

Absolutely not! Linear regression generally does a better job of
assigning weights because analytical solutions exist in the linear
case--no iterations or fooling around with learning rates are required,
and there are no bad local optima.
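For example, with one predictor the least-squares slope and intercept
drop out of two closed-form expressions (a toy sketch with made-up
numbers, roughly y = 2x):

```python
# Simple linear regression y = a + b*x fit in closed form.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n

# Slope = sample covariance / sample variance; intercept from the means.
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
a = my - b * mx

print(a, b)  # roughly a = 0.05, b = 1.99 -- one pass, no iterations
```

No learning rate, no initialization, no local optima: the normal
equations give the global least-squares solution directly, which is why
a net is only worth the extra trouble when you need its nonlinearities.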

Neural nets with hidden layers are _not_ linear models. Their strength
comes from allowing nonlinearities in a very flexible way and allowing
convenient control of the complexity of the model. To take advantage of
these strengths, you have to train them properly, and this post gives us
no idea of how the nets were trained, or even what architecture was
used. Please fill in the details!


-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
