Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!godot.cc.duq.edu!newsgate.duke.edu!news.mathworks.com!newsfeed.internetmci.com!in2.uu.net!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Input Variable Contribution
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Du1p6J.6qn@unx.sas.com>
Date: Fri, 5 Jul 1996 00:54:19 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <31C1A2EE.274E@wvu.edu> <31C3B219.4DE0@cs.ucdavis.edu> <31C6F5B7.135C@mail.ddnet.es> <4qiojn$3q@sjx-ixn3.ix.netcom.com> <biocomp.22.02BE9E80@biocomp.seanet.com> <4qtj5j$i6u@yama.mcc.ac.uk> <biocomp.25.08732EC0@biocomp.seanet.com> <4r6rl8$hv0@s <biocomp.28.00422A80@biocomp.seanet.com>
Organization: SAS Institute Inc.
Lines: 40


In article <biocomp.28.00422A80@biocomp.seanet.com>, biocomp@biocomp.seanet.com (Carl M. Cook) writes:
|> 
|> In article <Dtvx2G.8zo@unx.sas.com> saswss@hotellng.unx.sas.com (Warren Sarle) writes:
|> [substantial snip]
|> >weights.  In neural nets, you can estimate the proportional change in
|> >the generalization error when the input is omitted from the model by
|> >retraining the network with that input omitted.
|> 
|> This is true if you initially trained and retrained enough times, particularly 
|> with BP, to average out the effect of initial weight randomization.  We've run 
|> quite a few models where building the same model over and over with different 
|> (random) initial weights can give substantially different results, even with 
|> the same input variables.

That's why it's important to use a global optimization method (the
simplest of which is just to retrain multiple times as Carl says) or to
use lots of hidden units.
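
For concreteness, here is a minimal sketch of the protocol being discussed
(my own illustration in Python/NumPy, not anyone's product): for each
candidate input, retrain from several random initial weights, keep the best
validation error, and compare it with the full model's. The tiny tanh
network, the training settings, and the synthetic data are all assumptions
made for the example.

```python
import numpy as np

def train_mlp(X, y, n_hidden=8, epochs=1000, lr=0.1, seed=0):
    """One-hidden-layer tanh net trained by plain batch gradient descent (BP)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, n_hidden)
    b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                 # hidden activations
        err = H @ W2 + b2 - y                    # output error
        dH = np.outer(err, W2) * (1.0 - H**2)    # backpropagated signal
        W2 -= lr * H.T @ err / len(y)
        b2 -= lr * err.mean()
        W1 -= lr * X.T @ dH / len(y)
        b1 -= lr * dH.mean(axis=0)
    return W1, b1, W2, b2

def mse(params, X, y):
    W1, b1, W2, b2 = params
    return np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)

def best_of_n(Xtr, ytr, Xval, yval, n_restarts=5):
    """Retrain from several random initializations and keep the best
    validation error, averaging out the luck of the initial weights."""
    return min(mse(train_mlp(Xtr, ytr, seed=s), Xval, yval)
               for s in range(n_restarts))

# Synthetic data: the target depends on inputs 0 and 1; input 2 is pure noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
Xtr, Xval, ytr, yval = X[:150], X[150:], y[:150], y[150:]

full = best_of_n(Xtr, ytr, Xval, yval)
ratios = {}
for i in range(3):
    keep = [j for j in range(3) if j != i]
    ratios[i] = best_of_n(Xtr[:, keep], ytr, Xval[:, keep], yval) / full
    print(f"input {i}: generalization error ratio without it = {ratios[i]:.2f}")
```

Omitting a relevant input should drive the error ratio well above 1, while
omitting the noise input should leave it near 1 — but only because each
comparison takes the best of several restarts; with a single training run
per model, the initialization noise Carl describes can swamp the difference.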

|> You could easily be misled about the results 
|> you get when leaving inputs out and retraining.  On networks like GRNN, PNN, 
|> etc., of course, this isn't an issue.  

Right.

|> We've automated this process in version 
|> 2.1 of the NGO so you can load a network, retrain it N times, and find the 
|> best network.

I'm confused. Genetic algorithms are supposed to be global optimizers
(at least if you tweak them properly), and the "G" in NGO is for
"genetic", so why would NGO need to do this?

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
