Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!godot.cc.duq.edu!newsgate.duke.edu!news.mathworks.com!nntp.primenet.com!uunet!inXS.uu.net!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Input Variable Contribution
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <DuFMHG.C3u@unx.sas.com>
Date: Fri, 12 Jul 1996 13:22:28 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <31C1A2EE.274E@wvu.edu> <31C3B219.4DE0@cs.ucdavis.edu> <4rvd6c$eas@llnews.ll.mit.edu> <DuEKzM.BHC@unx.sas.com>
Organization: SAS Institute Inc.
Lines: 23


In article <DuEKzM.BHC@unx.sas.com>, saswss@hotellng.unx.sas.com (Warren Sarle) writes:
|> ...
|> If we are using a nonlinear neural net instead of a linear model, we
|> can't simply look at the correlations among inputs to see if some inputs
|> can compensate when others are omitted from the network.  Since NNs are
|> often used with a large number of redundant inputs, it is often the case
|> that some degree of compensation will occur.  But the only way to tell
|> is to retrain.

A Wald test statistic, which is what "optimal brain surgeon" computes,
will give you an approximation to the error of the retrained network.
But this amounts to taking a single step of a Newton algorithm from the
current weights, so it is accurate only to the extent that a quadratic
approximation to the error surface holds, which it may not for a highly
nonlinear network.

   Hassibi, B. and Stork, D. G. (1993), "Second order derivatives for
   network pruning: Optimal brain surgeon," in Hanson, S. J., Cowan,
   J. D., and Giles, C. L., eds., Advances in Neural Information
   Processing Systems 5, San Mateo, CA: Morgan Kaufmann, 164-171.
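
For concreteness, here is a minimal sketch in Python/numpy of the two
formulas involved: the Wald-statistic saliency and the one-step Newton
correction applied to the surviving weights. The inverse Hessian H_inv
is assumed to be supplied from elsewhere (Hassibi and Stork give a
recursive estimate for it); the function names are mine, not theirs.

   import numpy as np

   def obs_saliency(w, H_inv):
       # Saliency of each weight q: L_q = w_q**2 / (2 * [H^-1]_qq).
       # This is the Wald statistic: the predicted increase in error
       # if weight q is zeroed and the other weights are adjusted.
       return w ** 2 / (2.0 * np.diag(H_inv))

   def obs_delete(w, H_inv, q):
       # One Newton step: zero weight q and adjust the rest by
       # delta_w = -(w_q / [H^-1]_qq) * H^-1 e_q.
       delta = -(w[q] / H_inv[q, q]) * H_inv[:, q]
       w_new = w + delta
       w_new[q] = 0.0   # force an exact zero against rounding error
       return w_new

Pruning repeats this, deleting the weight with the smallest saliency at
each pass. The point above is that delta_w is a single Newton step, so
the predicted error holds only as far as the quadratic approximation
does.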

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
