Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!fs7.ece.cmu.edu!hudson.lm.com!news.pop.psu.edu!news.cac.psu.edu!howland.reston.ans.net!gatech!concert!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Degrees of freedom in a net
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <D0M491.EME@unx.sas.com>
Date: Sat, 10 Dec 1994 20:43:49 GMT
References: <Cz294r.LJt@cs.dal.ca> <9QJUBJCC@venus.nbg.sub.org> <CzMs0o.K39@unx.sas.com> <X4UUB7DB@venus.nbg.sub.org> <D03Cyz.C8D@unx.sas.com> <5Z3UB71C@venus.nbg.sub.org>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Lines: 27


In article <5Z3UB71C@venus.nbg.sub.org>, alex@venus.nbg.sub.org (Alexander Adolf) writes:
|> Warren Sarle (saswss@hotellng.unx.sas.com) wrote:
|> ..
|> : In a nonlinear model, the above formulas may hold approximately, but
|> : there is no known exact formula for DF, and different uses, such as MSE
|> : and FPE, may require different formulas. One approach to computing DF in
|> : neural networks for a different method of estimating generalization
|> : error is given in Moody, J.E. (1992), "The Effective Number of
|> : Parameters: An Analysis of Generalization and Regularization in
|> : Nonlinear Learning Systems", NIPS 4, 847-854.
|>
|> Hm. Probably should have a look at that. But with all the
|> different architectures (meaning combinations of topology, neuron
|> model, and learning algorithm), could we expect such a computation
|> to yield generally usable results?

That's an open question, although the learning algorithm should make
no difference as long as it converges and you make a reasonable effort
to avoid local minima.
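As an aside, one special case where an "effective number of parameters"
has a simple closed form is a linear model with quadratic (ridge-style)
regularization: the fitted values are y_hat = S y with
S = X (X'X + lambda*I)^-1 X', and the effective DF is trace(S), which
equals the raw parameter count at lambda = 0 and shrinks as the penalty
grows. A minimal sketch (my own illustration, not code from Moody's
paper; assumes numpy):

```python
import numpy as np

def effective_df(X, lam):
    """Effective degrees of freedom of ridge regression:
    trace of the smoother matrix S = X (X'X + lam*I)^-1 X'."""
    p = X.shape[1]
    # Solve (X'X + lam*I) B = X' instead of forming an explicit inverse
    S = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    return np.trace(S)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
print(effective_df(X, 0.0))   # equals 5, the raw parameter count
print(effective_df(X, 10.0))  # strictly less than 5
```

For a nonlinear net the analogous quantity involves the curvature of the
error surface at the solution, which is why no exact general formula is
available.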


-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
