Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel-eecis!news.mathworks.com!news-res.gsl.net!news.gsl.net!sgigate.sgi.com!sdd.hp.com!swrinde!cs.utexas.edu!news.sprintlink.net!news-stk-200.sprintlink.net!news.sprintlink.net!new-news.sprintlink.net!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Q: Small or large weights ?
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <DtowKK.8Ft@unx.sas.com>
Date: Fri, 28 Jun 1996 03:05:08 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References:  <4qrk8c$96f@eng_ser1.erg.cuhk.hk>
Organization: SAS Institute Inc.
Lines: 30


In article <4qrk8c$96f@eng_ser1.erg.cuhk.hk>, ccszeto@cs.cuhk.hk (Szeto Chi Cheong) writes:
|> If I have two networks
|> (1) small number of large weights
|> (2) large number of small weights
|> 
|> Which one is better?

Assuming they generalize equally well:
  If you want to compute outputs quickly, (1) is better, since the
  cost of a forward pass is roughly proportional to the number of
  weights.
  If you want "graceful degradation", (2) is better, since the
  function is spread over many small weights, so no single weight
  or unit is critical. (A quick sketch of this follows.)

|> Does the first one correspond to limited degree of freedom and the second
|> one correspond to limited extent of search space?

As for degrees of freedom, yes and no: regularization such as
weight decay (which is what pushes you toward many small weights)
can make the effective number of parameters much smaller than the
raw weight count. See:

   Moody, J.E. (1992), "The Effective Number of Parameters: An Analysis
   of Generalization and Regularization in Nonlinear Learning Systems",
   Advances in Neural Information Processing Systems 4, 847-854.
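
In the linear (ridge regression) special case of what Moody
analyzes, the effective number of parameters has a closed form:
p_eff(lambda) = trace(X (X'X + lambda*I)^-1 X')
              = sum_i d_i^2 / (d_i^2 + lambda),
where the d_i are the singular values of the design matrix X.
A quick numpy sketch (my own illustration; the matrix and the
lambda values are arbitrary, just to show the shrinkage):

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))       # arbitrary design matrix

# Effective parameters for ridge regression:
# p_eff = trace(X (X'X + lam*I)^-1 X') = sum d_i^2/(d_i^2 + lam)
d = np.linalg.svd(X, compute_uv=False)
for lam in [0.0, 1.0, 10.0, 100.0]:
    p_eff = np.sum(d**2 / (d**2 + lam))
    print(f"lambda = {lam:5.1f}  p_eff = {p_eff:6.2f}")

At lambda = 0 you get all 20 raw parameters; as the penalty grows
and the fitted weights shrink, p_eff falls toward 0 even though
the number of weights never changes. So "many small weights" need
not mean many effective degrees of freedom.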

As for limited extent of search space, I have no idea what that means.



-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
