Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!bloom-beacon.mit.edu!uhog.mit.edu!news.mathworks.com!hookup!news.moneng.mei.com!howland.reston.ans.net!cs.utexas.edu!uunet!newsflash.concordia.ca!nstn.ns.ca!cs.dal.ca!cfn.cs.dal.ca!ab340
From: ab340@cfn.cs.dal.ca (John Shimeld)
Subject: Degrees of freedom in a net
Message-ID: <Cz294r.LJt@cs.dal.ca>
Sender: usenet@cs.dal.ca (USENET News)
Nntp-Posting-Host: cfn.cs.dal.ca
Organization: Chebucto FreeNet
X-Newsreader: TIN [version 1.2 PL2]
Date: Thu, 10 Nov 1994 16:43:36 GMT
Lines: 24



I'm wondering how to determine the number of degrees of freedom
in a neural network.  Consider a simple backpropagation net with 
three inputs, a single hidden layer of four nodes, and one output.
Each input node is connected to each hidden layer node, and each 
hidden layer node is connected to the output.  The hidden layer
nodes are not interconnected.  In my simple-minded manner, I would 
say that the number of degrees of freedom equals the total number 
of weights in the net: f = 3x4 + 4 = 16. 
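As a sketch of that counting argument (my own illustration, not
anything standard), the weight count of a fully connected 3-4-1
feedforward net is just the product of adjacent layer sizes, summed
over layers -- and if each hidden and output node also carries a bias
term, those are extra free parameters too:

```python
# Hypothetical sketch: count the free parameters of a fully
# connected 3-4-1 feedforward net, as described in the post.
layer_sizes = [3, 4, 1]  # inputs, hidden nodes, output

# Weights between successive layers: 3*4 + 4*1 = 16
n_weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
print(n_weights)  # 16

# If each non-input node also has a bias, add one per node: 4 + 1 = 5
n_biases = sum(layer_sizes[1:])
print(n_weights + n_biases)  # 21
```

Whether the biases should count toward f is part of what I'm unsure
about.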
 
A problem is that the weights are not really independent parameters.
Is it feasible, then, to determine f?  Also, once the net is trained, 
it may turn out that some of the connection weights are very close 
to zero.  In theory, these connections could be removed with little 
effect on the output values produced by the trained net.  Thus, it 
should be possible to design a new network topology with fewer 
degrees of freedom that produces output values comparable to the 
original net.
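To make the pruning idea concrete, here is a toy sketch (the weight
values and threshold are made up for illustration): drop every weight
whose magnitude falls below a small cutoff and count what survives,
which is the effective parameter count of the pruned topology:

```python
# Illustrative sketch: prune near-zero weights from a trained net.
# These 16 "trained" weights are invented; five are effectively zero.
weights = [0.8, -0.3, 0.001, 0.5, -0.02, 1.1, 0.0004, -0.7,
           0.9, 0.03, -1.2, 0.6, 0.002, -0.4, 0.25, 0.05]
threshold = 0.05  # arbitrary cutoff for "close to zero"

# Keep only connections whose weight magnitude clears the threshold.
kept = [w for w in weights if abs(w) >= threshold]
print(len(kept), "of", len(weights), "connections kept")  # 11 of 16
```

The pruned net would then have only 11 free weights, though how its
true degrees of freedom relate to that number is exactly my question.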
 
Does anyone care to comment, or to provide a more knowledgeable discussion
of this topic?  I'd be grateful for any responses.
 
ab340@cfn.cs.dal.ca
