Newsgroups: comp.ai.neural-nets,comp.speech
Path: lyra.csx.cam.ac.uk!pipex!bnr.co.uk!corpgate!news.utdallas.edu!chpc.utexas.edu!cs.utexas.edu!convex!news.duke.edu!concert!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: INITIALIZATION of a Perceptron
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Cu03Eo.I2H@unx.sas.com>
Cc: sasheei@arthur.st.nepean.uws.edu.au
Date: Thu, 4 Aug 1994 07:56:48 GMT
References:  <sasheei.775908578@lancelot.st.nepean.uws.edu.au>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Keywords: saturation, SLP, MLP
Lines: 20
Xref: lyra.csx.cam.ac.uk comp.ai.neural-nets:11136 comp.speech:3001


In article <sasheei.775908578@lancelot.st.nepean.uws.edu.au>,
sasheei@arthur.st.nepean.uws.edu.au (M Saseetharran) writes:
|> ...
|> I understand that small weights overcome saturation at initialization.

True.

|> However, too small weights could approach 0.0 and therefore training
|> will not take place.

Only if you are using a learning-challenged training method such as
standard backprop, or if most of the weights are exactly zero.  Small
but nonzero weights still produce nonzero gradients, so gradient-based
training can proceed, if slowly; it is exact zeros that stall it.
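A small sketch of the point under discussion (my illustration, not from
the original post): with a logistic unit, large initial weights push net
inputs onto the flat tails of the sigmoid, where the derivative s*(1-s)
is nearly zero (saturation), while small nonzero weights keep net inputs
near zero, where the gradient is close to its maximum of 0.25.  The
weight values 50.0 and 0.01 are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = rng.standard_normal(100)          # typical unit inputs

# Large initial weights: net inputs land on the flat tails of the
# sigmoid, so the derivative s*(1-s) is nearly zero (saturation).
w_large = 50.0
s_large = sigmoid(w_large * x)
grad_large = s_large * (1.0 - s_large)

# Small but nonzero initial weights: net inputs stay near 0, where the
# sigmoid is steepest, so gradients are healthy and training proceeds.
w_small = 0.01
s_small = sigmoid(w_small * x)
grad_small = s_small * (1.0 - s_small)

print(grad_large.mean())   # near 0: saturated
print(grad_small.mean())   # near 0.25: close to the sigmoid's max slope
```

The same comparison explains the original question: the danger at
initialization is saturation from weights that are too large, not
weights that are merely small.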


-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
