Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!newsstand.cit.cornell.edu!newstand.syr.edu!news.maxwell.syr.edu!EU.net!sun4nl!surfnet.nl!swidir.switch.ch!CERN.ch!news
From: john nigel gamble <gamble@dxcoms.cern.ch>
Subject: BackProp convergence
X-Nntp-Posting-Host: dxcoms.cern.ch
Content-Type: text/plain; charset=us-ascii
Message-ID: <32A6A1DE.41C6@dxcoms.cern.ch>
Sender: news@news.cern.ch (USENET News System)
Content-Transfer-Encoding: 7bit
Cc: gamble@dxcoms.cern.ch
Organization: CERN European Lab for Particle Physics
Mime-Version: 1.0
Date: Thu, 5 Dec 1996 10:20:14 GMT
X-Mailer: Mozilla 2.02 (X11; I; OSF1 V3.2 alpha)
Lines: 20

Can anyone help based on their experience?

I was playing with a simple 2-2-1 NN for the XOR
problem, adjacent layers fully connected. I noticed the following:

If I continue to train the network, the RMS error keeps
improving (down to 1e-13 and better), but the weights run to
the limits I have imposed (i.e. +-20) rather than
stabilising at some other "balanced" value.
It's the weight values that bother me.

Question: Is this "normal", or do I have
an implementation problem in the algorithm?
[The algorithm seems to work fine for other nets,
but it is sensitive to the initial weight values.]

A YES/NO answer will do (YES = normal), but more detail
is welcome.
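[Editorial note: a hypothetical sketch, not the poster's code. A 2-2-1 sigmoid net trained with plain batch backprop on XOR reproduces the behaviour described above: a sigmoid reaches exact 0/1 only in the limit of infinite net input, so pushing the RMS error toward zero forces the weights to keep growing rather than settle on finite "balanced" values. The function and variable names here are illustrative.]

```python
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(seed, epochs=50000, lr=0.5):
    """Plain batch backprop on a fully connected 2-2-1 sigmoid net."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-1, 1, (2, 2)); b1 = np.zeros(2)  # input  -> hidden
    W2 = rng.uniform(-1, 1, (2, 1)); b2 = np.zeros(1)  # hidden -> output
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                 # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - Y) * out * (1 - out)      # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)       # hidden-layer delta
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    rms = float(np.sqrt(np.mean((out - Y) ** 2)))
    wmax = float(max(np.abs(W1).max(), np.abs(W2).max()))
    return rms, wmax

# Backprop on XOR is sensitive to the initial weights, so restart from a
# few random seeds and keep the run that fits XOR best.
rms, wmax = min(train_xor(s) for s in range(5))
print(f"rms error = {rms:.2e}, largest |weight| = {wmax:.1f}")
```

Training longer drives the error lower still while the largest weight magnitude keeps climbing; it never settles unless clipped, which matches the observation above.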

John.
