Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!cam-news-feed3.bbnplanet.com!news.bbnplanet.com!cam-news-hub1.bbnplanet.com!news.mathworks.com!EU.net!CERN.ch!news
From: john nigel gamble <gamble@dxcoms.cern.ch>
Subject: Re: BackProp convergence
X-Nntp-Posting-Host: dxcoms.cern.ch
Content-Type: text/plain; charset=us-ascii
Message-ID: <32AC29C9.41C6@dxcoms.cern.ch>
Sender: news@news.cern.ch (USENET News System)
Content-Transfer-Encoding: 7bit
Cc: gamble, stephanh@fwi.uva.nl
Organization: CERN European Lab for Particle Physics
Mime-Version: 1.0
Date: Mon, 9 Dec 1996 15:01:29 GMT
X-Mailer: Mozilla 3.01Gold (X11; I; OSF1 V3.2 alpha)
Lines: 30

Thanks for all the replies. I think Stephan's reply is the one
that fits my situation. I can train the 2-2-1 net to any target
rms - it's just that the harder I push it, the more the
weights go to the limits.

>YES, it's normal depending on the implementation choices
>you have made. I assume that your output neuron has an
>activation function like a sigmoid or something. This
>function can take the value ONE or ZERO only if the
>weighted sum of the neuron is plus or minus infinity.
>If the target values of the XOR problem are also ONE and
>ZERO, then the RMS can only be reduced by increasing the
>weights.

>Two possible solutions:
>- Make sure that the activation function of the output
>  neuron can produce the target value for a finite
>  weighted sum (a linear function will do).
>- Change the target values. For instance, use 0.1 and 0.9 as
>  target values. The sigmoid can reach these values for
>  a finite input.
>
>Hope this will help,
>
>Stephan
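To illustrate Stephan's point, here is a small Python sketch (not from the original posts) showing that a sigmoid output can only approach 0 and 1 asymptotically, while softened targets like 0.1 and 0.9 correspond to finite weighted sums via the inverse sigmoid:

```python
import math

def sigmoid(x):
    """Standard logistic activation: maps any real x into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    """Inverse sigmoid: the weighted sum needed to output p, for 0 < p < 1."""
    return math.log(p / (1.0 - p))

# Targets of exactly 0 or 1 are unreachable for any finite weighted sum:
# even a large input only gets close, so training keeps growing the weights.
print(sigmoid(10.0))   # ~0.99995, still short of 1.0

# Softened targets 0.1 and 0.9 are hit exactly at finite inputs:
print(logit(0.9))      # ~ +2.197
print(logit(0.1))      # ~ -2.197
```

With targets of 0.9 and 0.1, backprop can drive the RMS error to zero without pushing the weights toward infinity.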

I guess this would be the case for any network
where you have carefully scaled the data to be within [0,1]?

John.
