Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!scramble.lm.com!news.math.psu.edu!news.cse.psu.edu!uwm.edu!vixen.cso.uiuc.edu!newsfeed.internetmci.com!in1.uu.net!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Q: RPROP Algorithm
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Dr62tF.327@unx.sas.com>
Date: Fri, 10 May 1996 01:58:27 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References:  <4mtuc8$aae@mirv.unsw.edu.au>
Organization: SAS Institute Inc.
Lines: 25


In article <4mtuc8$aae@mirv.unsw.edu.au>, nickt@cse.unsw.edu.au (Nick Treadgold) writes:
|> Hi, I'm implementing the RPROP algorithm based on the algorithm
|> described in the 1993 paper (Braun,Riedmiller) and the 1994 technical
|> report (Riedmiller). I've noticed that there is a slight change in the
|> algorithm between these papers:
|> 
|> The 1993 paper says to take back the previous weight step if a local
|> minimum has been stepped over, and reduce the step size.
|> 
|> The 1994 paper says to just reduce the step size.

Not having seen the 1993 paper when I programmed RPROP, I added the
feature of restoring the previous weights when the error function
increased simply because this is the usual sort of thing done in the
numerical analysis literature. After considerable experimentation,
I came to the surprising conclusion that this feature wasn't worth
the bother. I'm afraid I didn't save those results, though, so I
can't provide any further details.
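For anyone comparing the two variants, here is a minimal sketch of a single
RPROP update in NumPy. It is not code from the papers or from my implementation;
the function name, the `backtrack` switch, and the hyperparameter defaults
(eta+ = 1.2, eta- = 0.5, step bounds 1e-6 and 50) are illustrative choices,
though the defaults are the commonly cited ones. With `backtrack=True` it
follows the 1993 recipe (undo the previous weight step where the gradient sign
flipped); with `backtrack=False` it follows the 1994 simplification (just
shrink the step size):

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, prev_dw,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0,
               backtrack=True):
    """One RPROP update on weight vector w.

    Returns (new_w, grad_to_store, new_step, dw), where grad_to_store
    should be passed back in as prev_grad on the next call.
    """
    same = grad * prev_grad > 0   # gradient kept its sign
    flip = grad * prev_grad < 0   # gradient changed sign (minimum stepped over)

    # Adapt per-weight step sizes multiplicatively, within bounds.
    step = np.where(same, np.minimum(step * eta_plus, step_max), step)
    step = np.where(flip, np.maximum(step * eta_minus, step_min), step)

    # Step against the sign of the gradient; magnitude is the step size only.
    dw = -np.sign(grad) * step
    if backtrack:
        # 1993 variant: where the sign flipped, take back the previous step.
        dw = np.where(flip, -prev_dw, dw)

    # Standard trick: zero the stored gradient where the sign flipped,
    # so the step size is not adapted again on the next iteration.
    grad_to_store = np.where(flip, 0.0, grad)
    return w + dw, grad_to_store, step, dw
```

Running this on a one-dimensional quadratic (gradient of w^2/2 is just w) from
w = 5 drives the weight toward zero either way, which is consistent with the
observation above that the backtracking feature makes little difference in
practice.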

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
