Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!news.mathworks.com!newsfeed.internetmci.com!in2.uu.net!perkin-elmer.com!holdendp.abd.perkin-elmer.com!user
From: holdendp@perkin-elmer.com (Dave Holden)
Subject: BP question: Epoch vs. every iteration training
X-Nntp-Posting-Host: 167.116.106.189
Message-ID: <holdendp-150596091820@holdendp.abd.perkin-elmer.com>
Followup-To: comp.ai.neural-nets
Sender: usenet@netlink.perkin-elmer.com
Organization: Applied Biosystems
Date: Wed, 15 May 1996 16:18:20 GMT
Lines: 15

Hi,

I wrote a vanilla BP and have traditionally trained my weights on every
iteration.  I added a mechanism to give each weight its own learning rate
based on whether or not it was bouncing between sides of a well (i.e. if
thrashing, lower the learning rate; if constant descent, raise it).
Both mechanisms work fine alone, but together the net is very unstable
and keeps jumping way out of minima.  I got the latter trick from a book
by Murray Smith called "Neural Networks for Statistical Modeling".
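In case it helps, here is a stripped-down sketch of the sign-flip
heuristic I mean (not my actual BP code; the function and parameter
names are made up, and the demo just minimizes w^2 for two independent
weights instead of running backprop):

```python
# Per-weight adaptive learning rates via the gradient-sign heuristic:
# if this step's gradient has the same sign as the last one, the weight
# is in constant descent, so grow its rate; if the sign flipped, it is
# bouncing between sides of a well, so shrink its rate.

def adapt_rates(grads, prev_grads, rates, up=1.05, down=0.5):
    """Return updated per-weight learning rates."""
    new_rates = []
    for g, pg, r in zip(grads, prev_grads, rates):
        if g * pg > 0:            # same sign: constant descent
            new_rates.append(r * up)
        elif g * pg < 0:          # sign flip: thrashing in a well
            new_rates.append(r * down)
        else:                     # a gradient was zero: leave rate alone
            new_rates.append(r)
    return new_rates

# Toy demo: minimize f(w) = w^2 for two weights, df/dw = 2w.
weights = [4.0, -3.0]
rates = [0.1, 0.1]
prev = [0.0, 0.0]
for step in range(50):
    grads = [2.0 * w for w in weights]
    rates = adapt_rates(grads, prev, rates)
    weights = [w - r * g for w, r, g in zip(weights, rates, grads)]
    prev = grads
```

On this toy problem the rates grow until a weight overshoots, the sign
flips, and the rate gets cut in half, which keeps things stable.  My
question is why the same scheme blows up when combined with
every-iteration training on a real net.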

Has anyone used these techniques together successfully?  Smith says this
is his network of choice.

Thanks,
Dave Holden
