Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!godot.cc.duq.edu!news.duke.edu!news.mathworks.com!newsfeed.internetmci.com!in2.uu.net!perkin-elmer.com!holdendp.abd.perkin-elmer.com!user
From: holdendp@perkin-elmer.com (Dave Holden)
Subject: Re: BP question: Epoch vs. every iteration training
X-Nntp-Posting-Host: 167.116.106.189
Message-ID: <holdendp-150596110005@holdendp.abd.perkin-elmer.com>
Followup-To: comp.ai.neural-nets
Sender: usenet@netlink.perkin-elmer.com
Organization: Applied Biosystems
References: <holdendp-150596091820@holdendp.abd.perkin-elmer.com>
Date: Wed, 15 May 1996 18:00:05 GMT
Lines: 23

In article <holdendp-150596091820@holdendp.abd.perkin-elmer.com>,
holdendp@perkin-elmer.com (Dave Holden) wrote:

> Hi,
> 
> I wrote a vanilla BP and have traditionally trained my weights on every
> iteration.  

Forgot to say here: now I'm trying to update the weights once per epoch,
not on every iteration.

> I added a mechanism to give each weight its own learning rate
> based on whether or not it was bouncing between sides of a well (i.e., if
> thrashing, lower the learning rate; if in constant descent, raise it).
> Both mechanisms work fine alone, but together they're very unstable: the
> net keeps jumping way out of minima.  I got the latter trick from a book
> by Murray Smith called "Neural Networks for Statistical Modeling".
> 
> Has anyone used these techniques together successfully?  Smith says this
> is his network of choice.
> 
> Thanks,
> Dave Holden

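For concreteness, here is a minimal sketch of the combination being described: per-epoch (batch) gradient updates plus a per-weight learning rate that grows while a weight's gradient keeps the same sign and shrinks when the sign flips (close in spirit to Jacobs' delta-bar-delta rule). This is not the poster's actual code; the growth/shrink factors (1.1 and 0.5), the starting rate, and the toy linear-regression data are illustrative assumptions only.

```python
# Sketch: batch ("per-epoch") gradient descent on y = w*x + b with a
# separate adaptive learning rate for each weight.  Same gradient sign
# as last epoch -> steady descent, grow the rate; flipped sign -> the
# weight is bouncing across a well, shrink the rate.

def train(xs, ys, epochs=200):
    w, b = 0.0, 0.0              # model parameters for y = w*x + b
    lr = [0.01, 0.01]            # one learning rate per weight (assumed start)
    prev = [0.0, 0.0]            # previous epoch's gradients
    for _ in range(epochs):
        # accumulate gradients over the whole training set (epoch update)
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        for i, g in enumerate((gw, gb)):
            if g * prev[i] > 0:      # same sign: speed up (factor assumed)
                lr[i] *= 1.1
            elif g * prev[i] < 0:    # sign flip: thrashing, slow down
                lr[i] *= 0.5
        w -= lr[0] * gw
        b -= lr[1] * gb
        prev = [gw, gb]
    return w, b

# Toy data drawn from y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b = train(xs, ys)
```

Note this sketch adapts each rate using the *batch* gradient, so the sign test compares whole-epoch directions; mixing it with per-iteration updates would feed the adapter noisy per-pattern gradient signs, which is one plausible source of the instability described above.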