Newsgroups: sci.math.num-analysis,comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!news.kei.com!ddsw1!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Question: gradient descent
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <D84Jos.2ry@unx.sas.com>
Date: Fri, 5 May 1995 21:29:16 GMT
Distribution: inet
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <3o6ck7$6fk@rover.ucs.ualberta.ca> <D812BB.A7s@unx.sas.com> <3oc57p$rfa@rover.ucs.ualberta.ca> <3odoqi$ese@sunserver.lrz-muenchen.de> <3oe12k$k88@rover.ucs.ualberta.ca>
Organization: SAS Institute Inc.
Lines: 23
Xref: glinda.oz.cs.cmu.edu sci.math.num-analysis:20618 comp.ai.neural-nets:23881


In article <3oe12k$k88@rover.ucs.ualberta.ca>, fwang@ucs.ualberta.ca (Feng Wang) writes:
|> ta131ab@sun5.lrz-muenchen.de (Thomas F. Enders) writes:
|> ...
|> >Actually, it _is_ practical to keep a different learning rate for each
|> >weight. RPROP does something like that by keeping an adaptive value for
|> >the size of the next weight change for each weight. As Warren Sarle
|> >already mentioned, this and other adaptive methods work quite well:
|> [...]
|>
|> Could you tell me where I can get the reference or software about
|> RPROP?

   Riedmiller, M. and Braun, H. (1993), "A Direct Adaptive Method for
   Faster Backpropagation Learning: The RPROP Algorithm", Proceedings
   of the IEEE International Conference on Neural Networks 1993, San
   Francisco: IEEE.
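For the curious, the gist of RPROP is easy to state: each weight keeps its own
step size, which grows when the partial derivative keeps the same sign on
successive iterations and shrinks when the sign flips; only the sign of the
gradient (not its magnitude) determines the update direction. A minimal sketch
of that idea follows -- assumed constants (eta_plus, eta_minus, the delta
bounds) are the commonly quoted defaults, and the sign-flip handling here is a
simplified variant, not the paper's exact pseudocode:

```python
import numpy as np

def rprop_minimize(grad_fn, w, steps=200,
                   eta_plus=1.2, eta_minus=0.5,
                   delta_init=0.1, delta_min=1e-6, delta_max=50.0):
    """Sketch of RPROP: per-weight adaptive step sizes, sign-based updates.
    grad_fn(w) must return the gradient at w (same shape as w)."""
    delta = np.full_like(w, delta_init)   # one step size per weight
    prev_grad = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        same_sign = g * prev_grad
        # gradient kept its sign: this weight's step size grows (capped)
        delta = np.where(same_sign > 0,
                         np.minimum(delta * eta_plus, delta_max), delta)
        # gradient flipped sign: we overshot, so shrink the step size
        delta = np.where(same_sign < 0,
                         np.maximum(delta * eta_minus, delta_min), delta)
        # on a sign flip, skip this weight's update for one iteration
        g = np.where(same_sign < 0, 0.0, g)
        w = w - np.sign(g) * delta        # magnitude of g is ignored
        prev_grad = g
    return w

# toy usage: minimize f(w) = sum((w - 3)^2), whose gradient is 2*(w - 3)
w = rprop_minimize(lambda w: 2.0 * (w - 3.0), np.array([0.0, 10.0]))
print(w)  # each component should settle near 3
```

Note that because the update uses only the sign of the gradient, the widely
differing curvatures that plague a single global learning rate do not hurt
here -- which is the point made above about keeping a different learning
rate for each weight.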

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
