Newsgroups: comp.ai.neural-nets
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Q: RPROP vs LM/SCG?
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <DsnKJ0.LwI@unx.sas.com>
Date: Fri, 7 Jun 1996 23:13:48 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References:  <4p85on$54g@mirv.unsw.edu.au>
Organization: SAS Institute Inc.


In article <4p85on$54g@mirv.unsw.edu.au>, nickt@cse.unsw.edu.au (Nick Treadgold) writes:
|>      I've been looking at RPROP, and I'm wondering how RPROP
|> compares to other training algorithms such as quasi-Newton,
|> Levenberg-Marquardt, and conjugate gradient methods (in cases where
|> there are small, moderate, and large numbers of network weights).

I have found RPROP and Quickprop to be faster than the other
algorithms mentioned only in networks with many more weights
than training cases, such as the 10-5-10 encoder.
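
For anyone who hasn't seen the algorithm, here is a minimal sketch of
the RPROP update in Python/NumPy, using a common simplification that
drops the weight-backtracking step of the original. The function and
variable names are my own; the constants are the defaults Riedmiller
and Braun suggest (eta+ = 1.2, eta- = 0.5, step bounds 1e-6 and 50):

    import numpy as np

    def rprop_step(w, grad, prev_grad, step,
                   eta_plus=1.2, eta_minus=0.5,
                   step_min=1e-6, step_max=50.0):
        # Adapt each weight's private step size from the sign of its
        # gradient: grow it while the sign holds, shrink it on a flip.
        same    = grad * prev_grad > 0
        flipped = grad * prev_grad < 0
        step = np.where(same,    np.minimum(step * eta_plus,  step_max), step)
        step = np.where(flipped, np.maximum(step * eta_minus, step_min), step)
        # After a sign flip, zero the gradient so this weight skips one
        # update and its step is not shrunk a second time next pass.
        grad = np.where(flipped, 0.0, grad)
        w = w - np.sign(grad) * step
        return w, grad, step

Start each weight with prev_grad = 0 and step = 0.1, and call this once
per full pass over the training set (RPROP is a batch method). Since
only the sign of the gradient is used, an iteration costs O(W) time and
memory in the number of weights W, while Levenberg-Marquardt must form
and factor a WxW matrix; that is one reason the relative speed of these
methods depends so strongly on network size.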

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
