Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!newsstand.cit.cornell.edu!newstand.syr.edu!news.maxwell.syr.edu!www.nntp.primenet.com!nntp.primenet.com!howland.erols.net!cs.utexas.edu!bcm.tmc.edu!newshost.convex.com!newsgate.duke.edu!interpath!news.interpath.net!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: BackProp convergence
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <E28Es9.ILB@unx.sas.com>
Date: Wed, 11 Dec 1996 04:21:45 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <32A6A1DE.41C6@dxcoms.cern.ch> <58aure$bi6@gilnt2.ipswich.gil.com.au> <32AB4011.6904@lex.infi.net>
Organization: SAS Institute Inc.
Lines: 30


In article <32AB4011.6904@lex.infi.net>, JPEYTON <jpeyton@lex.infi.net> writes:
|> rotont@siml.com.au wrote:
|> 
|> > You can expect this from the basic Backprop algorithm.
|> > 
|> > You should try the Marquardt-Levenberg algorithm, a second-order
|> > method which is faster, generally overcomes local minima, and
|> > nearly guarantees convergence.
...
|>  Could you tell me (us) more about the Marquardt-Levenberg
|> algorithm. I'd be really interested in an algorithm that's less
|> finicky than the plain vanilla backprop model.

See "What are conjugate gradients, Levenberg-Marquardt, etc.?" 
in the Neural Network FAQ, part 2 of 7: Learning, at
ftp://ftp.sas.com/pub/neural/FAQ2.html

Note that Levenberg-Marquardt is a local optimization method. I would
agree, based on my experience, that it is less prone to local optima
than standard batch backprop, but there are no guarantees.
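For the curious, the core idea is a damped Gauss-Newton step on the
sum of squared errors: dw = (J'J + lambda*I)^-1 J'r, where J is the
Jacobian of the residuals r, and lambda is adapted between
gradient-descent-like (large lambda) and Newton-like (small lambda)
behavior. Here is a toy NumPy sketch on a curve-fitting problem (my
own illustration, not the FAQ's code, and the function names are
made up):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, w, n_iter=100, lam=1e-3):
    """Minimize sum(residual(w)**2) with the damped Gauss-Newton
    update dw = (J'J + lam*I)^-1 J'r.  Illustrative sketch only."""
    for _ in range(n_iter):
        r = residual(w)
        J = jacobian(w)
        A = J.T @ J + lam * np.eye(len(w))
        dw = np.linalg.solve(A, J.T @ r)
        w_new = w - dw
        if np.sum(residual(w_new) ** 2) < np.sum(r ** 2):
            w = w_new       # step helped: trust the quadratic model more
            lam *= 0.1
        else:
            lam *= 10.0     # step hurt: back off toward gradient descent
    return w

# Toy problem: least-squares fit of y = a*exp(b*x).
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)

def residual(w):
    a, b = w
    return a * np.exp(b * x) - y

def jacobian(w):
    a, b = w
    return np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])

w = levenberg_marquardt(residual, jacobian, np.array([1.0, 0.0]))
# w should end up near (2.0, -1.5)
```

Note that the cost of forming and solving J'J grows quickly with the
number of weights, which is why LM is usually recommended only for
smallish networks.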


-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
 *** Do not send me unsolicited commercial or political email! ***

