Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!oitnews.harvard.edu!purdue!haven.umd.edu!news.umbc.edu!eff!news.duke.edu!news.mathworks.com!newsfeed.internetmci.com!in2.uu.net!orincon.com!news
From: Dopey Dope <jessem>
Subject: Re: Looking for conjugate gradient
X-Nntp-Posting-Host: orcas.orincon.com
Content-Type: text/plain; charset=us-ascii
Message-ID: <DJzz2y.30r.0.-s@orincon.com>
Sender: news@orincon.com (News System)
Content-Transfer-Encoding: 7bit
Organization: Orincon Corporation
References: <4a736b$ajv@goya.eunet.es> <4agrgb$eho@fstgal00.tu-graz.ac.at>
Mime-Version: 1.0
Date: Fri, 22 Dec 1995 16:56:57 GMT
X-Mailer: Mozilla 1.1 (X11; U; SunOS 4.1.3 sun4c)
X-Url: news:4agrgb$eho@fstgal00.tu-graz.ac.at
Lines: 32

GILLETTE@JOANNEUM.ADA.AT (Karine Gillette) wrote:
>In article <4a736b$ajv@goya.eunet.es>, bolsamad@dial.eunet.es says...
>
>>
>>Dear friends,
>>
>>Does anybody know where I can find a well-explained conjugate gradient
>>algorithm (by D.F. Shanno, inexact search, BFGS...) to improve the
>>convergence of my backpropagation NN ?
>>
>
>I read that conjugate gradient descent was a synonym for Backpropagation
>with a momentum term.
>
>Can it help you?
>
>Karine
>
>******************************
>*        Karine Gillette     *
>*        Austria/France      *
>* GILLETTE(a)JOANNEUM.ADA.AT *
>******************************
>
I think (and again, I could be wrong) that conjugate gradient is not the
same thing as momentum; it's an alternative optimization method to the
standard steepest descent rule you typically see in backprop.  Instead
of always stepping along the negative gradient, each new search
direction is chosen to be conjugate to the previous ones, and a line
search sets the step size.  Numerical Recipes in C has an excellent
discussion on the use of conjugate gradient.
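To make the difference from plain steepest descent concrete, here's a
minimal sketch of the linear conjugate gradient method in Python (plain
Python lists, no libraries).  This is the quadratic case from the
Numerical Recipes discussion, not the full nonlinear variant you'd wire
into a backprop trainer; the example matrix and tolerances are my own
illustrative choices.

```python
# Minimal linear conjugate gradient for A x = b with A symmetric
# positive-definite, i.e. minimizing f(x) = 0.5 x'Ax - b'x.
# Illustrative sketch only -- a backprop trainer would use the
# nonlinear variant (Fletcher-Reeves / Polak-Ribiere) on the error
# surface instead of a fixed matrix A.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    n = len(b)
    x = [0.0] * n
    # Residual r = b - A x is the negative gradient of f at x.
    r = [bi - Axi for bi, Axi in zip(b, matvec(A, x))]
    d = r[:]                      # first direction: steepest descent
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ad = matvec(A, d)
        alpha = rs_old / dot(d, Ad)            # exact line search along d
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * Adi for ri, Adi in zip(r, Ad)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        beta = rs_new / rs_old                 # Fletcher-Reeves coefficient
        # New direction is conjugate to all previous ones (w.r.t. A).
        d = [ri + beta * di for ri, di in zip(r, d)]
        rs_old = rs_new
    return x

# Small SPD system; in exact arithmetic CG finishes in at most n steps.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

The key contrast with the plain delta rule is the `beta` term: steepest
descent corresponds to setting `beta = 0` every iteration, which is why
CG usually needs far fewer steps on ill-conditioned error surfaces.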

Hope that helps...
Jesse

