Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!satisfied.apocalypse.org!news.mathworks.com!news.duke.edu!agate!hpg30a.csc.cuhk.hk!hkpu01.hkp.hk!ma281209
From: ma281209@hkpu01.hkp.hk (ma)
Subject: Re: Conjugate Gradient method
Message-ID: <D58J9L.LCw@hkpu01.hkp.hk>
Organization: Hong Kong Polytechnic University
X-Newsreader: TIN [version 1.2 PL0]
References: <3jhkod$3sn@eng_ser1.erg.cuhk.hk> <TAP.95Mar7103805@pearson.epi.terryfox.ubc.ca>
Date: Fri, 10 Mar 1995 17:29:44 GMT
Lines: 11


: Following are references for two good books on optimization, and
: various papers about conjugate gradient optimization and neural
: networks.  The conjugate gradient method is usually *much* faster than
: steepest descent (the only exception seems to be when there is much
: redundancy in the training set).  It is not difficult to implement if

I am now using a recurrent network to model currency values. Since my
program is rather slow, do you think the conjugate gradient method
could help me (I am using steepest descent now)? Any other suggestions?
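For what it's worth, here is a minimal sketch (my own illustration, not
taken from the books cited above) of Polak-Ribiere conjugate gradient
next to steepest descent on a small quadratic, both with exact line
search. On an n-dimensional quadratic, CG finds the exact minimum in n
steps, while steepest descent keeps zigzagging; the same directional
update carries over to nonlinear problems like network training, where
the line search must be done numerically.

```python
import numpy as np

# Toy objective: f(w) = 0.5 w'Aw - b'w, with A positive definite.
# (A and b are arbitrary example values, not from any real network.)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def grad(w):
    return A @ w - b

def exact_step(w, d):
    # Exact line search for a quadratic: minimize f(w + alpha*d)
    # gives alpha = -(g'd) / (d'Ad).
    g = grad(w)
    return -(g @ d) / (d @ A @ d)

def steepest_descent(w, iters):
    # Always step along the negative gradient.
    for _ in range(iters):
        d = -grad(w)
        w = w + exact_step(w, d) * d
    return w

def conjugate_gradient(w, iters):
    # Polak-Ribiere: each new direction is the negative gradient
    # plus a beta-weighted copy of the previous direction.
    g = grad(w)
    d = -g
    for _ in range(iters):
        w = w + exact_step(w, d) * d
        g_new = grad(w)
        beta = g_new @ (g_new - g) / (g @ g)
        d = -g_new + beta * d
        g = g_new
    return w

w_star = np.linalg.solve(A, b)        # true minimizer
w_cg = conjugate_gradient(np.zeros(2), 2)   # exact after n = 2 steps
w_sd = steepest_descent(np.zeros(2), 2)     # still off after 2 steps
```

For a recurrent network the gradient would come from backpropagation
through time instead of `grad` above, and the exact line search would
be replaced by a bracketing/interpolation search, but the direction
update is the same few lines.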

