Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!newsstand.cit.cornell.edu!news.kei.com!newsfeed.internetmci.com!info.ucla.edu!agate!hpg30a.csc.cuhk.hk!hkpu01.hkp.hk!usenet
From: "K.R.Tharan" <rd498103 @HKPUCC.POLYU.EDU.HK>
Subject: NN Application in Time Series Forecasting 
Message-ID: <DJ253z.6Lq@hkpu01.polyu.edu.hk>
Sender: usenet@hkpu01.polyu.edu.hk (Usenet Account)
Nntp-Posting-Host: 158.132.101.58
Organization: Hong Kong Polytechnic University
Date: Mon, 4 Dec 1995 10:28:47 GMT
Lines: 73

I'm working on wave height prediction using a neural network trained with the
back-propagation algorithm. I programmed the network in FORTRAN. It has three
layers: eight input units, two hidden layers of either 8 or 16 units each, and
a single output unit. My training set contains 1952 data points, which I can
present as either 243 or 486 patterns, while my testing set has 968 data points
(that is, I test on 960 patterns). A one-step-ahead forecast is considered.
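For clarity, here is a rough Python sketch (my actual code is in FORTRAN, and
the function name is just for illustration) of how one-step-ahead patterns can
be formed from a series, assuming the eight input units take the eight most
recent observations:

```python
# Illustrative sketch only: build one-step-ahead (input, target) patterns
# from a wave-height series, with the 8 most recent values as inputs.
def make_patterns(series, n_inputs=8):
    """Slide a window of n_inputs values over the series; the target
    is the value immediately following each window."""
    patterns = []
    for t in range(len(series) - n_inputs):
        window = series[t:t + n_inputs]
        target = series[t + n_inputs]
        patterns.append((window, target))
    return patterns

# 10 observations with 8 inputs yield 2 (input, target) patterns.
pairs = make_patterns([1.0, 1.2, 1.1, 1.3, 1.5, 1.4, 1.6, 1.8, 2.0, 1.9])
```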
 
I set a target RMS of 0.05155 and trained in batch mode with the
gradient-descent technique. Each of the three layers can adopt its own learning
rate and momentum, with different adaptive constants. I obtained the optimum
result in 516 iterations, giving an RMS error of 0.067 on the testing set.
However, a percentage error greater than 30 occurs for 257 data points, and
greater than 100 for 24 data points (out of 960); with two layers these figures
are 270 and 27. When I decrease the target RMS to 0.04, the test RMS and the
other deviations become higher. Even the autocorrelation is at its best for
this RMS of 0.05155. If I increase the number of training patterns from 243 to
486, my results are worse than those of the 243-pattern set at the same RMS,
though the max and min of the data are very close to those of the two-layer NN.
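In case it helps readers check my setup, this is the standard batch
gradient-descent update with momentum that I believe my FORTRAN routine
implements (a sketch, not the actual code; each layer would carry its own
learning rate eta and momentum alpha):

```python
# Assumed sketch of a per-layer momentum update:
#   v <- alpha * v - eta * grad ;  w <- w + v
# All arguments are flat lists of floats for one layer.
def update_weights(weights, grads, velocities, eta, alpha):
    new_w, new_v = [], []
    for w, g, v in zip(weights, grads, velocities):
        v_next = alpha * v - eta * g   # momentum term minus scaled gradient
        new_v.append(v_next)
        new_w.append(w + v_next)
    return new_w, new_v

# Single-weight example: v = 0.9*0 - 0.1*0.2 = -0.02, so w = 0.5 - 0.02 = 0.48
w, v = update_weights([0.5], [0.2], [0.0], eta=0.1, alpha=0.9)
```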
 
My questions are:

1. The errors are high at the turning points (i.e., when a typhoon starts),
and these account for most of the error. Are there any alternatives to reduce
them (or a better way of evaluating turning points)?

2. Is it possible for batch training to converge to an RMS of 0.05 within 600
iterations, assuming it is not stuck in a local minimum? I'm afraid I could
have made a mistake somewhere.

3. As the data are not normally distributed (they follow a log-normal
distribution), does this matter for the NN?
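One remedy I have seen suggested for log-normal data is to train on the
log-transformed series, which is then approximately normal, and exponentiate
the network's forecast afterwards. A trivial sketch (illustration only):

```python
import math

# Train the NN on to_log(series); apply from_log() to its forecasts.
def to_log(series):
    return [math.log(x) for x in series]

def from_log(series):
    return [math.exp(x) for x in series]

heights = [0.5, 1.0, 2.0, 4.0]
restored = from_log(to_log(heights))  # recovers heights up to float rounding
```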

4. How can I statistically verify the resulting time series? (As a matter of
fact, the mean of the original test series can be achieved with different RMS
values around 0.05, and in every case the SD is less than that of the original;
skewness and kurtosis are higher, though their ratio is almost the same. I
divided the original series into four groups, and the predicted one too; their
respective subsets show the same characteristics stated above.)
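The moment comparison I describe above amounts to computing the following
statistics for both series (a stdlib-only sketch; population moments, which may
differ slightly from sample-corrected formulas):

```python
import math

# Mean, standard deviation, skewness and kurtosis of a series,
# using population (biased) formulas.
def moments(xs):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in xs) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in xs) / (n * sd ** 4)
    return mean, sd, skew, kurt

# A symmetric series has skewness 0.
m, sd, sk, ku = moments([1.0, 2.0, 3.0, 4.0])
```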

5. How does one decide the allowable percentage of error?

6. Do I have to treat seasonality and trends separately before applying the
NN, or can they also be handled by the NN itself?
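By treating them separately I mean the classical pre-processing: removing trend
by first differencing and seasonality by seasonal differencing, then feeding
the residual series to the NN. A sketch (illustration only, not my code):

```python
# x'_t = x_t - x_{t-lag}: lag=1 removes a linear trend,
# lag=s removes a seasonal cycle of period s.
def difference(series, lag=1):
    return [series[t] - series[t - lag] for t in range(lag, len(series))]

trendless = difference([1, 2, 3, 4, 5])           # -> [1, 1, 1, 1]
deseasoned = difference([1, 5, 1, 5, 1, 5], lag=2)  # -> [0, 0, 0, 0]
```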

7. I am thinking of a dual-NN setup, selected according to sudden changes in
the data during training (for instance, when consecutive data points vary by
more than a fixed percentage, the current NN switches to the other). Is this
feasible?

Suggestions and comments are highly appreciated, either personally or through
this newsgroup.

Further, if anyone utilises or has utilised NNs in finite-difference
applications, please let me know.


Thanks in advance.


With regards,
Tharan.



"Kanna,Suresh, I stopped postings to the SCT, not due to my semester
exams (actually, I have no such need), but because of you sort of guys
sickening postings; I replied u, but u as usual gave wrong address; For
pseudoname, go and hang with your buddy, LEO; But, Ezham is sure, though
I'm not a LTTE supporter" 
                                             -Cyberb(y)ite Begger.       
       
 
  
