Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!news.sprintlink.net!EU.net!nova.puug.pt!ciup2.ncc.up.pt!news.uminho.pt!caeiro!si508957
From: si508957@caeiro.ci.uminho.pt (paulo alexandre ribeiro cortez)
Subject: Q: Time Series Forecasting
Message-ID: <1995Jan3.140247.19032@news.uminho.pt>
Sender: newsadm@news.uminho.pt (Network News Account)
Organization: Universidade do Minho, Braga, Portugal
Date: Tue, 3 Jan 1995 14:02:47 GMT
Lines: 38


  We are developing a Time Series Forecasting project using Neural 
Networks, with the backpropagation algorithm, a sigmoid activation 
function and one hidden layer.
  For training, we normalize the series by dividing it by a constant 
(greater than the highest value).
  
  Simple example:

           time series: 5, 10, 15,  20, 25
           constant = 50

           feedforward network: input layer > hidden layer > output layer 
                                    2            2              1

           training cases:  0.1, 0.2 ---> 0.3
                            0.2, 0.3 ---> 0.4
                            0.3, 0.4 ---> 0.5 
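  The normalization and windowing above can be sketched as follows
(Python is just for illustration; the function name and window size
are ours, not part of the original setup):

```python
def make_training_cases(series, constant, window=2):
    """Scale the series by `constant`, then form sliding-window
    (inputs, target) pairs for the feedforward network."""
    scaled = [x / constant for x in series]
    cases = []
    for i in range(len(scaled) - window):
        # `window` past values as inputs, the next value as target
        cases.append((scaled[i:i + window], scaled[i + window]))
    return cases

series = [5, 10, 15, 20, 25]
cases = make_training_cases(series, 50)
# cases == [([0.1, 0.2], 0.3), ([0.2, 0.3], 0.4), ([0.3, 0.4], 0.5)]
```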

  PROBLEMS:
  
  -  we are getting better results using the Box-Jenkins methodology 
or the Holt-Winters method.
  
  - we can't find any relation between the MSE (Mean Square Error)
 of the neural network during training and the MSE of the forecasts.

  - after some thousands of iterations the network still converges, but
 we get worse results in the forecast. Is this some kind of overfitting?
 When do we have to stop?
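  One common stopping criterion (our suggestion, not from the original 
setup) is early stopping: hold out a validation set and stop training 
when its error has not improved for a while. A minimal sketch, with all 
names and the `patience` parameter being illustrative assumptions:

```python
def train_with_early_stopping(train_step, val_error,
                              max_iters=10000, patience=50):
    """Run `train_step` repeatedly; stop once `val_error` has not
    improved for `patience` consecutive iterations."""
    best, since_best, best_iter = float('inf'), 0, 0
    for it in range(max_iters):
        train_step()                 # one backpropagation update
        err = val_error()            # MSE on the held-out set
        if err < best:
            best, since_best, best_iter = err, 0, it
        else:
            since_best += 1
            if since_best >= patience:
                break                # validation error stopped improving
    return best_iter, best
```

  The network weights from `best_iter` (not the final ones) would then 
be used for forecasting.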

  Does anyone know what the best approach is for using Neural 
Networks for Time Series Forecasting?  

  PLEASE: Send answers to E-mail: si508957@ci.uminho.pt

-- 

