Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!news.mathworks.com!newshost.marcam.com!zip.eecs.umich.edu!newsxfer.itd.umich.edu!agate!news.ucdavis.edu!sunnyboy.water.ca.gov!sunnyboy!nsandhu
From: nsandhu@venice.water.ca.gov (Nicky Sandhu)
Subject: Fitting Criteria, are there any more of them?
Message-ID: <NSANDHU.95Feb10075504@grizzly.water.ca.gov>
Sender: news@sunnyboy.water.ca.gov
Organization: Calif. Dept. of Water Resources
Date: Fri, 10 Feb 1995 15:55:04 GMT
Lines: 20

Hi,

	I have been using NNs for six months now. I have observed
that SSE is the only criterion used for fitting during calibration
(feed-forward networks). This worked fine when we wanted to neglect
the low values in the output and concentrate on the higher values.

	Now I am faced with another problem. The criterion used to
judge the fit is based on the percentage error at each data point with
respect to the target data point's magnitude:
	i.e. %error = 100 * (Target - Model output) / Target
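	A minimal sketch of that criterion as a mean absolute
percentage error, assuming NumPy and nonzero targets (function name
and example values are mine, not from any standard package):

```python
import numpy as np

def mape(target, output):
    """Mean absolute percentage error:
    100 * |target - output| / |target|, averaged over data points.
    Assumes no target is zero."""
    target = np.asarray(target, dtype=float)
    output = np.asarray(output, dtype=float)
    return 100.0 * np.mean(np.abs(target - output) / np.abs(target))

# The same absolute error of 1.0 costs 1% on a target of 100
# but 10% on a target of 10, so the mean is 5.5%:
print(mape([100.0, 10.0], [99.0, 9.0]))  # -> 5.5
```

	Note how this weights small targets as heavily as large ones,
the opposite of plain SSE.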

	Is it possible to use this criterion to train the neural network?
I have taken the log of the output and this improves performance with
respect to the percentage-error criterion. I have a feeling the NN
would do much better if trained with the same criterion it is being
judged by.
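	A possible reason the log trick helps (my sketch, not from the
literature): for outputs close to the target, the difference of logs
approximates the relative error, so SSE on log-transformed data is
roughly SSE on percentage errors:

```python
import math

# For a model output y close to the target t,
# log(t) - log(y) = log(t/y) ~ (t - y)/t  (first-order expansion).
t, y = 50.0, 49.0
rel_err = (t - y) / t                  # exact relative error: 0.02
log_err = math.log(t) - math.log(y)    # log difference: ~0.0202
print(rel_err, log_err)
```

	So minimizing squared error in log space is close to
minimizing squared percentage error, which would explain the
improvement you see.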
	Any pointers, thoughts and insights are welcome. Thanks
-Nicky
