Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!swrinde!gatech!news.sprintlink.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Backprop on %error
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <D9M00D.Erv@unx.sas.com>
Date: Sat, 3 Jun 1995 18:14:37 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References:  <NSANDHU.95Jun2110405@grizzly.water.ca.gov>
Organization: SAS Institute Inc.
Lines: 30


In article <NSANDHU.95Jun2110405@grizzly.water.ca.gov>, nsandhu@venice.water.ca.gov (Nicky Sandhu) writes:
|>      In my attempt to use % error as the objective
|> function, I modified the backprop algorithm as follows:
|>      Proceeding in a way similar to the techniques used for
|> sum-squared error,
|>      Delta^ = Del(error**2)/Del(w)
|> Comment: Del == partial derivative operator
|> where error = (target-model)/target
|>      (Note: target != 0)
|>      I recalculated the weight updates using this error term's
|> partial derivatives. I ran the program. It did optimize differently
|> and gave better results when looking at % error.
|>      My question is "Even though it's working, is this approach
|> valid, or do I have to resort to simulated annealing etc.?"

Although I don't quite understand your notation, it appears that you
have reinvented weighted least-squares estimation. See any of numerous
statistics texts such as: 

   Sanford Weisberg (1985), _Applied Linear Regression_, NY: Wiley

   Raymond H. Myers (1986), _Classical and Modern Regression with
      Applications_, Boston: Duxbury Press
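To see the equivalence concretely, here is a minimal sketch (not from the
original post; it assumes a one-parameter linear model y = w*x with nonzero
targets). Minimizing the % error objective sum(((t-y)/t)**2) gives exactly
the same gradient, and hence the same weight updates, as weighted least
squares on sum(v*(t-y)**2) with weights v = 1/t**2:

```python
# Hypothetical example data: inputs x, nonzero targets t, current weight w.
x = [1.0, 2.0, 3.0]
t = [2.0, 3.9, 6.1]
w = 1.5

# Model outputs for the one-parameter linear model y = w*x.
y = [w * xi for xi in x]

# Gradient of the % error objective sum(((t - y)/t)**2) w.r.t. w:
#   d/dw ((t - wx)/t)**2 = 2*(wx - t)*x / t**2
grad_pct = sum(2 * (yi - ti) * xi / ti**2
               for xi, yi, ti in zip(x, y, t))

# Gradient of the weighted least-squares objective sum(v*(t - y)**2)
# with weights v = 1/t**2:
#   d/dw v*(t - wx)**2 = 2*v*(wx - t)*x
v = [1.0 / ti**2 for ti in t]
grad_wls = sum(2 * vi * (yi - ti) * xi
               for xi, yi, ti, vi in zip(x, y, t, v))

print(abs(grad_pct - grad_wls) < 1e-12)  # True: the gradients coincide
```

So the modified backprop is doing ordinary gradient descent on a weighted
sum-of-squares objective, which is a standard and well-understood estimator.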

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
