Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!news.sprintlink.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: NN Vs Stats......
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <D2q549.4zJ@unx.sas.com>
Date: Fri, 20 Jan 1995 22:00:09 GMT
References: <1995Jan11.145719.1@ulkyvx.louisville.edu> <3fi9ec$jus@maui.cs.ucla.edu> <3fk4rc$pud@nyx10.cs.du.edu> <D2nqML.7M9@unx.sas.com> <3fo7kt$t0d@nyx10.cs.du.edu>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Lines: 21


In article <3fo7kt$t0d@nyx10.cs.du.edu>, abuslik@nyx10.cs.du.edu (Arthur Buslik) writes:
|> Warren Sarle (saswss@hotellng.unx.sas.com) wrote:
|>
|> : As soon as you use least squares for training, you have implicit
|> : distributional assumptions, since least squares is maximum likelihood
|> : for a normal distribution of noise.
|>
|> I do not see this.  There seems to me a fundamental difference between
|> function approximation, which does not involve any notion of random
|> variables, and regression analysis.

But in function approximation you still have errors, and a least-squares
training method treats those errors exactly as maximum-likelihood
estimation would if they were normally distributed with constant
variance: minimizing the sum of squared errors is the same computation
as maximizing a Gaussian log-likelihood with fixed variance.
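
A minimal numerical sketch of the equivalence, in Python with numpy
(the straight-line model, the data, and the sigma value are all
hypothetical, chosen only to illustrate the point):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for illustration: a line plus Gaussian noise.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)

def sse(w):
    # Sum of squared errors for the one-parameter model y ~ w*x.
    return np.sum((y - w * x) ** 2)

def loglik(w, sigma=0.1):
    # Gaussian log-likelihood of the same data under
    # y = w*x + N(0, sigma^2) noise with constant variance.
    r = y - w * x
    n = x.size
    return (-0.5 * np.sum(r ** 2) / sigma ** 2
            - n * np.log(sigma * np.sqrt(2.0 * np.pi)))

# Crude grid search: the w minimizing the sum of squared errors is
# the w maximizing the Gaussian log-likelihood, since the latter is
# just -SSE/(2*sigma^2) plus terms that do not depend on w.
ws = np.linspace(1.5, 2.5, 1001)
w_ls = ws[np.argmin([sse(w) for w in ws])]
w_ml = ws[np.argmax([loglik(w) for w in ws])]
print(w_ls, w_ml)   # prints the same value twice

Whatever grid you search over, the two criteria pick out the same w,
which is the sense in which least squares carries an implicit
distributional assumption even when you never mention random variables.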

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
