Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!fs7.ece.cmu.edu!hudson.lm.com!godot.cc.duq.edu!news.duke.edu!concert!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Multicollinearity
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Cyyzzr.IIw@unx.sas.com>
Date: Tue, 8 Nov 1994 22:33:27 GMT
References:  <reyden.54.2EB780C3@ebs.up.ac.za>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Lines: 22


In article <reyden.54.2EB780C3@ebs.up.ac.za>, reyden@ebs.up.ac.za (Robert.Van Eyden) writes:
|>
|> Does anybody know if there has been any research or articles done that
|> conclusively indicate whether or not neural nets overcome the problems of
|> multicollinearity?

Multicollinearity (a common statistical term), or ill-conditioning (a
slightly more general numerical-analysis term), is a ubiquitous problem
in training neural nets. In terms of numerical efficiency, the usual
neural net training methods (i.e. variants of steepest descent) are far
more sensitive to ill-conditioning than the 2nd-order methods commonly
used for statistical estimation. In terms of statistical efficiency, the
usual sorts of regularization are applicable, just as they are in linear
models. Neural nets are no more capable of "overcoming" the problem of
multicollinearity than linear models are.
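
To illustrate (this sketch is mine, not part of the original post):
take a linear model with two nearly collinear inputs. Plain steepest
descent stalls as the condition number of the Hessian grows, while a
ridge (weight-decay) penalty, one of the usual sorts of regularization,
improves the conditioning. The data, the ridge value of 10, and the
step counts are arbitrary choices for the demonstration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    # Two nearly collinear inputs: x2 is x1 plus a little noise.
    x1 = rng.normal(size=n)
    x2 = x1 + 0.01 * rng.normal(size=n)
    X = np.column_stack([x1, x2])
    y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=n)

    def steepest_descent(X, y, ridge=0.0, steps=10_000):
        """Minimize ||y - Xw||^2 + ridge*||w||^2 by plain gradient descent."""
        H = 2 * (X.T @ X + ridge * np.eye(X.shape[1]))  # Hessian of the loss
        lr = 1.0 / np.linalg.eigvalsh(H).max()          # largest stable step
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            grad = 2 * (X.T @ (X @ w - y) + ridge * w)
            w -= lr * grad
        return w, np.linalg.cond(H)

    for ridge in (0.0, 10.0):
        w, cond = steepest_descent(X, y, ridge)
        print(f"ridge={ridge:5.1f}  cond(H)={cond:12.1f}  w={w}")

    # For contrast, a 2nd-order method (here the exact normal-equations
    # solve) handles the ill-conditioning in one step, up to
    # floating-point rounding error:
    w_exact = np.linalg.solve(X.T @ X, X.T @ y)
    print("exact least-squares solution:", w_exact)

You should see the unregularized Hessian with a condition number in the
tens of thousands, so 10,000 gradient steps still leave the weights
well away from the least-squares solution along the near-singular
direction; the ridge penalty cuts the condition number by orders of
magnitude and the same descent converges easily (at the cost of some
bias in the weights). This is the numerical-efficiency point above;
the statistical point is that the ridge penalty is the same remedy
you would use in a linear model.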

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
