Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!news.mathworks.com!uunet!in1.uu.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Feature selection
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <DnEp06.IA4@unx.sas.com>
Date: Mon, 26 Feb 1996 23:24:06 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <4g2blf$rk@nef.ens.fr> <674502343wnr@ecowar.demon.co.uk>
Organization: SAS Institute Inc.
Lines: 25


In article <674502343wnr@ecowar.demon.co.uk>, "C:\INTERNET\SPOOL\MAIL" <drago@ecowar.demon.co.uk> writes:
|> In article: <4g2blf$rk@nef.ens.fr>  rossi@drakkar.ens.fr (Fabrice Rossi) 
|> writes:
|> >  So the PCA reduces the input dimension of the classifier but does
|> > not select attributes.
|> 
|> Well, this is a problem for *variable (subset) selection*. But
|> there are no hard guidelines to follow (statisticians would ask
|> you to use *all* variables in ridge regression) and rarely any
|> papers.
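[Editorial aside, not part of the original exchange: a minimal numpy
sketch, with made-up data, of the point being quoted. PCA projects onto
linear combinations of *all* inputs, so it reduces dimension without
ever dropping an original attribute.]

```python
# Hypothetical illustration: PCA reduces dimension but selects no attributes.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))           # 5 input attributes, synthetic data
Xc = X - X.mean(axis=0)                 # center before PCA
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]                             # loadings of the first component

# Every attribute receives a nonzero loading: the component mixes them all,
# so no variable has been "selected out".
print(np.all(np.abs(pc1) > 0))          # True
Z = Xc @ Vt[:2].T                       # reduced 2-D inputs for a classifier
print(Z.shape)                          # (100, 2)
```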

It is true that ridge regression usually provides better generalization
than subset selection, but there has indeed been a lot of work in the
latter area. See:

  Miller, A.J. (1990), Subset Selection in Regression, Chapman & Hall.

  Hjorth, J.S.U. (1994), Computer Intensive Statistical Methods:
  Validation, Model Selection, and Bootstrap, London: Chapman & Hall.
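[Editorial aside, not part of Sarle's post: a small numpy sketch, on
invented data, of the contrast drawn above. Ridge regression shrinks the
coefficients but keeps every variable in the model, whereas subset
selection would set some coefficients to exactly zero.]

```python
# Hypothetical sketch: closed-form ridge regression keeps all variables.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))            # 4 candidate variables, synthetic
y = X @ np.array([2.0, 0.0, -1.0, 0.0]) + 0.1 * rng.normal(size=50)

lam = 1.0                               # ridge penalty
I = np.eye(X.shape[1])
beta_ridge = np.linalg.solve(X.T @ X + lam * I, X.T @ y)

# All four coefficients remain nonzero: shrinkage, not selection.
print(np.all(beta_ridge != 0))          # True
```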
-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
