Newsgroups: comp.ai.neural-nets
From: David@longley.demon.co.uk (David Longley)
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!gatech!howland.reston.ans.net!pipex!peernews.demon.co.uk!longley.demon.co.uk!David
Subject: Re: linear separable boolean functions -- lists?
References: <3makuv$jng@agate.berkeley.edu> <797756360snz@longley.demon.co.uk> <D73L99.EK9@unx.sas.com>
Organization: Myorganisation
Reply-To: David@longley.demon.co.uk
X-Newsreader: Demon Internet Simple News v1.29
Lines: 47
X-Posting-Host: longley.demon.co.uk
Date: Tue, 18 Apr 1995 15:27:42 +0000
Message-ID: <798218862snz@longley.demon.co.uk>
Sender: usenet@demon.co.uk

In article <D73L99.EK9@unx.sas.com>
           saswss@hotellng.unx.sas.com "Warren Sarle" writes:
> 
> In article <797756360snz@longley.demon.co.uk>, David@longley.demon.co.uk (David
>  Longley) writes:
> |> ...
> |> Just as an aside, a colleague of mine (I'm a psychologist) suggested that the
> |> non-linearities which so many neural-net folk make a fuss of, may just be the
> |> interaction terms in regression or other statistical analyses. Any comments
> |> anyone?
> 
> XOR is a 2-way interaction in statistical terminology. Polynomial models
> with interactions (i.e. products of inputs and powers thereof) are
> universal approximators, as are multilayer perceptrons. Polynomial
> models are easier to train, being linear in the weights, but the number
> of weights increases exponentially with the number of inputs.  Thus,
> MLPs tend to be more convenient and flexible when you have many inputs,
> especially when some of the inputs are not really useful predictors.
> 
> -- 
> 
> Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
> saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
> (919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
> 
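
To make sure I follow, here is a minimal sketch of the XOR point (in Python
with NumPy, purely for illustration and my own assumption, not anything from
the thread): a linear model in x1 and x2 alone cannot reproduce XOR, but
adding the single product term x1*x2 fits it exactly.

  import numpy as np

  x1 = np.array([0, 0, 1, 1])
  x2 = np.array([0, 1, 0, 1])
  y  = x1 ^ x2                      # XOR targets: 0, 1, 1, 0

  # Main effects only: y ~ w0 + w1*x1 + w2*x2
  X_main = np.column_stack([np.ones(4), x1, x2])
  w, *_ = np.linalg.lstsq(X_main, y, rcond=None)
  print("main effects fit:", X_main @ w)   # ~[0.5 0.5 0.5 0.5] -- no fit

  # Add the two-way interaction term x1*x2
  X_int = np.column_stack([np.ones(4), x1, x2, x1 * x2])
  w, *_ = np.linalg.lstsq(X_int, y, rcond=None)
  print("with interaction:", X_int @ w)    # [0 1 1 0] -- exact
  print("weights:", w)                     # [0 1 1 -2]: y = x1 + x2 - 2*x1*x2

The recovered model is y = x1 + x2 - 2*x1*x2, i.e. the interaction term
carries all of the non-linearity, which seems to be exactly your point.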

Thank you. I would welcome further elaboration on this. As written, the
parallel seems to be with hierarchical loglinear modelling; I had thought that
the closest parallel was simply logistic regression. However, isn't there
something to be said for the *statistical* basis of statistical models, whilst
neural nets just fit function approximators and are therefore prone to
over-fit? Isn't building a neural net model like using a 'direct' method in
regression, with all one's main variables and every possible combination
thrown in on top? At least in a stepwise regression, variables are only
entered if they meet some statistical level of significance.
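
On the 'every combination thrown in' worry, here is a back-of-the-envelope
sketch (again Python; the hidden-layer size h=10 is an arbitrary assumption of
mine) of the count you mention: a polynomial model with all interaction terms
on n inputs needs 2**n weights, one per subset of inputs, whereas a
one-hidden-layer MLP needs only a number of weights linear in n.

  from math import comb

  def n_interaction_terms(n):
      """All products of distinct inputs, plus intercept: sum_k C(n,k) = 2**n."""
      return sum(comb(n, k) for k in range(n + 1))

  def n_mlp_weights(n, h=10):
      """One-hidden-layer MLP: input->hidden and hidden->output, with biases."""
      return h * (n + 1) + (h + 1)

  for n in (2, 5, 10, 20, 30):
      print(f"n={n:2d}  full-interaction terms: {n_interaction_terms(n):>13,}"
            f"   MLP weights: {n_mlp_weights(n):>5,}")

By n=30 the full interaction model has over a thousand million terms while the
MLP still has a few hundred weights, which I take to be the convenience you
describe when there are many inputs.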

I'm very interested in the parallels between classical statistics and neural
network modelling, so I'd be very grateful for any light you can shed or any
directions you can point me in.

PS. 

After holding SAS and SPSS licences for the PC for several years, we settled
on SPSS, mainly for teaching purposes. SAS is clearly the better product, but
it doesn't hold a neophyte's hand much! I think both SPSS and SAS would be
greatly improved if they expanded their documentation to explain the basics
of statistical analysis and testing.
--
David Longley
