Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!news.sprintlink.net!redstone.interpath.net!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: linear separable boolean functions -- lists?
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <D73L99.EK9@unx.sas.com>
Date: Sat, 15 Apr 1995 22:31:57 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <3makuv$jng@agate.berkeley.edu> <797756360snz@longley.demon.co.uk>
Organization: SAS Institute Inc.
Lines: 21


In article <797756360snz@longley.demon.co.uk>, David@longley.demon.co.uk (David Longley) writes:
|> ...
|> Just as an aside, a colleague of mine (I'm a psychologist) suggested that the
|> non-linearities which so many neural-net folk make a fuss of may just be the
|> interaction terms in regression or other statistical analyses. Any comments
|> anyone?

XOR is a 2-way interaction in statistical terminology. Polynomial models
with interactions (i.e. products of inputs and powers thereof) are
universal approximators, as are multilayer perceptrons. Polynomial
models are easier to train, being linear in the weights, but the number
of weights increases exponentially with the number of inputs.  Thus,
MLPs tend to be more convenient and flexible when you have many inputs,
especially when some of the inputs are not really useful predictors.
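To make the XOR point concrete: a linear model in x1 and x2 alone cannot fit XOR, but adding the interaction term x1*x2 makes it exactly representable, and the fit is an ordinary least-squares problem because the model is linear in the weights. A quick sketch (numpy is my choice here; any least-squares routine would do):

```python
import numpy as np

# The four XOR input/output pairs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# Design matrix without the interaction: columns [1, x1, x2].
A_lin = np.column_stack([np.ones(4), X])
w_lin, _, _, _ = np.linalg.lstsq(A_lin, y, rcond=None)

# Design matrix with the interaction: columns [1, x1, x2, x1*x2].
A_int = np.column_stack([np.ones(4), X, X[:, 0] * X[:, 1]])
w_int, _, _, _ = np.linalg.lstsq(A_int, y, rcond=None)

# Without the interaction the best fit is the constant 0.5
# (nonzero residual); with it the fit is exact: XOR = x1 + x2 - 2*x1*x2.
print("linear-only residual:", np.sum((A_lin @ w_lin - y) ** 2))
print("with interaction:", np.round(A_int @ w_int, 6))
```

The fitted interaction weights come out as XOR = x1 + x2 - 2*x1*x2, which is the 2-way interaction written out explicitly. The catch mentioned above is that the full interaction model over n binary inputs has 2^n such product terms, one per subset of inputs.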

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
