Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!news.duke.edu!concert!sas!mozart.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Why hidden layer (not VERY stupid Q)
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <CyDB5H.AEu@unx.sas.com>
Date: Fri, 28 Oct 1994 05:27:17 GMT
References: <v9110104-251094120955@igwemc25.vub.ac.be> <38p0bg$5sp@st-james.comp.vuw.ac.nz>
Nntp-Posting-Host: hotellng.unx.sas.com
Organization: SAS Institute Inc.
Lines: 23


In article <38p0bg$5sp@st-james.comp.vuw.ac.nz>, hume@gphs.vuw.ac.nz (Tim Hume) writes:
|> ...
|> I have been using neural networks and when I tried training the network with
|> no hidden layers it just wouldn't train. The NN used back propagation and had
|> 15 inputs and 1 output. My guess is that hidden layers allow the nets to find
|> relationships between the inputs and outputs which are more nonlinear than
|> without the hidden layers. Anyhow, hidden layers seemed necessary for successful
|> training (at least in my case).

It is true that at least one hidden layer with a nonlinear activation
function is needed for learning general nonlinear relationships. But
a net with no hidden layers is quite capable of learning linear
relationships (or "generalized" linear relationships if you use a
nonlinear activation function on the output units) if implemented
correctly.
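To make the point concrete, here is a small sketch (my own illustration, not
anything from the original poster's setup): a net with no hidden layer and a
single linear output unit, trained by plain gradient descent (the delta rule /
LMS rule), recovers a linear target function. The target y = 2*x1 - 3*x2 + 1
and all names are invented for the example.

```python
import random

random.seed(0)

# Training data drawn from a known linear relationship:
# y = 2*x1 - 3*x2 + 1 (chosen arbitrarily for this illustration).
data = []
for _ in range(200):
    x1 = random.uniform(-1, 1)
    x2 = random.uniform(-1, 1)
    data.append(((x1, x2), 2 * x1 - 3 * x2 + 1))

# A "network" with no hidden layer: two weights and a bias feeding
# one output unit with the identity (linear) activation.
w1 = w2 = b = 0.0
lr = 0.1  # learning rate; small enough for stable LMS updates here

for epoch in range(200):
    for (x1, x2), y in data:
        out = w1 * x1 + w2 * x2 + b   # forward pass: linear output unit
        err = y - out                 # delta rule: update proportional
        w1 += lr * err * x1           # to error times input
        w2 += lr * err * x2
        b += lr * err

print(round(w1, 2), round(w2, 2), round(b, 2))  # converges toward 2, -3, 1
```

Since the data are noiseless and the target is exactly representable, the
weights converge to the true values. The same architecture would fail on a
genuinely nonlinear target such as XOR, which is where a hidden layer with
nonlinear activations becomes necessary.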


-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
