Newsgroups: delco.ai,comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!zombie.ncsc.mil!gmi!news1.oakland.edu!rcsuna.gmr.com!kocrsv01!news
From: ddturner@kocrsv01.delcoelect.com
Subject: Re: Why hidden layer (not VERY stupid Q) 
Content-Type: TEXT/PLAIN; charset=US-ASCII
Message-ID: <1994Oct26.140047.8621@kocrsv01.delcoelect.com>
Followup-To: comp.ai.neural-nets
Sender: news@kocrsv01.delcoelect.com (Usenet News Account)
Nntp-Posting-Host: koadpc20.delcoelect.com
Organization: Delco Electronics Corp.
X-Newsreader: NEWTNews & Chameleon -- TCP/IP for MS Windows from NetManage
References: <v9110104-251094120955@igwemc25.vub.ac.be>
     
Mime-Version: 1.0
Date: Wed, 26 Oct 1994 15:41:21 GMT
Lines: 63


In article <v9110104-251094120955@igwemc25.vub.ac.be>, <v9110104@is2.vub.ac.be> 
writes:
> 
> Assuming a feedforward net trained in the standard back prop way:
> 
> we can model an n layer net by matrix multiplication (represented by *):
> 
> outputs  == An * A(n-1) * ... * A1 * inputs
> 
> Linear algebra tells us that we can perform the multiplications in any
> order, so it is equally valid to propagate the inputs through in the
> obvious manner as it is to multiply the whole net into one composite
> matrix:
> 
> B ==  An * A(n-1) * ... * A1
> outputs == B * inputs
> 
> However, B has no hidden layers, therefore my question:
> 
> Why do we need hidden layers?  
> 
> I've always been told we do, but I don't see why.  Is it in the training
> that a hidden layer is needed?
> 
> Excuses for possible blatant miscomprehension of basic principles.
> 
> regards,
> 
> 	johan

I am afraid you forgot about that pesky little non-linear transfer function 
that each neurode has built into it.  However, let me toss an idea out to 
everyone to think about.  For my particular neural net application, I ran 
some tests a while back and noticed that after training in the normal 
fashion, I could run using a limited linear transfer function.  As I recall 
it was:

	       1             for x > 2
	y(x) = 0.25x + 0.5   for -2 <= x <= 2
	       0             for x < -2

where x = the sum of products for a neurode and y is the neurode output.
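That limited linear transfer function is simple enough to sketch directly
(a minimal illustration in Python; the function name is my own):

```python
def limited_linear(x):
    """Limited linear transfer function: linear with slope 0.25 on
    [-2, 2], saturating at 0 below and 1 above that range."""
    if x > 2:
        return 1.0
    if x < -2:
        return 0.0
    return 0.25 * x + 0.5
```

Note that the pieces meet exactly at the break points: y(2) = 1 and
y(-2) = 0, so the function is continuous.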

Thus there might be some way to eventually come up with a simple matrix 
multiplication scheme, depending on how you train and on the input data.  The 
key question is how you ensure that the value x always stays within [-2,2].  
If it goes outside that range, you are back to having a non-linear transfer 
function.
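Johan's observation itself is easy to check numerically: with no transfer
function (or a purely linear one), propagating layer by layer and multiplying
by the composite matrix B give identical outputs.  A sketch in plain Python,
with made-up weights for illustration:

```python
def matmul(A, B):
    """Multiply two matrices, each given as a list of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

# Two purely linear layers with small, made-up weights
A1 = [[0.1, -0.2], [0.3, 0.05]]
A2 = [[0.2, 0.1], [-0.1, 0.4]]
x = [1.0, -1.0]

# Propagate layer by layer ...
layered = matvec(A2, matvec(A1, x))
# ... or collapse the net into one composite matrix B = A2 * A1 first
B = matmul(A2, A1)
collapsed = matvec(B, x)
```

The two results agree to machine precision; only the non-linear transfer
function between layers breaks this equivalence.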

As a side note, I have only one output, so I do not need the output layer to 
compute the non-linear transfer function.  I just reverse-calculate to figure 
out what the sum of products must be for the conditions I am looking for.
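The reverse calculation amounts to inverting the linear part of the transfer
function: for a target output y between 0 and 1, the required sum of products
is x = (y - 0.5) / 0.25.  A sketch (the example threshold is made up for
illustration):

```python
def required_sum_of_products(y):
    """Invert y = 0.25*x + 0.5 on its linear range to find the net
    input x that would produce output y (valid for 0 <= y <= 1)."""
    if not 0.0 <= y <= 1.0:
        raise ValueError("output must lie in [0, 1]")
    return (y - 0.5) / 0.25

# e.g. a decision threshold of y = 0.75 corresponds to x = 1.0
```

This lets you compare the raw sum of products against a precomputed
threshold instead of evaluating the transfer function at run time.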

Hope this helps.


,_____                          .      .                           |>>
     \\                   .                  .                     |
   0 //            .                            .                  |
   \-\         .        Douglas D. Turner        .                 |
   | |     .     Sr. Electronics Design Engineer  .   .  .         |
  /  |  .           Advanced Vehicle Systems       . .    .        |
 / . |            Delco Electronics Corporation     .      ......o |
----------------ddturner@kocrsv01.delcoelect.com-----------------------
