Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!zombie.ncsc.mil!news.duke.edu!news-feed-1.peachnet.edu!gatech!newsxfer.itd.umich.edu!zip.eecs.umich.edu!caen!saimiri.primate.wisc.edu!ames!tulane!cs.cs.uno.edu!news
From: dnulu@mandelbrot.math.uno.edu (Deepak Nulu)
Subject: Re: Why hidden layer (not VERY stupid Q)
Message-ID: <1994Oct28.164657.19447@cs.uno.edu>
Sender: news@cs.uno.edu
Organization: University of New Orleans (Computer Science)
References: <v9110104-251094120955@igwemc25.vub.ac.be>
Date: Fri, 28 Oct 1994 16:46:57 GMT
Lines: 37

In article <v9110104-251094120955@igwemc25.vub.ac.be>  
v9110104@is2.vub.ac.be (Johan Ovlinger) writes:
> Assuming a feedforwrd net trained in the standard back prop way:
> 
> we can model an n layer net by matrix multiplication ( repr by :*):
> 
> outputs  == An * A(n-1) * ... * A1 * inputs
> 
> (** stuff deleted **)
>
> B ==  An * A(n-1) * ... * A1
> outputs == B * inputs
>
> (** stuff deleted **)
>
> regards,
> 
> 	johan

--

you have forgotten the non-linear activation/transfer functions of the  
neurons. the output is given by:

outputs  == Fn(An * F(n-1)(A(n-1) * ... * F1(A1 * inputs)))

where F1,...,Fn are non-linear in the general case. hence the weight  
matrices cannot be multiplied together to give a single matrix; only  
with linear (identity) activations would the layers collapse to one.
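the point is easy to check numerically. here is a small sketch (in  
modern python/numpy, not part of the original discussion) comparing a  
two-layer linear net, which does collapse to a single matrix B, with  
the same net after inserting a tanh non-linearity, which does not:

```python
import numpy as np

np.random.seed(0)
A1 = np.random.randn(4, 3)   # layer-1 weight matrix
A2 = np.random.randn(2, 4)   # layer-2 weight matrix
x = np.random.randn(3)       # an input vector

# linear case: the layers collapse, B = A2 * A1 reproduces the net exactly
B = A2 @ A1
linear_out = A2 @ (A1 @ x)
assert np.allclose(linear_out, B @ x)

# non-linear case: insert F1 = tanh between the layers
nonlinear_out = A2 @ np.tanh(A1 @ x)
# the collapsed matrix B no longer reproduces the net on this input,
# and no single fixed matrix can do so for all inputs
assert not np.allclose(nonlinear_out, B @ x)
```

the first assertion holds for every input by associativity of matrix  
multiplication; the second shows that once a non-linear F1 sits between  
A1 and A2, the composition is no longer a linear map of the input.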

deepak.

=====================================================================
Deepak Nulu                     |       email:  dnulu@math.uno.edu
Graduate Student                |               dxnee@uno.edu
Dept. of Electrical Engg.       |
Dept. of Mathematics            |       phone:  (504) 283-4153
Univ. of New Orleans            |
