Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!news.mathworks.com!uunet!inXS.uu.net!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: [Q] Projection-Method like kohonen-net
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <DuAuJz.L6@unx.sas.com>
Date: Tue, 9 Jul 1996 23:28:46 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <31D911BB.3A70@nbg.scn.de> <353619164wnr@chmqst.demon.co.uk> <k4tnhvojm.fsf@hilbert.edv.agrar.tu-muenchen.de>
Organization: SAS Institute Inc.
Lines: 47


In article <k4tnhvojm.fsf@hilbert.edv.agrar.tu-muenchen.de>, max@pollux.edv.agrar.tu-muenchen.de (Max Pilgram) writes:
|> In article <353619164wnr@chmqst.demon.co.uk> David Livingstone
|> <davel@chmqst.demon.co.uk> writes:
|> 
|>    You can train a network with a central "bottleneck" layer of 2 or 3 
|>    neurons to reproduce the input values on the output layer. Once the 
|>    network is trained the central layer neurons provide X and Y (Z) 
|>    coordinates for each sample so that you may produce a plot.
|>    We showed some examples of the use of a "bottleneck" network in:
|>    D.J. Livingstone, G. Hesketh and D. Clayworth, J. Mol. Graph., 9, 115-8,
|>    (1991). The networks compared well with PCA and non-linear mapping. I 
|>    have also compared this with Kohonen mapping in "Multivariate data 
|>    display using neural networks" in a book called "Neural networks in QSAR 
|>    and drug design" edited by J. Devillers and published by Academic Press. 
|>    It should be out anytime now as I fixed the proofs in April. I hope this 
|>    helps.
|> 
|> Dave,
|> 
|> could you tell me in short terms what kind of learning you are using
|> to train such a "bottleneck" network? 

Any of the usual methods for training feedforward networks.
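As a concrete illustration (not from the original post), here is a minimal
sketch of one such method: a "bottleneck" autoencoder with a linear hidden
layer, trained by plain batch gradient descent on squared reconstruction
error. All sizes, learning rates, and variable names are illustrative
assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 5, 2                      # samples, input dim, bottleneck dim
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))  # correlated data
X -= X.mean(axis=0)                      # center, as one would for PCA

W1 = rng.normal(scale=0.1, size=(d, k))  # encoder: input -> bottleneck
W2 = rng.normal(scale=0.1, size=(k, d))  # decoder: bottleneck -> output
lr = 0.01
mse0 = np.mean((X @ W1 @ W2 - X) ** 2)   # reconstruction error before training
for _ in range(2000):
    H = X @ W1                # bottleneck activations: the plotting coordinates
    E = (H @ W2 - X) / n      # scaled reconstruction error
    gW2 = H.T @ E             # gradient w.r.t. decoder weights
    gW1 = X.T @ (E @ W2.T)    # gradient w.r.t. encoder weights
    W2 -= lr * gW2
    W1 -= lr * gW1
mse = np.mean((X @ W1 @ W2 - X) ** 2)    # reconstruction error after training
coords = X @ W1                          # X,Y coordinates for each sample
```

The two columns of `coords` are the "X and Y coordinates" mentioned above;
any other feedforward training method (conjugate gradients, quasi-Newton,
etc.) would serve in place of the gradient-descent loop.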

|> Can this method be compared with
|> other nonlinear PCA algorithms?

With a single hidden layer of linear units, this _is_ PCA, in that a
bottleneck layer with k units learns the same subspace as the first k
principal components. Using sigmoid units in the hidden layer doesn't
buy much, in my experience. If you want a nonlinear generalization of
PCA, you need at least two hidden layers: a linear bottleneck layer
followed by another layer with nonlinear units. Such a network can, for
example, learn that a helix is intrinsically one-dimensional.  But you
need yet another hidden layer of nonlinear units before the bottleneck
for full generality. Such a network can, for example, learn that a long
helix bent around into a horseshoe shape is intrinsically
one-dimensional.
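A hedged sketch of the two-hidden-layer architecture described above: a
one-unit linear bottleneck followed by a layer of nonlinear (tanh) units,
trained to reconstruct points sampled from a 3-D helix. All sizes, rates,
and names here are illustrative assumptions, not from the post; the test of
success is only that reconstruction error falls, not a perfect fit.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0 * np.pi, 200)
X = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])  # helix in 3-D
X -= X.mean(axis=0)

h = 20                                     # width of the nonlinear layer
W1 = rng.normal(scale=0.5, size=(3, 1))    # encoder -> 1-D bottleneck
W2 = rng.normal(scale=0.5, size=(1, h))    # bottleneck -> tanh layer
b2 = np.zeros(h)
W3 = rng.normal(scale=0.5, size=(h, 3))    # tanh layer -> reconstruction
lr = 0.05
mse0 = np.mean((np.tanh(X @ W1 @ W2 + b2) @ W3 - X) ** 2)
for _ in range(5000):
    Z = X @ W1                         # one intrinsic coordinate per point
    A = np.tanh(Z @ W2 + b2)           # nonlinear decoding layer
    E = (A @ W3 - X) / len(X)          # scaled reconstruction error
    gW3 = A.T @ E
    dA = (E @ W3.T) * (1.0 - A ** 2)   # backpropagate through tanh
    W3 -= lr * gW3
    W2 -= lr * (Z.T @ dA)
    b2 -= lr * dA.sum(axis=0)
    W1 -= lr * (X.T @ (dA @ W2.T))
mse = np.mean((np.tanh(X @ W1 @ W2 + b2) @ W3 - X) ** 2)
```

If the network succeeds, the single bottleneck value `Z` plays the role of
the helix parameter t; the horseshoe case would additionally need a
nonlinear hidden layer before the bottleneck, as the post explains.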



-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
