Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!news.mathworks.com!newsgate.duke.edu!interpath!news.interpath.net!sas!newshost.unx.sas.com!saswss
From: saswss@hotellng.unx.sas.com (Warren Sarle)
Subject: Re: Do more outputs help?
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <Dy18Dq.J5o@unx.sas.com>
Date: Fri, 20 Sep 1996 13:05:50 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
References: <50i9pm$d4q@hpavua.lf.hp.com> <50vh61$ror@llnews.ll.mit.edu> <Dy0D3p.M6F@unx.sas.com>
Organization: SAS Institute Inc.
Lines: 61


Forwarded reply from Olle Gallmo <crwth@kay.docs.uu.se>:
> Adding extra outputs is an established method of supplying extra
> information during training, and it does _not_ make the network harder
> to train, as an earlier respondent claimed. Training on several
> correlated targets at once reduces the number of degrees of freedom
> of the hidden layer, which yields much faster training and often
> better generalization.
> 
> The method is sometimes referred to as "Extra Output Learning" or
> "Injection of Hints" and is due to Suddarth, Sutton and Holden
> [SSH88]. See also the other references listed below.
> 
> Our own paper on the subject [GC95] is available on the Web:
> 
>         http://www.docs.uu.se/docs/ann/papers.html
> 
>    /Crwth
> 
> 
> [A-M90] Y.S. Abu-Mostafa, Learning from Hints in Neural Networks,
>         Journal of Complexity, Vol. 6, pp. 192-198, 1990.
> 
> [BH90] A.B. Baruah & A.D.C. Holden, Back Propagation and Monotonic
>         Functions, Proceedings of the International Neural Network Conference,
>         vol. 1, pp. 383-386, Paris, France, 1990.
> 
> [GC95] O. Gällmo & J. Carlström, Some Experiments Using Extra Output
>         Learning to Hint Multi Layer Perceptrons, in L.F. Niklasson &
>         M.B. Boden (Eds.), Current Trends in Connectionism - Proceedings of
>         the 1995 Swedish Conference on Connectionism, pp. 179-190, Lawrence
>         Erlbaum, 1995.
> 
> [SK90] S.C. Suddarth & Y.L. Kergosien, Rule-Injection Hints as a Means
>         of Improving Network Performance and Learning Time, in L.B. Almeida &
>         C.J. Wellekens (Eds.), Neural Networks, Lecture Notes in Computer
>         Science, vol. 412, pp. 120-129, Springer Verlag, 1990.
> 
> [SSH88] S.C. Suddarth, S.A. Sutton & A.D.C. Holden, A Symbolic-Neural
>         Method for Solving Control Problems, 1988 IEEE International
>         Conference on Neural Networks, vol. 1, pp. 515-523, San Diego, CA,
>         1988.
> 
> [YS90]  Y.-H. Yu & R.F. Simmons, Extra Output Biased Learning,
>         Proceedings of the International Joint Conference on Neural Networks
>         (IJCNN-90), vol. 3, pp. 161-166, San Diego, CA, 1990.
> 
> -- 
> ----- If God is real, why did he create discontinuous functions? -----
> Olle Gällmo, Dept. of Computer Systems, Uppsala University
> Snail Mail: Box 325, S-751 05 Uppsala, Sweden.    Tel: +46 18 18 10 09
> URL: http://www.docs.uu.se/~crwth                 Fax: +46 18 55 02 25
> Email: crwth@DoCS.UU.SE
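
For readers who want to try the idea, here is a minimal toy sketch in
Python/NumPy -- my own illustration, not code from any of the papers
above. A one-hidden-layer net learns XOR as the main output while AND
and OR are injected as extra "hint" outputs on the same hidden layer;
the hint choice, architecture, and learning rate are all my own
assumptions.

```python
# Toy sketch of "extra output learning" / "injection of hints":
# one shared hidden layer, one main output (XOR) plus two extra
# correlated hint outputs (AND, OR). Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
# Column 0 is the main target (XOR); columns 1 and 2 are the
# correlated hint targets (AND, OR).
Y = np.array([[0., 0., 0.],
              [1., 0., 1.],
              [1., 0., 1.],
              [0., 1., 1.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = 4                                                      # hidden units
W1 = rng.normal(scale=1.0, size=(2, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=1.0, size=(H, 3)); b2 = np.zeros(3)  # 3 outputs

lr, losses = 0.5, []
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)          # shared hidden layer
    y = sigmoid(h @ W2 + b2)          # main output + 2 hint outputs
    losses.append(((y - Y) ** 2).sum())
    # Squared-error backprop: error from ALL three outputs flows
    # back into the shared hidden weights, which is what constrains
    # the hidden layer's degrees of freedom.
    dy = (y - Y) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ dy); b2 -= lr * dy.sum(axis=0)
    W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(axis=0)

# At test time the hint outputs can simply be ignored.
xor_pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)[:, 0]
print("XOR outputs after training:", np.round(xor_pred, 2))
```

The hint outputs are discarded after training; their only role is to
shape the shared hidden representation during learning.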



-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
