Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!usenet.eel.ufl.edu!news.mathworks.com!newsfeed.internetmci.com!howland.erols.net!news.sprintlink.net!news-stk-200.sprintlink.net!news.sprintlink.net!news-chi-13.sprintlink.net!interpath!news.interpath.net!sas!newshost.unx.sas.com!hotellng.unx.sas.com!saswss
From: saswss@unx.sas.com (Warren Sarle)
Subject: changes to "comp.ai.neural-nets FAQ" -- monthly posting
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <nn.changes.posting_841287641@hotellng.unx.sas.com>
Supersedes: <nn.changes.posting_838609239@hotellng.unx.sas.com>
Date: Thu, 29 Aug 1996 03:00:42 GMT
Expires: Thu, 3 Oct 1996 03:00:41 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
Reply-To: saswss@unx.sas.com (Warren Sarle)
Organization: SAS Institute Inc., Cary, NC, USA
Keywords: modifications, new, additions, deletions
Followup-To: comp.ai.neural-nets
Lines: 709

==> nn1.changes.body <==
*** nn1.oldbody	Sun Jul 28 23:00:15 1996
--- nn1.body	Wed Aug 28 23:00:08 1996
***************
*** 1,4 ****
  Archive-name: ai-faq/neural-nets/part1
! Last-modified: 1996-06-27
  URL: ftp://ftp.sas.com/pub/neural/FAQ.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
--- 1,4 ----
  Archive-name: ai-faq/neural-nets/part1
! Last-modified: 1996-08-20
  URL: ftp://ftp.sas.com/pub/neural/FAQ.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
***************
*** 403,406 ****
--- 403,420 ----
     Morgan Kauffman starting in 1989. 
  
+ There is an on-line application of a Kohonen network with a 2-dimensional
+ output layer for prediction of protein secondary structure percentages from
+ UV circular dichroism spectra. According to J.J. Merelo: 
+ 
+    You only need to submit 41 CD values ranging from 200 nm to 240 nm
+    (given in deg cm^2 dmol^-1 multiplied by 0.001) and the k2d server
+    gives back the estimated percentages of helix, beta and rest of
+    secondary structure of your protein plus an estimation of the
+    accuracy of the prediction. 
+ 
+ The address of the k2d server is http://kal-el.ugr.es/k2d/spectra.html. The
+ home page of the k2d program is at http://kal-el.ugr.es/k2d/k2d.html or 
+ http://www.embl-heidelberg.de/~andrade/k2d.html. 
+ 
  ------------------------------------------------------------------------
  
***************
*** 484,488 ****
   o Kohonen's self-organizing maps. 
  o Reinforcement learning (although this is treated in the operations
!    research literature as Markov decision processes). 
   o Stopped training (the purpose and effect of stopped training are similar
     to shrinkage estimation, but the method is quite different). 
--- 498,502 ----
   o Kohonen's self-organizing maps. 
  o Reinforcement learning (although this is treated in the operations
!    research literature on Markov decision processes). 
   o Stopped training (the purpose and effect of stopped training are similar
     to shrinkage estimation, but the method is quite different). 
***************
*** 557,560 ****
--- 571,578 ----
     Cheng, B. and Titterington, D.M. (1994), "Neural Networks: A Review from
     a Statistical Perspective", Statistical Science, 9, 2-54. 
+ 
+    Cherkassky, V., Friedman, J.H., and Wechsler, H., eds. (1994), From
+    Statistics to Neural Networks: Theory and Pattern Recognition
+    Applications, Berlin: Springer-Verlag. 
  
     Geman, S., Bienenstock, E. and Doursat, R. (1992), "Neural Networks and

==> nn2.changes.body <==
*** nn2.oldbody	Sun Jul 28 23:00:20 1996
--- nn2.body	Wed Aug 28 23:00:17 1996
***************
*** 1,4 ****
  Archive-name: ai-faq/neural-nets/part2
! Last-modified: 1996-07-13
  URL: ftp://ftp.sas.com/pub/neural/FAQ2.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
--- 1,4 ----
  Archive-name: ai-faq/neural-nets/part2
! Last-modified: 1996-08-27
  URL: ftp://ftp.sas.com/pub/neural/FAQ2.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
***************
*** 154,157 ****
--- 154,162 ----
  gradients, Levenberg-Marquardt, etc.?". 
  
+ More on-line info on backprop:
+ 
+    Donald Tveter's Backpropagator's Review at 
+    http://www.mcs.com/~drt/bprefs.html. 
+ 
  References on backprop: 
  
***************
*** 176,179 ****
--- 181,188 ----
     Annals of Mathematical Statistics, 22, 400-407. 
  
+    Kiefer, J. & Wolfowitz, J. (1952), "Stochastic Estimation of the Maximum
+    of a Regression Function," Annals of Mathematical Statistics, 23,
+    462-466. 
+ 
     Kushner, H. & Clark, D. (1978), Stochastic Approximation Methods for
     Constrained and Unconstrained Systems, Springer-Verlag. 
***************
*** 246,249 ****
--- 255,259 ----
   o Arnold Neumaier's page on global optimization at 
     http://solon.cma.univie.ac.at/~neum/glopt.html. 
+  o Simon Streltsov's page on global optimization at http://cad.bu.edu/go. 
  
  References: 
***************
*** 651,657 ****
  use the term "combination function". 
  
! The multilayer perceptron (MLP) has one or more hidden layers for which the
  combination function is the inner product of the inputs and weights, plus a
! bias. The activation function is typically a logistic or tanh function.
  Hence the formula for the activation is typically: 
  
--- 661,667 ----
  use the term "combination function". 
  
! A multilayer perceptron (MLP) has one or more hidden layers for which the
  combination function is the inner product of the inputs and weights, plus a
! bias. The activation function is usually a logistic or tanh function.
  Hence the formula for the activation is typically: 
  
***************
*** 674,681 ****
  
  Radial basis function (RBF) networks usually have only one hidden layer for
! which the combination function is the Euclidean distance between the input
! vector and the weight vector, divided by the squared width. There may also
! be another term added to the combination function, which determines what I
! will call the "altitude" of the unit. 
  
  There are two distinct types of Gaussian RBF architectures. The first type
--- 684,712 ----
  
  Radial basis function (RBF) networks usually have only one hidden layer for
! which the combination function is based on the Euclidean distance between
! the input vector and the weight vector. RBF networks do not have anything
! that's exactly the same as the bias term in an MLP. But some types of RBFs
! have a "width" associated with each hidden unit or with the entire
! hidden layer; instead of adding it in the combination function like a bias,
! you divide the Euclidean distance by the width. 
! 
! To see the similarity between RBF networks and MLPs, it is convenient to
! treat the combination function as the square of distance/width. Then the
! familiar exp or softmax activation functions produce members of the
! popular class of Gaussian RBF networks. It can also be useful to add another
! term to the combination function that determines what I will call the
! "altitude" of the unit. I have not seen altitudes used in the NN literature;
! if you know of a reference, please tell me (saswss@unx.sas.com). 
! 
! The output activation function in RBF networks is usually the identity. The
! identity output activation function is a computational convenience in
! training (see Hybrid training and the curse of dimensionality) but it is
! possible and often desirable to use other output activation functions just
! as you would in an MLP. 
! 
! There are many types of radial basis functions. Gaussian RBFs seem to be the
! most popular by far in the NN literature. In the statistical literature,
! thin plate splines are also used (Green and Silverman 1994). This FAQ will
! concentrate on Gaussian RBFs. 
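
To make the comparison concrete, here is a minimal sketch of one hidden
unit of each type, assuming a tanh activation for the MLP and the squared
distance/width combination with an optional altitude term for the Gaussian
RBF (the function names and the exact sign convention for the altitude are
illustrative assumptions, not taken from the FAQ):

```python
import math

def mlp_hidden(x, w, bias):
    # MLP hidden unit: the combination function is the inner product of
    # the inputs and weights plus a bias; the activation is tanh here.
    return math.tanh(sum(xi * wi for xi, wi in zip(x, w)) + bias)

def gaussian_rbf_hidden(x, center, width, altitude=0.0):
    # Gaussian RBF hidden unit: the combination function is the squared
    # Euclidean distance between the input and the weight (center)
    # vector, divided by the squared width; the exp activation then
    # yields a Gaussian bump.  The "altitude" term (an assumption of
    # this sketch) raises or lowers the unit's peak value.
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center)) / width ** 2
    return math.exp(altitude - d2)

# At its center, a width-1, altitude-0 Gaussian RBF unit outputs 1:
print(gaussian_rbf_hidden([0.0, 0.0], [0.0, 0.0], 1.0))  # -> 1.0
```

Note how both units share the same combination-then-activation structure;
only the choice of the two functions differs.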
  
  There are two distinct types of Gaussian RBF architectures. The first type
***************
*** 1053,1056 ****
--- 1084,1091 ----
     Amsterdam: North-Holland. 
  
+    Green, P.J. and Silverman, B.W. (1994), Nonparametric Regression and
+    Generalized Linear Models: A roughness penalty approach, London:
+    Chapman & Hall. 
+ 
     Hastie, T.J. and Tibshirani, R.J. (1990) Generalized Additive Models,
     London: Chapman & Hall. 
***************
*** 1715,1721 ****
  varieties of unsupervised learning, the targets are the same as the inputs
  (Sarle 1994). In other words, unsupervised learning usually performs the
! same task as an auto-associative network, compressing the information in the
! inputs (Deco and Obradovic 1996). Unsupervised learning is very useful for
! data visualization (Ripley 1996), although the NN literature generally
  ignores this application. 
  
--- 1750,1756 ----
  varieties of unsupervised learning, the targets are the same as the inputs
  (Sarle 1994). In other words, unsupervised learning usually performs the
! same task as an auto-associative network, compressing the information from
! the inputs (Deco and Obradovic 1996). Unsupervised learning is very useful
! for data visualization (Ripley 1996), although the NN literature generally
  ignores this application. 
  
***************
*** 1750,1757 ****
  Perhaps the most novel form of unsupervised learning in the NN literature is
  Kohonen's self-organizing (feature) map (SOM, Kohonen 1995). SOMs combine
! competitive learning with dimensionality reduction. But the original SOM
! algorithm does not optimize an energy function (Kohonen 1995, p. 237) and so
! is not an information-compression method like most other unsupervised
! learning networks. 
  
  References: 
--- 1785,1797 ----
  Perhaps the most novel form of unsupervised learning in the NN literature is
  Kohonen's self-organizing (feature) map (SOM, Kohonen 1995). SOMs combine
! competitive learning with dimensionality reduction by smoothing the clusters
! with respect to an a priori grid (Kohonen 1995; Mulier and Cherkassky 1995).
! But the original SOM algorithm does not optimize an "energy" function
! (Kohonen 1995, pp. 126, 237) and so is not simply an information-compression
! method like most other unsupervised learning networks. Convergence of
! Kohonen's SOM algorithm is allegedly demonstrated by Yin and Allinson
! (1995), but their "proof" assumes the neighborhood size becomes zero, in
! which case the algorithm reduces to AVQ and no longer has topological
! ordering properties (Kohonen 1995, p. 111). 
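
One step of the SOM update just described (a competitive winner search
followed by smoothing over the a priori grid) can be sketched roughly as
follows; the Gaussian neighborhood kernel and all names here are
illustrative assumptions, not Kohonen's exact formulation:

```python
import math

def som_step(codebook, grid, x, lrate, radius):
    # 1. Competitive part: find the unit whose codebook vector is
    #    nearest (in squared Euclidean distance) to the input x.
    win = min(range(len(codebook)),
              key=lambda j: sum((a - b) ** 2
                                for a, b in zip(codebook[j], x)))
    # 2. Smoothing part: move every unit toward x, weighted by a
    #    neighborhood function of its distance from the winner on the
    #    fixed a priori grid (a Gaussian kernel, as one possible choice).
    for j in range(len(codebook)):
        g2 = sum((a - b) ** 2 for a, b in zip(grid[j], grid[win]))
        h = (math.exp(-g2 / (2.0 * radius ** 2)) if radius > 0
             else float(j == win))
        codebook[j] = [w + lrate * h * (xi - w)
                       for w, xi in zip(codebook[j], x)]
    return win
```

With radius = 0 only the winner moves, which is exactly the degeneration
to AVQ without topological ordering noted above.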
  
  References: 
***************
*** 1797,1800 ****
--- 1837,1843 ----
     Mathematical Statistics and Probability, 1, 281-297. 
  
+    Mulier, F. and Cherkassky, V. (1995), "Self-Organization as an Iterative
+    Kernel Smoothing Process," Neural Computation, 7, 1165-1177. 
+ 
     Pearson, K. (1901) "On Lines and Planes of Closest Fit to Systems of
     Points in Space," Phil. Mag., 2(6), 559-572. 
***************
*** 1810,1813 ****
--- 1853,1860 ----
     International Conference, Cary, NC: SAS Institute Inc., pp 1538-1550, 
     ftp://ftp.sas.com/pub/neural/neural1.ps. 
+ 
+    Yin, H. and Allinson, N.M. (1995), "On the Distribution and Convergence
+    of Feature Space in Self-Organizing Maps," Neural Computation, 7,
+    1178-1187. 
  
  ------------------------------------------------------------------------

==> nn3.changes.body <==
*** nn3.oldbody	Sun Jul 28 23:00:24 1996
--- nn3.body	Wed Aug 28 23:00:22 1996
***************
*** 1,4 ****
  Archive-name: ai-faq/neural-nets/part3
! Last-modified: 1996-06-25
  URL: ftp://ftp.sas.com/pub/neural/FAQ3.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
--- 1,4 ----
  Archive-name: ai-faq/neural-nets/part3
! Last-modified: 1996-07-27
  URL: ftp://ftp.sas.com/pub/neural/FAQ3.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
***************
*** 886,901 ****
  
  Bear in mind that with two or more inputs, an MLP with one hidden layer
! containing just a few units can fit only a limited variety of target
  functions. Even simple, smooth surfaces such as a Gaussian bump in two
  dimensions may require 20 to 50 hidden units for a close approximation.
  Networks with a smaller number of hidden units often produce spurious ridges
! and valleys in the output surface (see Chester 1990 and the very large
! (885K) example in ftp://ftp.sas.com/pub/neural/tnnex_hillplat_mlp.ps and,
! for more explanation, "How do MLPs compare with RBFs?") Training a network
! with 20 hidden units will typically require anywhere from 150 to 2500
! training cases if you do not use early stopping or regularization. Hence, if
! you have a smaller training set than that, it is usually advisable to use
! early stopping or regularization rather than to restrict the net to a small
! number of hidden units. 
  
  References: 
--- 886,904 ----
  
  Bear in mind that with two or more inputs, an MLP with one hidden layer
! containing only a few units can fit only a limited variety of target
  functions. Even simple, smooth surfaces such as a Gaussian bump in two
  dimensions may require 20 to 50 hidden units for a close approximation.
  Networks with a smaller number of hidden units often produce spurious ridges
! and valleys in the output surface (see Chester 1990 and "How do MLPs compare
! with RBFs?"). Training a network with 20 hidden units will typically require
! anywhere from 150 to 2500 training cases if you do not use early stopping or
! regularization. Hence, if you have a smaller training set than that, it is
! usually advisable to use early stopping or regularization rather than to
! restrict the net to a small number of hidden units. 
! 
! Ordinary RBF networks containing only a few hidden units also produce
! peculiar, bumpy output functions. Normalized RBF networks are better at
! approximating simple smooth surfaces with a small number of hidden units
! (see How do MLPs compare with RBFs?). 
  
  References: 
***************
*** 1061,1064 ****
  ------------------------------------------------------------------------
  
! Next part is part 4 (of 7). Previous part is part 2. @
  
--- 1064,1067 ----
  ------------------------------------------------------------------------
  
! Next part is part 4 (of 7). Previous part is part 2. 
  

==> nn4.changes.body <==
*** nn4.oldbody	Sun Jul 28 23:00:28 1996
--- nn4.body	Wed Aug 28 23:00:27 1996
***************
*** 1,4 ****
  Archive-name: ai-faq/neural-nets/part4
! Last-modified: 1996-07-05
  URL: ftp://ftp.sas.com/pub/neural/FAQ4.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
--- 1,4 ----
  Archive-name: ai-faq/neural-nets/part4
! Last-modified: 1996-08-15
  URL: ftp://ftp.sas.com/pub/neural/FAQ4.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
***************
*** 180,185 ****
  Have the authors never heard that "a picture is worth a thousand words"?
  What few diagrams they have (such as the one on p. 74) tend to be confusing.
! Their jargon is peculiar even by NN standards. 
  
  Chester, M. (1993). Neural Networks: A Tutorial, Englewood Cliffs, NJ: PTR
  Prentice Hall. 
--- 180,196 ----
  Have the authors never heard that "a picture is worth a thousand words"?
  What few diagrams they have (such as the one on p. 74) tend to be confusing.
! Their jargon is peculiar even by NN standards. As is evident from claims
! such as this one (p. 202): 
  
+    Unlike the backpropagation network, a counterpropagation network
+    cannot be fooled into finding a local minimum solution. This means
+    that the network is guaranteed to find the correct response ... to an
+    input, no matter what. 
+ 
+ the authors do not understand elementary properties of error functions and
+ optimization algorithms. Like most introductory books, this one neglects the
+ difficulties of getting good generalization--the authors simply declare (p.
+ 8) that "A neural network is able to generalize"! 
+ 
  Chester, M. (1993). Neural Networks: A Tutorial, Englewood Cliffs, NJ: PTR
  Prentice Hall. 
***************
*** 263,266 ****
--- 274,291 ----
  "Probably not 'leading edge' stuff but detailed enough to get your hands
  dirty!"
+ 
+ Swingler, K. (1996), Applying Neural Networks: A Practical Guide, London:
+ Academic Press. 
+ This book has lots of good advice liberally sprinkled with errors, some bad
+ advice, and the occasional howler. Experts will learn nothing, while
+ beginners will be unable to separate the useful information from the
+ dangerous. The most ludicrous thing I've found in the book is the claim that
+ Hecht-Nielsen used Kolmogorov's theorem to show that "you will never require
+ more than twice the number of hidden units as you have inputs" (p. 53) in an
+ MLP with one hidden layer. Hecht-Nielsen has made an occasional published
+ mistake himself, but I am sure he has never said anything this idiotic! Then
+ Swingler goes on to say that Kurkova, V. (1991), "Kolmogorov's theorem is
+ relevant," Neural Computation, 3, 617-622, confirmed this alleged upper
+ bound on the number of hidden units--this is a gross insult to Kurkova! 
  
  Wasserman, P. D. (1989). Neural Computing: Theory & Practice. Van Nostrand

==> nn5.changes.body <==
*** nn5.oldbody	Sun Jul 28 23:00:31 1996
--- nn5.body	Wed Aug 28 23:00:31 1996
***************
*** 1,4 ****
  Archive-name: ai-faq/neural-nets/part5
! Last-modified: 1996-06-27
  URL: ftp://ftp.sas.com/pub/neural/FAQ5.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
--- 1,4 ----
  Archive-name: ai-faq/neural-nets/part5
! Last-modified: 1996-08-28
  URL: ftp://ftp.sas.com/pub/neural/FAQ5.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
***************
*** 65,68 ****
--- 65,69 ----
  30. AINET 
  31. DemoGNG 
+ 32. PMNEURO 1.0a 
  
  See also http://www.emsl.pnl.gov:2080/docs/cie/neural/systems/shareware.html
***************
*** 210,220 ****
  +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
  
!    The type of neural net used in the Atree 3.0 Educational Kit (EK) package
!    differs from the traditional one. Logic functions AND and OR form the
!    units in all hidden layers but the first, which uses simple perceptrons.
!    Though this net can't compute real-valued outputs, since its outputs
!    are strictly boolean, it can easily and naturally represent real valued
!    functions by giving a 0 above the function's graph and a 1 otherwise.
!    This unorthodox approach is extremely useful, since it allows the user to
     impose constraints on the functions to be learned (monotonicity, bounds
     on slopes, convexity,...). Very rapid computation of functions is done by
--- 211,221 ----
  +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
  
!    The Atree 3.0 Educational Kit (EK) serves to develop simple applications
!    using Adaptive Logic Networks (ALNs). In an ALN, logic functions AND and
!    OR make up all hidden layers but the first, which uses simple
!    perceptrons. Though this net can't compute real-valued outputs, since
!    its outputs are strictly boolean, it can easily and naturally represent
!    real valued functions by giving a 0 above the function's graph and a 1
!    otherwise. This approach is extremely useful, since it allows the user to
     impose constraints on the functions to be learned (monotonicity, bounds
     on slopes, convexity,...). Very rapid computation of functions is done by
***************
*** 224,232 ****
     Two simple languages describe ALNs and the steps of training an ALN.
     Execution software for ALN decision trees resulting from training is
!    provided in C source form for experimenters. EK and a brief guide are
!    obtained by anonymous ftp from ftp.cs.ualberta.ca in directory
!    /pub/atree/atree3/. Get the files atree3ek.exe and atree3ek.brief.guide. 
  
!    An extensive User's Guide with an introduction to basic ALN theory is
     available on WWW at http://www.cs.ualberta.ca/~arms/guide/ch0.htm . This
     Educational Kit software is the same as the commercial Atree 3.0 program
--- 225,233 ----
     Two simple languages describe ALNs and the steps of training an ALN.
     Execution software for ALN decision trees resulting from training is
!    provided in C source form for experimenters. EK and a "120-page" User's
!    Guide are obtained by anonymous ftp from ftp.cs.ualberta.ca in directory
!    /pub/atree/atree3/. Get the file atree3ek.exe (~900K). 
  
!    The above User's Guide with an introduction to basic ALN theory is
     available on WWW at http://www.cs.ualberta.ca/~arms/guide/ch0.htm . This
     Educational Kit software is the same as the commercial Atree 3.0 program
***************
*** 600,617 ****
     of the accompanying paper. 
  
! For some of these simulators there are user mailing lists. Get the packages
! and look into their documentation for further info.
  
! If you are using a small computer (PC, Mac, etc.) you may want to have a
! look at the Central Neural System Electronic Bulletin Board (see question 
! "Other sources of information"). Modem: 409-737-5222; Sysop: Wesley R.
! Elsberry; 4160 Pirates' Beach, Galveston, TX, USA; welsberr@orca.tamu.edu.
! There are lots of small simulator packages, the CNS ANNSIM file set. There
! is an ftp mirror site for the CNS ANNSIM file set at me.uta.edu
! [129.107.2.20] in the /pub/neural directory. Most ANN offerings are in 
! /pub/neural/annsim. 
  
  ------------------------------------------------------------------------
  
! Next part is part 6 (of 7). Previous part is part 4. @
  
--- 601,638 ----
     of the accompanying paper. 
  
! 32. PMNEURO 1.0a 
! +++++++++++++++++
  
!    PMNEURO 1.0a is available at:
  
+    ftp://ftp.uni-stuttgart.de/pub/systems/os2/misc/pmneuro.zip
+ 
+    PMNEURO 1.0a creates neuronal networks (backpropagation); propagation
+    results can be used as new training input for creating new networks and
+    following propagation trials.
+ 
+ For some of these simulators there are user mailing lists. Get the
+ packages and look into their documentation for further info.
+ 
+ If you are using a small computer (PC, Mac, etc.) you may want to have
+ a look at the Central Neural System Electronic Bulletin Board (see
+ question "Other sources of information"). Modem: 409-737-5222; Sysop:
+ Wesley R. Elsberry; 4160 Pirates' Beach, Galveston, TX, USA;
+ welsberr@orca.tamu.edu. There are lots of small simulator packages, the
+ CNS ANNSIM file set. There is an ftp mirror site for the CNS ANNSIM
+ file set at me.uta.edu [129.107.2.20] in the /pub/neural directory.
+ Most ANN offerings are in /pub/neural/annsim.
+ 
  ------------------------------------------------------------------------
  
! Next part is part 6 (of 7). Previous part is part 4. 
  

==> nn6.changes.body <==
*** nn6.oldbody	Sun Jul 28 23:00:36 1996
--- nn6.body	Wed Aug 28 23:00:35 1996
***************
*** 1,4 ****
  Archive-name: ai-faq/neural-nets/part6
! Last-modified: 1996-06-17
  URL: ftp://ftp.sas.com/pub/neural/FAQ6.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
--- 1,4 ----
  Archive-name: ai-faq/neural-nets/part6
! Last-modified: 1996-08-28
  URL: ftp://ftp.sas.com/pub/neural/FAQ6.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
***************
*** 67,71 ****
  22. NeuroGenetic Optimizer (NGO) Version 2.0 
  23. WAND 
! 24. Atree 3.0 Adaptive Logic Network 
  25. TDL v. 1.1 (Trans-Dimensional Learning) 
  26. NeurOn-Line 
--- 67,71 ----
  22. NeuroGenetic Optimizer (NGO) Version 2.0 
  23. WAND 
! 24. Atree 3.0 Adaptive Logic Network Development System 
  25. TDL v. 1.1 (Trans-Dimensional Learning) 
  26. NeurOn-Line 
***************
*** 74,77 ****
--- 74,78 ----
  29. Neural Connection 
  30. Pattern Recognition Workbench Expo/PRO/PRO+ 
+ 31. PREVia 
  
  See also http://www.emsl.pnl.gov:2080/docs/cie/neural/systems/software.html 
***************
*** 974,1026 ****
     Canada T6J 1B4
  
!    tel/fax +1 (403) 438-8285
     or email William W. Armstrong, President (arms@cs.ualberta.ca)
     or use the Dendronic forum on CompuServe (GO DENDRONIC)
  
!    Atree 3.0 trains feedforward networks having perceptrons in
!    the first hidden layer and logic gates AND and OR in other hidden
!    layers.  Functions from real inputs to a real output can be represented.
! 
!    Users can specify constraints on monotonicity, derivatives (slopes)
!    and convexity of functions being learned.  Such expert knowledge
!    can be used to ensure the result of training satisfies requirements
!    of known physical or economic laws.  Functions can be inverted without
!    additional training, a capability useful in control applications.
! 
!    The execution library, which computes learned functions at high
!    speed, is offered in source form (code suitable for Windows and
!    Unix is available free of charge).  Atree 3.0 outputs ALN decision
!    trees in human-readable form (for checking) as well as in binary form
!    (for fast reloading).  The commercial license allows redistribution
!    and modification of execution code.
  
     Atree 3.0 may be used for data analysis, prediction, pattern recognition
!    and for real-time control applications that must run on a typical computer
!    (such as a PC).  Scripts can be run automatically and can be called from
!    macros in Microsoft Excel and MS Access or from other applications.
!    Many samples showing how to use Atree 3.0 are provided.
! 
!    The open architecture of the execution library is important
!    when outputs have to be checked for conformity to a specification.
!    The user is entirely responsible for making his/her applications
!    safe to use, of course, but the openness of everything concerned with
!    execution of the results of Atree 3.0 training supports that goal.
! 
!    A manual of approximately 100 pages will be supplied.
! 
!    Introductory price until March 31,1996: $99 US (or $125 Canadian
!    for Canadian residents only -- price includes GST).  Sending a
!    bank draft or money order is recommended.  Personal or corporate
!    cheques drawn on a US bank (or on Canadian bank, in Canada) are
!    acceptable.  Credit card orders are not accepted at this time.
!    Please make cheques payable to Dendronic Decisions Limited.
! 
!    The software can be tried out using the Atree 3.0
!    Educational Kit, available via anonymous ftp from ftp.cs.ualberta.ca
!    in directory /pub/atree/atree3/. See files atree3ek.exe
!    and atree3ek.brief.guide.
!    The Educational Kit is restricted to learning functions with one or two
!    inputs. A built-in 2D and 3D plotting capability is useful to help
!    the user understand how ALNs work.
  
  25. TDL v. 1.1 (Trans-Dimensional Learning)
--- 975,1019 ----
     Canada T6J 1B4
  
!    Tel/Fax +1 (403) 438-8285
     or email William W. Armstrong, President (arms@cs.ualberta.ca)
     or use the Dendronic forum on CompuServe (GO DENDRONIC)
+    or visit Dendronic Decisions' website: www.dendronic.com . 
  
!    Atree 3.0 trains feedforward networks having simple perceptrons in the
!    first hidden layer and logic gates AND and OR in other hidden layers.
!    Functions from real inputs to a real output can be represented, and are
!    computed by using maximum and minimum operations on linear functions,
!    accelerated by an ALN decision tree. 
! 
!    Users can specify constraints on monotonicity, derivatives (slopes) and
!    convexity of functions being learned. Such expert knowledge can be used
!    to ensure the result of training satisfies requirements of known physical
!    or economic laws. Functions can be inverted without additional training,
!    a capability useful in control applications. 
! 
!    The execution library, which computes learned functions at high speed
!    using ALN decision trees, is offered in source form (code suitable for
!    Windows and Unix is available free of charge). Atree 3.0 outputs ALN
!    decision trees in human-readable form (for checking) as well as in binary
!    form (for fast reloading). The commercial license allows redistribution
!    and modification of execution code. 
  
     Atree 3.0 may be used for data analysis, prediction, pattern recognition
!    and for real-time control applications that must run on a typical
!    computer (such as a PC). Scripts can be run automatically and can be
!    called from macros in Microsoft Excel and MS Access or from other
!    applications. Many samples showing how to use Atree 3.0 are provided. 
! 
!    A manual of approximately 120 pages is included. It covers the Atree 3.0
!    software and some theory of ALNs. 
! 
!    Price: $269 Canadian (about $199 US) plus shipping and handling with
!    generous academic and quantity discounts. Visit www.dendronic.com for a
!    price list or call toll-free 1-888-370-5926 (in North America) for
!    ordering information. 
! 
!    The software can be tried out with the help of an Educational Kit (EK)
!    and an extensive User Guide which are available on the WWW. See 
!    information about the Atree 3.0 Educational Kit. 
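
The Atree description above says that learned functions are computed by
maximum and minimum operations on linear functions, organized in an ALN
decision tree. A toy sketch of that evaluation scheme (the tree
representation and names here are hypothetical, not Atree's actual
format):

```python
def aln_eval(node, x):
    # Evaluate a toy ALN: leaves are linear functions (weights, bias);
    # interior nodes combine their children with max (OR-like) or min
    # (AND-like) operations, yielding a piecewise-linear function.
    op, payload = node
    if op == "linear":
        w, b = payload
        return sum(wi * xi for wi, xi in zip(w, x)) + b
    vals = [aln_eval(child, x) for child in payload]
    return max(vals) if op == "max" else min(vals)

# Example: |x| is the maximum of the two linear pieces x and -x.
abs_net = ("max", [("linear", ([1.0], 0.0)),
                   ("linear", ([-1.0], 0.0))])
print(aln_eval(abs_net, [-2.0]))  # -> 2.0
```

Convex functions arise as a max of linear pieces, concave ones as a min,
and nesting the two gives general piecewise-linear surfaces; the real
decision tree's job is to avoid evaluating most of the linear pieces.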
  
  25. TDL v. 1.1 (Trans-Dimensional Learning)
***************
*** 1372,1375 ****
--- 1365,1437 ----
     literally hundreds of models, selecting the best ones from a thorough
     search space, ultimately resulting in better solutions! 
+ 
+ 31. PREVia
+ ++++++++++
+ 
+    PREVia is a simple neural-network-based forecasting tool. The current
+    commercial version is available in French and English (the downloadable
+    version is in English). A working demo version of PREVia is available
+    for download at http://www.elseware-fr.com/prod01.htm. The software is
+    used mainly in France, by banks and some of the largest investment
+    companies, such as Banque de France (the French central bank), AXA
+    Asset Management, Credit Lyonnais Asset Management, Caisse des Depots
+    AM, Banque du Luxembourg, and others. To encourage research and
+    applications using PREVia, it has been given free of charge to European
+    engineering and business schools, such as Ecole Centrale Paris, London
+    Business School, EURIA, CEFI, and Universite du Luxembourg. Interested
+    universities and schools should contact Elseware at the aforementioned
+    page. 
+ 
+    Introducing Previa
+    ------------------
+ 
+    Based on a detailed analysis of the forecasting decision process,
+    Previa was jointly designed and implemented by experts in economics and
+    finance, and by neural network specialists including both
+    mathematicians and computer scientists. Previa thus enables the
+    experimentation, testing, and validation of numerous models. In a few
+    hours, the forecasting expert can conduct systematic experiments,
+    generate a study report, and produce an operational forecasting model.
+    The power of Previa stems from the model type used, i.e., neural
+    networks. Previa offers a wide range of model types, allowing the user
+    to create and test several forecasting systems and to assess each of
+    them with the same set of criteria. In this way, Previa offers a
+    working environment where the user can rationalise his or her decision
+    process.
+ 
+    The hardware requirements of Previa are an IBM-compatible PC with
+    Windows 3.1 or Windows 95; for the best performance, an Intel 486DX
+    processor is recommended. Previa is delivered as a shrink-wrapped
+    application, as well as a dynamic link library (DLL) for the
+    development of custom software. The DLL contains all the necessary
+    functions and data structures to manipulate time series, neural
+    networks, and associated algorithms, and can also be used to develop
+    applications with Visual Basic(TM). A partial list of features: 
+    * Definition of a forecast equation:
+      Definition of the variable to forecast and explanatory variables.
+      Automatic harmonisation of the domains and periodicities involved
+         in the equation.
+    * Choice of a neuronal model associated with the forecasting equation:
+      Automatic or manual definition of multi-layered architectures.
+      Temporal models with loop-backs of intermediate layers.
+    * Fine-tuning of a neuronal model by training:
+      Training by gradient back-propagation.
+      Automatic simplification of architectures.
+      Definition of training objectives by adaptation of the optimisation
+         criterion.
+      Definition of model form constraints.
+      Graphing of different error criteria.
+    * Analysis of a neuronal model:
+      View of the Hinton graph associated with each network layer.
+      Connection weight editing.
+      Calculation of sensitivity and elasticity of the variable to
+         forecast, in relation to the explanatory variables.
+      Calculation of the hidden series produced by the neural network.
+    * Neural-network-based forecasting:
+      Operational use of a neural network.
+    * Series analysis:
+      Visualisation of a series curve.
+      Editing of the series values.
+      Smoothing (simple, double, Holt & Winters).
+      Study of the predictability of a series (fractal dimension).
+      Comparison of two series.
+      Visualisation of X-Y graphs.
  
  ------------------------------------------------------------------------

==> nn7.changes.body <==
-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
