Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!newsfeed.internetmci.com!in2.uu.net!news.interpath.net!sas!mozart.unx.sas.com!hotellng.unx.sas.com!saswss
From: saswss@unx.sas.com (Warren Sarle)
Subject: changes to "comp.ai.neural-nets FAQ" -- monthly posting
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <nn.changes.posting_825566444@hotellng.unx.sas.com>
Supersedes: <nn.changes.posting_823026455@hotellng.unx.sas.com>
Date: Thu, 29 Feb 1996 04:00:45 GMT
Expires: Thu, 4 Apr 1996 04:00:44 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
Reply-To: saswss@unx.sas.com (Warren Sarle)
Organization: SAS Institute Inc., Cary, NC, USA
Keywords: modifications, new, additions, deletions
Followup-To: comp.ai.neural-nets
Lines: 773

==> nn1.changes.body <==
*** nn1.oldbody	Tue Jan 30 13:26:54 1996
--- nn1.body	Wed Feb 28 23:00:17 1996
***************
*** 1,5 ****
- 
  Archive-name: ai-faq/neural-nets/part1
! Last-modified: 1996-01-06
  URL: ftp://ftp.sas.com/pub/neural/FAQ.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
--- 1,4 ----
  Archive-name: ai-faq/neural-nets/part1
! Last-modified: 1996-02-16
  URL: ftp://ftp.sas.com/pub/neural/FAQ.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
***************
*** 236,244 ****
  =======================================
  
! First of all, when we are talking about a neural network, we *should*
! usually better say "artificial neural network" (ANN), because that is what
! we mean most of the time. Biological neural networks are much more
! complicated in their elementary structures than the mathematical models we
! use for ANNs.
  
  A vague description is as follows:
--- 235,242 ----
  =======================================
  
! First of all, when we are talking about a neural network, we should more
! properly say "artificial neural network" (ANN), because that is what we mean
! most of the time. Biological neural networks are much more complicated in
! their elementary structures than the mathematical models we use for ANNs.
  
  A vague description is as follows:
***************
*** 245,252 ****
  
  An ANN is a network of many simple processors ("units"), each possibly
! having a (small amount of) local memory. The units are connected by
! unidirectional communication channels ("connections"), which carry numeric
! (as opposed to symbolic) data. The units operate only on their local data
! and on the inputs they receive via the connections.
  
  The design motivation is what distinguishes neural networks from other
--- 243,250 ----
  
  An ANN is a network of many simple processors ("units"), each possibly
! having a small amount of local memory. The units are connected by
! unidirectional communication channels ("connections"), which usually carry
! numeric (as opposed to symbolic) data. The units operate only on their local
! data and on the inputs they receive via the connections.
  
  The design motivation is what distinguishes neural networks from other
***************
*** 258,268 ****
  
  Most neural networks have some sort of "training" rule whereby the weights
! of connections are adjusted on the basis of presented patterns. In other
! words, neural networks "learn" from examples, just like children learn to
! recognize dogs from examples of dogs, and exhibit some structural capability
! for generalization.
  
  Neural networks normally have great potential for parallelism, since the
! computations of the components are independent of each other. 
  
  ------------------------------------------------------------------------
--- 256,270 ----
  
  Most neural networks have some sort of "training" rule whereby the weights
! of connections are adjusted on the basis of data. In other words, neural
! networks "learn" from examples, just like children learn to recognize dogs
! from examples of dogs, and exhibit some structural capability for
! generalization.
  
  Neural networks normally have great potential for parallelism, since the
! computations of the components are independent of each other. Some people
! regard massive parallelism and high connectivity to be defining
! characteristics of neural networks, but such requirements rule out various
! simple models, such as simple linear regression, which are usefully regarded
! as special cases of neural networks. 
  
  ------------------------------------------------------------------------
***************
*** 278,286 ****
  arbitrary precision by feedforward NNs (which is the most often used type).
  
! In practice, NNs are especially useful for mapping problems which are
! tolerant of some errors, have lots of example data available, but to which
! hard and fast rules cannot easily be applied. NNs are, at least today,
! difficult to apply successfully to problems that concern manipulation of
! symbols and memory. 
  
  ------------------------------------------------------------------------
--- 280,288 ----
  arbitrary precision by feedforward NNs (which is the most often used type).
  
! In practice, NNs are especially useful for function approximation/mapping
! problems which are tolerant of some errors, have lots of example data
! available, but to which hard and fast rules cannot easily be applied. NNs
! are, at least today, difficult to apply successfully to problems that
! concern manipulation of symbols and memory. 
  
  ------------------------------------------------------------------------
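[Illustrative aside, not part of the FAQ diff: the new text above describes a unit as a simple processor with numeric inputs arriving over connections. A single sigmoid unit can be sketched in a few lines of Python; the function name and numbers here are made up for the example.]

```python
import math

def unit_output(inputs, weights, bias):
    # One ANN "unit": combine the numeric data arriving on the input
    # connections as a weighted sum plus a bias, then squash the result
    # with a sigmoid activation (one common choice among many).
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

# A unit with two input connections; with zero net input the sigmoid
# gives 0.5, the middle of its "off"/"on" range.
y = unit_output([0.5, -1.0], [2.0, 1.0], bias=0.0)
```

A network is just many such units wired together, each operating only on its own local weights and the values it receives.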

==> nn2.changes.body <==
*** nn2.oldbody	Tue Jan 30 13:27:00 1996
--- nn2.body	Wed Feb 28 23:00:22 1996
***************
*** 1,5 ****
- 
  Archive-name: ai-faq/neural-nets/part2
! Last-modified: 1996-01-27
  URL: ftp://ftp.sas.com/pub/neural/FAQ2.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
--- 1,4 ----
  Archive-name: ai-faq/neural-nets/part2
! Last-modified: 1996-02-22
  URL: ftp://ftp.sas.com/pub/neural/FAQ2.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
***************
*** 104,119 ****
  ========================================================
  
! One way of looking at the need for bias inputs is that the inputs to each
! unit in the net define an N-dimensional space, and the unit draws a
! hyperplane through that space, producing an "on" output on one side and an
! "off" output on the other. (With sigmoid units the plane will not be sharp
! -- there will be some gray area of intermediate values near the separating
! plane -- but ignore this for now.)
  The weights determine where this hyperplane is in the input space. Without a
! bias input, this separating plane is constrained to pass through the origin
! of the hyperspace defined by the inputs. For some problems that's OK, but in
! many problems the plane would be much more useful somewhere else. If you
! have many units in a layer, they share the same input space and without bias
! would ALL be constrained to pass through the origin. 
  
  Activation functions are needed to introduce nonlinearity into the network.
--- 103,119 ----
  ========================================================
  
! Consider a multilayer perceptron. Choose any hidden unit or output unit.
! Let's say there are N inputs to that unit, which define an N-dimensional
! space. The given unit draws a hyperplane through that space, producing an
! "on" output on one side and an "off" output on the other. (With sigmoid
! units the plane will not be sharp -- there will be some gray area of
! intermediate values near the separating plane -- but ignore this for now.)
! 
  The weights determine where this hyperplane is in the input space. Without a
! bias input, this separating hyperplane is constrained to pass through the
! origin of the space defined by the inputs. For some problems that's OK, but
! in many problems the hyperplane would be much more useful somewhere else. If
! you have many units in a layer, they share the same input space and without
! bias would ALL be constrained to pass through the origin. 
  
  Activation functions are needed to introduce nonlinearity into the network.
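[Illustrative aside, not part of the FAQ diff: the revised passage above can be demonstrated concretely. In this hypothetical Python sketch (names and weights are mine, not the FAQ's), a hard-threshold unit without a bias is forced to put its separating hyperplane through the origin, so it can never answer "on" at the origin.]

```python
def threshold_unit(x1, x2, w1, w2, bias):
    # "On" on one side of the hyperplane w1*x1 + w2*x2 + bias = 0,
    # "off" on the other (a sharp version of a sigmoid unit).
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

# With bias = 0 the separating line passes through the origin, so the
# output at (0, 0) is "off" no matter what the weights are:
assert threshold_unit(0, 0, w1=3.0, w2=-2.0, bias=0.0) == 0

# A nonzero bias shifts the line away from the origin:
assert threshold_unit(0, 0, w1=3.0, w2=-2.0, bias=1.0) == 1
```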
***************
*** 176,179 ****
--- 176,190 ----
  number of training cases. 
  
+ Bear in mind that with two or more inputs, an MLP with one hidden layer
+ containing just a few units can fit only a limited variety of target
+ functions. Even simple, smooth surfaces such as a Gaussian bump in two
+ dimensions may require 20 to 50 hidden units for a close approximation.
+ Networks with a smaller number of hidden units often produce spurious ridges
+ and valleys in the output surface. Training a network with 20 hidden units
+ will typically require anywhere from 150 to 2500 training cases if you do
+ not use regularization. Hence, if you have a smaller training set than that,
+ it is usually advisable to use some form of regularization rather than to
+ restrict the net to a small number of hidden units. 
+ 
  ------------------------------------------------------------------------
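[Illustrative aside, not part of the FAQ diff: the new paragraph above recommends regularization over artificially small networks when training data are scarce. Weight decay (an L2 penalty) is one common form; here is a minimal sketch in the simplest possible "network", a one-weight linear model, which the FAQ elsewhere treats as a special case of a neural net. The closed form follows from minimizing squared error plus lam*w**2.]

```python
def decayed_weight(xs, ys, lam):
    # Fit y ~ w*x by least squares with an L2 (weight-decay) penalty.
    # Setting the derivative of sum((y - w*x)**2) + lam * w**2 to zero
    # gives w = sum(x*y) / (sum(x*x) + lam).
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs, ys = [1.0, 2.0, 3.0], [1.1, 1.9, 3.2]
w_plain = decayed_weight(xs, ys, lam=0.0)  # ordinary least-squares weight
w_decay = decayed_weight(xs, ys, lam=1.0)  # the penalty shrinks the weight
```

Larger lam shrinks the weight toward zero, trading a little bias for lower variance, which is the same effect weight decay has on a big hidden layer.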
  
***************
*** 391,394 ****
--- 402,410 ----
  Here are a few references: 
  
+ Balakrishnan, P.V., Cooper, M.C., Jacob, V.S., and Lewis, P.A. (1994) "A
+ study of the classification capabilities of neural networks using
+ unsupervised learning: A comparison with k-means clustering", Psychometrika,
+ 59, 509-525. 
+ 
  Bishop, C.M. (1995), _Neural Networks for Pattern Recognition_, Oxford:
  Oxford University Press. 
***************
*** 403,406 ****
--- 419,425 ----
  Bias/Variance Dilemma", Neural Computation, 4, 1-58. 
  
+ Kuan, C.-M. and White, H. (1994), "Artificial Neural Networks: An
+ Econometric Perspective", Econometric Reviews, 13, 1-91. 
+ 
  Kushner, H. & Clark, D. (1978), _Stochastic Approximation Methods for
  Constrained and Unconstrained Systems_, Springer-Verlag. 
***************
*** 413,416 ****
--- 432,442 ----
  Statistical and Probabilistic Aspects_, Chapman & Hall. ISBN 0 412 46530 2. 
  
+ Ripley, B.D. (1994), "Neural Networks and Related Methods for
+ Classification," Journal of the Royal Statistical Society, Series B, 56,
+ 409-456. 
+ 
+ Ripley, B.D. (1996) _Pattern Recognition and Neural Networks_, Cambridge:
+ Cambridge University Press.
+ 
  Sarle, W.S. (1994), "Neural Networks and Statistical Models," Proceedings of
  the Nineteenth Annual SAS Users Group International Conference, Cary, NC:
***************
*** 420,423 ****
--- 446,453 ----
  Perspective," Neural Computation, 1, 425-464. 
  
+ White, H. (1989), "Some Asymptotic Results for Learning in Single Hidden
+ Layer Feedforward Network Models", J. of the American Statistical Assoc.,
+ 84, 1008-1013. 
+ 
  White, H. (1992), _Artificial Neural Networks: Approximation and Learning
  Theory_, Blackwell. 
***************
*** 425,428 ****
  ------------------------------------------------------------------------
  
! Next part is part 3 (of 7). Previous part is part 1. 
  
--- 455,458 ----
  ------------------------------------------------------------------------
  
! Next part is part 3 (of 7). Previous part is part 1. @
  

==> nn3.changes.body <==
*** nn3.oldbody	Tue Jan 30 13:27:10 1996
--- nn3.body	Wed Feb 28 23:00:25 1996
***************
*** 1,3 ****
- 
  Archive-name: ai-faq/neural-nets/part3
  Last-modified: 1996-01-27
--- 1,2 ----
***************
*** 86,100 ****
  Hinton, from the foreword. 
  
- Haykin, S. (1994). Neural Networks, a Comprehensive Foundation. Macmillan,
- New York, NY.
- "A very readable, well written intermediate to advanced text on NNs
- Perspective is primarily one of pattern recognition, estimation and signal
- processing. However, there are well-written chapters on neurodynamics and
- VLSI implementation. Though there is emphasis on formal mathematical models
- of NNs as universal approximators, statistical estimators, etc., there are
- also examples of NNs used in practical applications. The problem sets at the
- end of each chapter nicely complement the material. In the bibliography are
- over 1000 references."
- 
  Hertz, J., Krogh, A., and Palmer, R. (1991). Introduction to the Theory of
  Neural Computation. Addison-Wesley: Redwood City, California. ISBN
--- 85,88 ----
***************
*** 109,129 ****
  to get through."
  
! Masters,Timothy (1994). Practical Neural Network Recipes in C++. Academic
  Press, ISBN 0-12-479040-2, US $45 incl. disks.
  "Lots of very good practical advice which most other books lack."
  
! Jacek M. Zurada (1992). Introduction To Artificial Neural Systems.
! Hardcover, 785 Pages, 317 Figures, ISBN 0-534-95460-X, 1992, PWS Publishing
! Company, Price: $56.75 (includes shipping, handling, and the ANS software
! diskette). Solutions Manual available.
! Cohesive and comprehensive book on neural nets; as an engineering-oriented
! introduction, but also as a research foundation. Thorough exposition of
! fundamentals, theory and applications. Training and recall algorithms appear
! in boxes showing steps of algorithms, thus making programming of learning
! paradigms easy. Many illustrations and intuitive examples. Winner among NN
! textbooks at a senior UG/first year graduate level-[175 problems] Contents:
! Intro, Fundamentals of Learning, Single-Layer & Multilayer Perceptron NN,
! Assoc. Memories, Self-organizing and Matching Nets, Applications,
! Implementations, Appendix) 
  
  1.) Books for the beginner:
--- 97,114 ----
  to get through."
  
! Masters, Timothy (1994). Practical Neural Network Recipes in C++. Academic
  Press, ISBN 0-12-479040-2, US $45 incl. disks.
  "Lots of very good practical advice which most other books lack."
  
! Ripley, B.D. (1996) Pattern Recognition and Neural Networks, Cambridge:
! Cambridge University Press, ISBN 0-521-46086-7 (hardback), xii+403 pages.
! Brian Ripley's new book is an excellent sequel to Bishop (1995). Ripley
! starts up where Bishop left off, with Bayesian inference and statistical
! decision theory, and then covers some of the same material on NNs as Bishop
! but at a higher mathematical level. Ripley also covers a variety of methods
! that are not discussed, or discussed only briefly, by Bishop, such as
! tree-based methods and belief networks. While Ripley is best appreciated by
! people with a background in mathematical statistics, the numerous realistic
! examples in his book will be of interest even to beginners in neural nets.
  
  1.) Books for the beginner:
***************
*** 160,163 ****
--- 145,159 ----
  World Wide Web.
  
+ Haykin, S. (1994). Neural Networks, a Comprehensive Foundation. Macmillan,
+ New York, NY.
+ "A very readable, well-written intermediate text on NNs. Perspective is
+ primarily one of pattern recognition, estimation and signal processing.
+ However, there are well-written chapters on neurodynamics and VLSI
+ implementation. Though there is emphasis on formal mathematical models of
+ NNs as universal approximators, statistical estimators, etc., there are also
+ examples of NNs used in practical applications. The problem sets at the end
+ of each chapter nicely complement the material. In the bibliography are over
+ 1000 references."
+ 
  Hecht-Nielsen, R. (1990). Neurocomputing. Addison Wesley. Comments: "A good
  book", "comprises a nice historical overview and a chapter about NN
***************
*** 227,230 ****
--- 223,240 ----
  mathematical background necessary.
  
+ Zurada, Jacek M. (1992). Introduction To Artificial Neural Systems.
+ Hardcover, 785 Pages, 317 Figures, ISBN 0-534-95460-X, 1992, PWS Publishing
+ Company, Price: $56.75 (includes shipping, handling, and the ANS software
+ diskette). Solutions Manual available.
+ "Cohesive and comprehensive book on neural nets; as an engineering-oriented
+ introduction, but also as a research foundation. Thorough exposition of
+ fundamentals, theory and applications. Training and recall algorithms appear
+ in boxes showing steps of algorithms, thus making programming of learning
+ paradigms easy. Many illustrations and intuitive examples. Winner among NN
+ textbooks at a senior UG/first year graduate level-[175 problems]."
+ Contents: Intro, Fundamentals of Learning, Single-Layer & Multilayer
+ Perceptron NN, Assoc. Memories, Self-organizing and Matching Nets,
+ Applications, Implementations, Appendix.
+ 
  2.) The classics:
  +++++++++++++++++
***************
*** 922,929 ****
  +++++++++++++++++++
  
!    There is a WWW page and an FTP Server for Announcements of Conferences,
!    Workshops and Other Events on Neural Networks at IDIAP in Switzerland.
!    WWW-Server: http://www.idiap.ch/html/idiap-networks.html, FTP-Server: 
!    ftp://ftp.idiap.ch/html/NN-events/, 
  
  9. World Wide Web
--- 932,938 ----
  +++++++++++++++++++
  
!    There is a WWW page for Announcements of Conferences, Workshops and Other
!    Events on Neural Networks at IDIAP in Switzerland. WWW-Server: 
!    http://www.idiap.ch/html/idiap-networks.html. 
  
  9. World Wide Web

==> nn4.changes.body <==
*** nn4.oldbody	Tue Jan 30 13:27:18 1996
--- nn4.body	Wed Feb 28 23:00:29 1996
***************
*** 1,3 ****
- 
  Archive-name: ai-faq/neural-nets/part4
  Last-modified: 1996-01-06
--- 1,2 ----

==> nn5.changes.body <==
*** nn5.oldbody	Tue Jan 30 13:27:23 1996
--- nn5.body	Wed Feb 28 23:00:32 1996
***************
*** 1,3 ****
- 
  Archive-name: ai-faq/neural-nets/part5
  Last-modified: 1996-01-17
--- 1,2 ----

==> nn6.changes.body <==
*** nn6.oldbody	Tue Jan 30 13:27:29 1996
--- nn6.body	Wed Feb 28 23:00:36 1996
***************
*** 1,5 ****
- 
  Archive-name: ai-faq/neural-nets/part6
! Last-modified: 1996-01-06
  URL: ftp://ftp.sas.com/pub/neural/FAQ6.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
--- 1,4 ----
  Archive-name: ai-faq/neural-nets/part6
! Last-modified: 1996-02-22
  URL: ftp://ftp.sas.com/pub/neural/FAQ6.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
***************
*** 60,66 ****
  ===========
  
! Note for future submissions: Please restrict yourself to 60 lines length and
! avoid marketing hype. Please send a HTML-formatted version if at all
! possible. 
  
  The following simulators are described below: 
--- 59,71 ----
  ===========
  
! Note for future submissions: Please restrict product descriptions to a
! maximum of 60 lines of 72 characters, and send an HTML-formatted version if
! possible. If you include the standard header (name, company, address, etc.),
! you need not count the header in the 60 line maximum. Try to make the
! descriptions objective, and avoid making implicit or explicit assertions
! about competing products, such as "Our product is the *only* one that does
! so-and-so." The FAQ maintainer reserves the right to remove excessive
! marketing hype and to edit submissions to conform to size requirements; if
! he is in a good mood, he may also correct your spelling and punctuation. 
  
  The following simulators are described below: 
***************
*** 87,94 ****
  20. havFmNet++ 
  21. IBM Neural Network Utility 
! 22. NeuroGenetic Optimizer (NGO) 
  23. WAND 
  24. Atree 3.0 Adaptive Logic Network 
  25. TDL v. 1.1 (Trans-Dimensional Learning) 
  
  1. nn/xnn
--- 92,102 ----
  20. havFmNet++ 
  21. IBM Neural Network Utility 
! 22. NeuroGenetic Optimizer (NGO) Version 2.0 
  23. WAND 
  24. Atree 3.0 Adaptive Logic Network 
  25. TDL v. 1.1 (Trans-Dimensional Learning) 
+ 26. NeurOn-Line 
+ 27. NeuFrame, NeuroFuzzy, NeuDesk and NeuRun 
+ 28. OWL Neural Network Library (TM) 
  
  1. nn/xnn
***************
*** 102,106 ****
       Phone: +47-55544163 / +47-55201548
       Email: arnemo@ii.uib.no
!        WWW: http://www.ii.uib.no/~arnemo/neureka/neureka.html
  
     Basic capabilities:
--- 110,114 ----
       Phone: +47-55544163 / +47-55201548
       Email: arnemo@ii.uib.no
!        URL: http://www.ii.uib.no/~arnemo/neureka/neureka.html
  
     Basic capabilities:
***************
*** 746,750 ****
     FAX:    (904) 338-6779
     Email:  info@nd.com
!    WWW:    http://www.nd.com/
  
  16. Qnet For Windows Version 2.0
--- 754,758 ----
     FAX:    (904) 338-6779
     Email:  info@nd.com
!    URL:    http://www.nd.com/
  
  16. Qnet For Windows Version 2.0
***************
*** 999,1045 ****
     information, send a note to nninfo@vnet.ibm.com. 
  
! 22. NeuroGenetic Optimizer (NGO)
! ++++++++++++++++++++++++++++++++
! 
!    An easy to use Microsoft Windows based neural network development system
!    that employs Genetic Algorithms (GA's) to select input variables, neural
!    network models and evolve neural network structures to maximize accuracy.
!    Supports functional modeling, classification, diagnosis and time series
!    prediction using fast back propagation and Continuously Adapting Time
!    Neural Networks (CATNN).  Additional paradigms in development.
! 
!    Used globally for financial prediction, process/quality modeling, medical
!    diagnosis, autonomous robotics, marketing data analysis and more.
!    Automates the neural network development process freeing you from all the
!    trial and error effort.  A well behaved Windows application that is
!    a real time saver as well as delivering high accuracy networks easily.
! 
!    Platforms: Windows 3.1, Windows '95 and Windows NT 3.5.1 workstation
! 
!    Associated products:
!       Penney(tm)    - A Microsoft Excel 5 Add-in for using NGO developed neural
!                       networks in Excel. Process data, visualize response
!                       curves and surfaces, build Excel applications
!       ExamiNeur(tm) - Process data, visualize response curves and using the
!                       TargetSeeker (tm) functionality, find input values that
!                       generate a desired output(s).
!       Developer's Tool Kit - Programming API for use of NGO run times
!                       (neural and genetic) in your own Windows applications.
!                       Supports 100 neural models simultaneously.
! 
!       Upgrade subcription available to keep you current with the 3-4
!       releases annually.
  
!    Call, write or email for pricing.
  
-    Contact:
     BioComp Systems, Inc.
     2871 152nd Avenue N.E.
!    Redmond, WA  98052
!    USA
!    1-800-716-6770 (US/Canada)
!    1-206-869-6770 (local/Int'l)
!    1-206-869-6850 (fax)
!    biocomp@biocomp.seanet.com
  
  23. WAND
--- 1007,1071 ----
     information, send a note to nninfo@vnet.ibm.com. 
  
! 22. NeuroGenetic Optimizer (NGO) Version 2.0
! ++++++++++++++++++++++++++++++++++++++++++++
  
!    BioComp's leading product is the NeuroGenetic Optimizer, or NGO.  As 
!    the name suggests, the NGO is a neural network development tool that 
!    uses genetic algorithms to optimize the inputs and structure of a neural 
!    network. Without the NGO, building neural networks can be tedious and 
!    time consuming even for an expert.  For example, in a relatively simple 
!    neural network, the number of possible combinations of inputs and neural 
!    network structures can be easily over 100 billion.  The difference 
!    between an average network and an optimum network is substantial.  The 
!    NGO searches for optimal neural network solutions. See our web page at 
!    http://www.bio-comp.com for a demo that you can download and try out.  
!    Our customers who have used other neural network development tools are 
!    delighted with both the ease of use of the NGO and the quality to their 
!    results.
! 
!    BioComp Systems, Inc. introduced version 1.0 of the NGO in January of 
!    1995 and now proudly announces version 2.0.  With version 2.0, the NGO 
!    is now equipped for predicting time-based information such as product 
!    sales, financial markets and instruments, process faults, etc., in 
!    addition to its current capabilities in functional modeling, 
!    classification, and diagnosis.
! 
!    While the NGO embodies sophisticated genetic algorithm search and neural 
!    network modeling technology, it has a very easy-to-use GUI for 
!    Microsoft Windows.  You don't have to know or understand the underlying 
!    technology to build highly effective financial models.  On the other 
!    hand, if you like to work with the technology, the NGO is highly 
!    configurable to your liking.
! 
!    Key new features of the NGO include:
!    * Highly effective "Continuous Adaptive Time", Time Delay and lagged 
!    input Back Propagation neural networks with optional recurrent outputs, 
!    automatically competing and selected based on predictive accuracy.
!    * Walk Forward Train/Test/Validation model evaluation for assuring model 
!    robustness,
!    * Easy input data lagging for Back Propagation neural models,
!    * Neural transfer functions and techniques that assure proper 
!    extrapolation of predicted variables to new highs,
!    * Confusion matrix viewing of Predicted vs. Desired results,
!    * Exportation of models to Excel 5.0 (Win 3.1) or Excel 7.0 (Win'95/NT) 
!    through an optional Excel Add-In
!    * Five accuracy measures to choose from, including Relative Accuracy, 
!    R-Squared, Mean Squared Error (MSE), Root Mean Square (RMS) Error, and 
!    Average Absolute Error.
! 
!    With version 2.0, the NGO is now available as a full 32 bit application 
!    for Windows '95 and Windows NT to take advantage of the 32 bit 
!    preemptive multitasking power of those platforms. A 16 bit version for 
!    Windows 3.1 is also available.  Customized professional server based 
!    systems are also available for high volume automated model generation 
!    and prediction. Prices start at $195.
  
     BioComp Systems, Inc.
+    Overlake Business Center
     2871 152nd Avenue N.E.
!    Redmond, WA 98052, USA
!    1-800-716-6770 (US/Canada voice)     1-206-869-6770 (Local/Int'l voice)
!    1-206-869-6850 (Fax)                 http://www.bio-comp.com
!    biocomp@biocomp.seanet.com           70673.1554@compuserve.com
  
  23. WAND
***************
*** 1180,1185 ****
     (15) A DEMONSTRATION of TDL can be invoked when initially starting the program.
  
  ------------------------------------------------------------------------
  
! Next part is part 7 (of 7). Previous part is part 5. @
  
--- 1206,1373 ----
     (15) A DEMONSTRATION of TDL can be invoked when initially starting the program.
  
+ 26. NeurOn-Line
+ +++++++++++++++
+ 
+    Built on Gensym Corp.'s G2(r), Gensym's NeurOn-Line(r) is a graphical,
+    object-oriented software product that enables users to easily build
+    neural networks and integrate them into G2 applications. NeurOn-Line is
+    well suited for advanced control, data and sensor validation, pattern
+    recognition, fault classification, and multivariable quality control
+    applications. Gensym's NeurOn-Line provides neural net training and
+    on-line deployment in a single, consistent environment. NeurOn-Line's
+    visual programming environment provides pre-defined blocks of neural net
+    paradigms that have been extended with specific features for real-time
+    process control applications. These include: Backprop, Radial Basis
+    Function, Rho, and Autoassociative networks. For more information on
+    Gensym software, visit their home page at http://www.gensym.com. 
+ 
+ 27. NeuFrame, NeuroFuzzy, NeuDesk and NeuRun
+ ++++++++++++++++++++++++++++++++++++++++++++
+ 
+       Name: NeuFrame, NeuroFuzzy, NeuDesk and NeuRun
+    Company: NCS
+    Address: Unit 3
+             Lulworth Business Centre
+             Totton
+             Southampton
+             UK
+             SO40 3WW
+      Phone: +44 (0)1703 667775
+        Fax: +44 (0)1703 663730
+      Email: robby@ncs-skylake.co.uk
+        URL: http://www.demon.co.uk/skylake/software.html
+ 
+    NeuFrame
+    NeuFrame provides an easy-to-use, visual, object-oriented approach to
+    problem solving using intelligence technologies, including neural
+    networks and neurofuzzy techniques. It provides features to enable
+    businesses to investigate and apply intelligence technologies from
+    initial experimentation through to building embedded implementations
+    using software components.
+    * Minimum Configuration - Windows 3.1 with win32s, 386DX 33MHz, 8Mb
+      memory, 5Mb free disk space, VGA graphics, mouse
+    * Recommended Configuration - Windows 95/NT, 486DX 50MHz or above, 16Mb
+      memory, 150Mb or above hard disc, VGA graphics, mouse.
+    * Price Commercial 749 (Pounds Sterling), Educational 435
+ 
+    NeuroFuzzy
+    This is an optional module for use within the NeuFrame environment. Fuzzy
+    logic differs from neural networks in the sense that neural systems are
+    constructed purely from available data whereas fuzzy systems are expert
+    designed. The relative merits of the two approaches are very much
+    application dependent as it relies on the availability of training data
+    and expert knowledge. Conventional knowledge based systems can also be
+    used to represent expert knowledge within computer systems, but fuzzy
+    logic provides a richer representation. Benefits include:
+    i)Combines the benefits of rule based fuzzy logic and learning from
+    experience of neural networks.
+    ii) Automatically generate optimised neurofuzzy models.
+    iii) Encapsulate expert fuzzy knowledge within the model.
+    iv) Fuzzy models may be modified or created according to the data
+    presented to the network.
+    v) Includes a pure Fuzzy logic editor.
+    * Requires NeuFrame
+    * Price Commercial 249 (Pounds Sterling), Educational 149
+ 
+    NeuDesk
+    NeuDesk 2 makes the implementation of a neural network solution very
+    accessible.  Running within the Windows 3 environment, NeuDesk 2 is easy
+    for non-specialists to use.  Data required for training the neural
+    network can be entered manually, by cut and paste from any other Windows
+    application, or imported from a number of different file formats.
+    Training the network is achieved by a few straightforward
+    point-and-click operations.
+    * Recommended Configuration - Microsoft Windows 3.0 or higher with a
+      minimum of 2Mb RAM and 20Mbyte hard disk and mouse. 4Mb RAM, 387
+      co-processor and a 40Mb hard disk are recommended.
+    * Price Commercial 349 (Pounds Sterling), Educational 149
+ 
+    NeuRun
+    NeuRun allows intelligence technologies developed in NeuFrame or NeuDesk
+    to be embedded into other programs and
+    environments. NeuRun simplifies and speeds up the process of embedding
+    neural networks in your favourite Windows applications to provide
+    on-line intelligence for decisions, predictions, suggestions or
+    classifications. Implementations are easy to duplicate and deploy and
+    can be readily updated if the problem conditions change over time.
+    Typical of the application programs that can be enhanced using NeuRun 3
+    are Excel, Microsoft's spreadsheet, for creating powerful data
+    manipulation and analysis applications, and Visual Basic, which can be
+    used to generate user-defined screens and functions for custom operator
+    interfaces, data entry forms, control panels, etc.
+    * Price Commercial 50 (Pounds Sterling), Educational 50
+ 
+ 28. OWL Neural Network Library (TM)
+ +++++++++++++++++++++++++++++++++++
+ 
+       Name: OWL Neural Network Library (TM)
+    Company: HyperLogic Corporation
+    Address: PO Box 300010
+             Escondido, CA 92030
+             USA
+      Phone: +1 619-746-2765 
+        Fax: +1 619-746-4089
+      Email: prodinfo@hyperlogic.com 
+        URL: http://www.hyperlogic.com/hl
+ 
+    The OWL Neural Network Library provides a set of popular networks in
+    the form of a programming library for C or C++ software development.
+    The library is designed to support engineering applications as well as
+    academic research efforts.
+ 
+    A common programming interface allows consistent access to the various
+    paradigms. The programming environment consists of functions for
+    creating, deleting, training, running, saving, and restoring networks,
+    accessing node states and weights, randomizing weights, reporting the
+    complete network state in a printable ASCII form, and formatting
+    detailed error message strings.
+ 
+    The library includes 20 neural network paradigms, and facilitates the
+    construction of others by concatenation of simpler networks. Networks
+    included are:
+ 
+    * Adaptive Bidirectional Associative Memories (ABAM), including
+      stochastic versions (RABAM). Five paradigms in all.
+    * Discrete Bidirectional Associative Memory (BAM), with individual
+      bias weights for increased pattern capacity.
+    * Multi-layer Backpropagation with many user controls such as batching,
+      momentum, error propagation for network concatenation, and optional
+      computation of squared error. A compatible, non-learning integer
+      fixed-point version is included. Two paradigms in all.
+    * Nonadaptive Boltzmann Machine and Discrete Hopfield Circuit.
+    * "Brain-State-in-a-Box" autoassociator.
+    * Competitive Learning Networks: Classical, Differential, and 
+      "Conscience" version. Three paradigms in all.
+    * Fuzzy Associative Memory (FAM).
+    * "Hamming Network", a binary nearest-neighbor classifier.
+    * Generic Inner Product Layer with user-defined signal function.
+    * "Outstar Layer" learns time-weighted averages. This network,
+      concatenated with Competitive Learning, yields the
+      "Counterpropagation" network.
+    * "Learning Logic" gradient descent network, due to David Parker.
+    * Temporal Access Memory, a unidirectional network useful for
+      recalling binary pattern sequences.
+    * Temporal Pattern Network, for learning time-sequenced binary patterns.
+ 
+    Supported Environments:
+      The object code version of OWL is provided on MS-DOS format diskettes
+      with object libraries and makefiles for both Borland and Microsoft C.
+      An included Windows DLL supports OWL development under Windows. The
+      package also includes Owgraphics, a mouseless graphical user interface
+      support library for DOS.
+ 
+      Both graphical and "stdio" example programs are included.
+ 
+      The Portable Source Code version of OWL compiles without change on
+      many hosts, including VAX, UNIX, and Transputer. The source code
+      package includes the entire object-code package.
+ 
+    Price:
+      USA and Canada: (US) $295 object code, (US) $995 with source
+      Outside USA and Canada: (US) $350 object code, (US) $1050 with source
+      Shipping, taxes, duties, etc., are the responsibility of the customer.
+ 
  ------------------------------------------------------------------------
  
! Next part is part 7 (of 7). Previous part is part 5. 
  

==> nn7.changes.body <==
*** nn7.oldbody	Tue Jan 30 13:27:34 1996
--- nn7.body	Wed Feb 28 23:00:40 1996
***************
*** 1,3 ****
- 
  Archive-name: ai-faq/neural-nets/part7
  Last-modified: 1996-01-06
--- 1,2 ----
-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
