Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!newsstand.cit.cornell.edu!news.kei.com!uhog.mit.edu!news.mtholyoke.edu!world!mv!barney.gvi.net!redstone.interpath.net!sas!mozart.unx.sas.com!hotellng.unx.sas.com!saswss
From: saswss@unx.sas.com (Warren Sarle)
Subject: changes to "comp.ai.neural-nets FAQ" -- monthly posting
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <nn.changes.posting_823026455@hotellng.unx.sas.com>
Supersedes: <nn.changes.posting_820193076@hotellng.unx.sas.com>
Date: Tue, 30 Jan 1996 18:27:37 GMT
Expires: Tue, 5 Mar 1996 18:27:35 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
Reply-To: saswss@unx.sas.com (Warren Sarle)
Organization: SAS Institute Inc., Cary, NC, USA
Keywords: modifications, new, additions, deletions
Followup-To: comp.ai.neural-nets
Lines: 1147

==> nn1.changes.body <==
*** nn1.oldbody	Thu Dec 28 18:24:16 1995
--- nn1.body	Tue Jan 30 13:26:44 1996
***************
*** 1,16 ****
  
  Archive-name: ai-faq/neural-nets/part1
! Last-modified: 1995/12/28
! URL: ftp://ftp.sas.com/pub/neural/FAQ1
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
  
  
!   ------------------------------------------------------------------------
!         Additions, corrections, or improvements are always welcome.
!         Anybody who is willing to contribute any information,
!         please email me; if it is relevant, I will incorporate it.
  
!         The monthly posting departs at the 28th of every month.
!   ------------------------------------------------------------------------
  
  
--- 1,16 ----
  
  Archive-name: ai-faq/neural-nets/part1
! Last-modified: 1996-01-06
! URL: ftp://ftp.sas.com/pub/neural/FAQ.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
  
  
!   ---------------------------------------------------------------
!     Additions, corrections, or improvements are always welcome.
!     Anybody who is willing to contribute any information,
!     please email me; if it is relevant, I will incorporate it.
  
!     The monthly posting departs on the 28th of every month.
!   ---------------------------------------------------------------
  
  
***************
*** 19,25 ****
  where it should be findable at any time). Its purpose is to provide basic
  information for individuals who are new to the field of neural networks or
! are just beginning to read this group. It shall help to avoid lengthy
! discussion of questions that usually arise for beginners of one or the other
! kind. 
  
     SO, PLEASE, SEARCH THIS POSTING FIRST IF YOU HAVE A QUESTION
--- 19,24 ----
  where it should be findable at any time). Its purpose is to provide basic
  information for individuals who are new to the field of neural networks or
! who are just beginning to read this group. It will help to avoid lengthy
! discussion of questions that often arise for beginners. 
  
     SO, PLEASE, SEARCH THIS POSTING FIRST IF YOU HAVE A QUESTION
***************
*** 27,30 ****
--- 26,33 ----
     DON'T POST ANSWERS TO FAQs: POINT THE ASKER TO THIS POSTING
  
+ The latest version of the FAQ is available as a hypertext document, readable
+ by any WWW (World Wide Web) browser such as Mosaic, under the URL: 
+ "ftp://ftp.sas.com/pub/neural/FAQ.html".
+ 
  These postings are archived in the periodic posting archive on host
  rtfm.mit.edu (and on some other hosts as well). Look in the anonymous ftp
***************
*** 35,54 ****
  lines for more information.
  
! The FAQ is also available by anonymous ftp from ftp.sas.com (Internet
! gateway IP 192.35.83.8) in the directory /pub/neural under the file names
! "FAQ1", "FAQ2", ... "FAQ7".
! 
! For those of you who read this posting anywhere other than in
! comp.ai.neural-nets: To read comp.ai.neural-nets (or post articles to it)
! you need Usenet News access. Try the commands, 'xrn', 'rn', 'nn', or 'trn'
! on your Unix machine, 'news' on your VMS machine, or ask a local guru.
! 
! An older copy of the monthly posting is available as a hypertext document in
! WWW (World Wide Web) under the URL 
! "http://wwwipd.ira.uka.de/~prechelt/FAQ/neural-net-faq.html". I am trying to
! provide the current version on the company WWW server but legal negotiations
! are still underway.
  
! The monthly posting is not meant to discuss any topic exhaustively.
  
  Disclaimer: 
--- 38,47 ----
  lines for more information.
  
! For those of you who read this FAQ anywhere other than in Usenet: To read
! comp.ai.neural-nets (or post articles to it) you need Usenet News access.
! Try the commands, 'xrn', 'rn', 'nn', or 'trn' on your Unix machine, 'news'
! on your VMS machine, or ask a local guru.
  
! This FAQ is not meant to discuss any topic exhaustively.
  
  Disclaimer: 
***************
*** 56,60 ****
     This posting is provided 'as is'. No warranty whatsoever is expressed or
     implied, in particular, no warranty that the information contained herein
!    is correct or useful in any way, although both is intended. 
  
  To find the answer to question "x", search for the string "Subject: x"
--- 49,53 ----
     This posting is provided 'as is'. No warranty whatsoever is expressed or
     implied, in particular, no warranty that the information contained herein
!    is correct or useful in any way, although both are intended. 
  
  To find the answer to question "x", search for the string "Subject: x"
***************
*** 122,126 ****
  +++++++++++
  
!    Requests are articles of the form "I am looking for X" where X
     is something public like a book, an article, a piece of software. The
     most important thing about such a request is to be as specific as possible!
--- 115,119 ----
  +++++++++++
  
!    Requests are articles of the form "I am looking for X", where X
     is something public like a book, an article, a piece of software. The
     most important thing about such a request is to be as specific as possible!
***************
*** 159,168 ****
     answers should thus be e-mailed to the poster).
  
!    The subject lines of answers are automatically adjusted by the news
!    software. Note that sometimes longer threads of discussion evolve from an
!    answer to a question or request. In this case posters should change the
!    subject line suitably as soon as the topic goes too far away from the one
!    announced in the original subject line. You can still carry along the old
!    subject in parentheses in the form "Subject: new subject
     (was: old subject)" 
  
--- 152,162 ----
     answers should thus be e-mailed to the poster).
  
!    Most news-reader software automatically provides a subject line beginning
!    with "Re:" followed by the subject of the article which is being
!    followed up. Note that sometimes longer threads of discussion evolve from
!    an answer to a question or request. In this case posters should change
!    the subject line suitably as soon as the topic goes too far away from the
!    one announced in the original subject line. You can still carry along the
!    old subject in parentheses in the form "Subject: new subject
     (was: old subject)" 
  
***************
*** 186,190 ****
      o simple concatenation of all the answers is not enough: instead,
        redundancies, irrelevancies, verbosities, and errors should be
!       filtered out (as good as possible) 
      o the answers should be separated clearly 
      o the contributors of the individual answers should be identifiable
--- 180,184 ----
      o simple concatenation of all the answers is not enough: instead,
        redundancies, irrelevancies, verbosities, and errors should be
!       filtered out (as well as possible) 
      o the answers should be separated clearly 
      o the contributors of the individual answers should be identifiable
***************
*** 228,233 ****
  
     If somebody explicitly wants to start a discussion, he/she can do so by
!    giving the posting a subject line of the form "Subject:
!    Discussion: this-and-that"
  
     It is quite difficult to keep a discussion from drifting into chaos, but,
--- 222,227 ----
  
     If somebody explicitly wants to start a discussion, he/she can do so by
!    giving the posting a subject line of the form "Discussion:
!    this-and-that"
  
     It is quite difficult to keep a discussion from drifting into chaos, but,
***************
*** 250,254 ****
  A vague description is as follows:
  
! An ANN is a network of many very simple processors ("units"), each possibly
  having a (small amount of) local memory. The units are connected by
  unidirectional communication channels ("connections"), which carry numeric
--- 244,248 ----
  A vague description is as follows:
  
! An ANN is a network of many simple processors ("units"), each possibly
  having a (small amount of) local memory. The units are connected by
  unidirectional communication channels ("connections"), which carry numeric
***************
*** 286,290 ****
  In practice, NNs are especially useful for mapping problems which are
  tolerant of some errors, have lots of example data available, but to which
! hard and fast rules can not easily be applied. NNs are, at least today,
  difficult to apply successfully to problems that concern manipulation of
  symbols and memory. 
--- 280,284 ----
  In practice, NNs are especially useful for mapping problems which are
  tolerant of some errors, have lots of example data available, but to which
! hard and fast rules cannot easily be applied. NNs are, at least today,
  difficult to apply successfully to problems that concern manipulation of
  symbols and memory. 
***************
*** 300,306 ****
     information processing with neural nets and about learning systems in
     general. 
!  o Engineers of many kinds want to exploit the capabilities of neural
!    networks on many areas (e.g. signal processing) to solve their
!    application problems. 
   o Cognitive scientists view neural networks as a possible apparatus to
     describe models of thinking and consciousness (high-level brain function). 
--- 294,301 ----
     information processing with neural nets and about learning systems in
     general. 
!  o Statisticians use neural nets as flexible, nonlinear regression and
!    classification models. 
!  o Engineers of many kinds exploit the capabilities of neural networks in
!    many areas, such as signal processing and automatic control. 
   o Cognitive scientists view neural networks as a possible apparatus to
     describe models of thinking and consciousness (high-level brain function). 

==> nn2.changes.body <==
*** nn2.oldbody	Thu Dec 28 18:24:19 1995
--- nn2.body	Tue Jan 30 13:26:55 1996
***************
*** 1,6 ****
  
  Archive-name: ai-faq/neural-nets/part2
! Last-modified: 1995/12/28
! URL: ftp://ftp.sas.com/pub/neural/FAQ2
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
  
--- 1,6 ----
  
  Archive-name: ai-faq/neural-nets/part2
! Last-modified: 1996-01-27
! URL: ftp://ftp.sas.com/pub/neural/FAQ2.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
  
***************
*** 60,83 ****
  ===============
  
! 'Backprop' is an abbreviation for 'backpropagation of error' which is the
! most widely used learning method for neural networks today. Although it has
! many disadvantages, which could be summarized in the sentence "You are
! almost not knowing what you are actually doing when using backpropagation"
! :-) it has pretty much success on practical applications and is relatively
! easy to apply.
! 
! It is for the training of layered (i.e., nodes are grouped in layers)
! feedforward (i.e., the arcs joining nodes are unidirectional, and there are
! no cycles) nets (often called "multi layer perceptrons").
! 
! Back-propagation needs a teacher that knows the correct output for any input
! ("supervised learning") and uses gradient descent on the error (as provided
! by the teacher) to train the weights. The activation function is (usually) a
! sigmoidal (i.e., bounded above and below, but differentiable) function of a
! weighted sum of the nodes inputs.
! 
! The use of a gradient descent algorithm to train its weights makes it slow
! to train; but being a feedforward algorithm, it is quite rapid during the
! recall phase.
  
  Literature:
--- 60,74 ----
  ===============
  
! Backprop is short for backpropagation of error. The term backpropagation
! causes much confusion. Strictly speaking, backpropagation refers to the
! method for computing the error gradient for a feedforward network, a
! straightforward but elegant application of the chain rule of elementary
! calculus. By extension, backpropagation or backprop refers to a training
! method that uses backpropagation to compute the gradient. By further
! extension, a backprop network is a feedforward network trained by
! backpropagation. Standard backprop is a euphemism for the generalized delta
! rule, the training algorithm that was popularized by Rumelhart, Hinton, and
! Williams in chapter 8 of Rumelhart and McClelland (1986) and that remains
! the most widely used supervised training method for neural nets. 
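
To make the chain-rule computation concrete, here is a minimal sketch in
Python (a toy illustration under simplifying assumptions, not the code of
any actual package) of standard backprop for a network with one sigmoid
hidden layer, a linear output unit, and squared error:

    import math, random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def train(data, n_hidden=3, rate=0.1, epochs=1000):
        # data: list of (input list, target) pairs
        n_in = len(data[0][0])
        w_hid = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                 for _ in range(n_hidden)]            # last weight is the bias
        w_out = [random.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
        for _ in range(epochs):
            for x, target in data:
                xb = x + [1.0]                        # inputs plus bias input
                # forward pass
                h = [sigmoid(sum(w * xi for w, xi in zip(wj, xb)))
                     for wj in w_hid]
                hb = h + [1.0]
                y = sum(w * hj for w, hj in zip(w_out, hb))
                # backward pass: the chain rule yields the error gradient
                dy = y - target                       # dE/dy for E = (y-t)^2/2
                for j in range(n_hidden):
                    dnet = dy * w_out[j] * h[j] * (1.0 - h[j])
                    for i in range(n_in + 1):
                        w_hid[j][i] -= rate * dnet * xb[i]
                for j in range(n_hidden + 1):
                    w_out[j] -= rate * dy * hb[j]
        return w_hid, w_out

    # e.g. learn XOR: train([([0,0],0), ([0,1],1), ([1,0],1), ([1,1],0)])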
  
  Literature:
***************
*** 95,101 ****
  the training patterns, including all of their peculiarities. However, one is
  usually interested in the generalization of the network, i.e., the error it
! exhibits on examples NOT seen during training. Learning the peculiarities of
  the training set makes the generalization worse. The network should only
! learn the general structure of the examples. 
  
  There are various methods to fight overfitting. The two most important
--- 86,92 ----
  the training patterns, including all of their peculiarities. However, one is
  usually interested in the generalization of the network, i.e., the error it
! exhibits on cases NOT seen during training. Learning the peculiarities of
  the training set makes the generalization worse. The network should only
! learn the general structure of the training cases. 
  
  There are various methods to fight overfitting. The two most important
***************
*** 103,110 ****
  and early stopping. Regularization methods try to limit the complexity of
  the network such that it is unable to learn peculiarities. Early stopping
! aims at stopping the training at the point of optimal generalization. A
! description of the early stopping method can for instance be found in
! section 3.3 of /pub/papers/techreports/1994-21.ps.gz on ftp.ira.uka.de
! (anonymous ftp). 
  
  ------------------------------------------------------------------------
--- 94,101 ----
  and early stopping. Regularization methods try to limit the complexity of
  the network such that it is unable to learn peculiarities. Early stopping
! aims at stopping the training at the point of optimal generalization by
! dividing the available data into training and validation sets. A description
! of the early stopping method can for instance be found in section 3.3 of 
! /pub/papers/techreports/1994-21.ps.gz on ftp.ira.uka.de (anonymous ftp). 
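
As an illustration, here is a sketch of the early stopping loop in Python
(the names "train_one_epoch" and "error" are placeholders for whatever
training routine and error measure you use, not calls to any real library):

    import copy

    def early_stopping_fit(net, train_set, val_set,
                           max_epochs=1000, patience=20):
        best_err = float("inf")
        best_net = copy.deepcopy(net)
        epochs_since_best = 0
        for _ in range(max_epochs):
            train_one_epoch(net, train_set)  # one pass of gradient training
            err = error(net, val_set)        # error on held-out validation set
            if err < best_err:
                best_err = err               # new optimum of generalization
                best_net = copy.deepcopy(net)
                epochs_since_best = 0
            else:
                epochs_since_best += 1
                if epochs_since_best > patience:
                    break                    # validation error stopped improving
        return best_net                      # the weights at the validation minimum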
  
  ------------------------------------------------------------------------
***************
*** 142,158 ****
  =============================================
  
- There is no way to determine a good network topology just from the number of
- inputs and outputs. It depends critically on the number of training examples
- and the complexity of the classification you are trying to learn. There are
- problems with one input and one output that require millions of hidden
- units, and problems with a million inputs and a million outputs that require
- only one hidden unit, or none at all.
  Some books and articles offer "rules of thumb" for choosing a topology --
! Ninputs plus Noutputs dividied by two, maybe with a square root in there
! somewhere -- but such rules are total garbage. Other rules relate to the
! number of examples available: Use at most so many hidden units that the
! number of weights in the network times 10 is smaller than the number of
! examples. Such rules are only concerned with overfitting and are unreliable
! as well. 
  
  ------------------------------------------------------------------------
--- 133,178 ----
  =============================================
  
  Some books and articles offer "rules of thumb" for choosing a topology --
! Ninputs plus Noutputs divided by two, maybe with a square root in there
! somewhere -- but such rules are total garbage. There is no way to determine
! a good network topology just from the number of inputs and outputs. It
! depends critically on the number of training cases, the amount of noise, and
! the complexity of the function or classification you are trying to learn.
! There are problems with one input and one output that require thousands of
! hidden units, and problems with a thousand inputs and a thousand outputs
! that require only one hidden unit, or none at all.
! 
! Other rules relate to the number of cases available: use at most so many
! hidden units that the number of weights in the network times 10 is smaller
! than the number of cases. Such rules are only concerned with overfitting and
! are unreliable as well. All one can say is that if the number of training
! cases is much larger (but no one knows exactly how much larger) than the
! number of weights, you are unlikely to get overfitting, but you may suffer
! from underfitting.
! 
! An intelligent choice of the number of hidden units depends on whether you
! are using early stopping or some other form of regularization. If not, you
! must simply try many networks with different numbers of hidden units,
! estimate the generalization error for each one, and choose the network with
! the minimum estimated generalization error. However, there is little point
! in trying a network with more weights than training cases, since such a
! large network is almost sure to overfit.
! 
! If you are using early stopping, it is essential to use lots of hidden units
! to avoid bad local optima. There seems to be no upper limit on the number of
! hidden units, other than that imposed by computer time and memory
! requirements. But there also seems to be no advantage to using more hidden
! units than you have training cases, since bad local minima do not occur with
! so many hidden units.
! 
! If you are using weight decay or Bayesian estimation, you can also use lots
! of hidden units. However, it is not strictly necessary to do so, because
! other methods are available to avoid local minima, such as multiple random
! starts and simulated annealing (such methods are not safe to use with early
! stopping). You can use one network with lots of hidden units, or you can try
! different networks with different numbers of hidden units, and choose on the
! basis of estimated generalization error. With weight decay or MAP Bayesian
! estimation, it is prudent to keep the number of weights less than half the
! number of training cases. 
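
For example, the "try many networks" strategy above might look like the
following sketch in Python, where "make_net", "train_net", and "error" are
placeholders for your own network constructor, training routine, and error
estimate (perhaps with multiple random starts per size):

    def pick_hidden_units(train_set, holdout_set,
                          candidate_sizes=(1, 2, 4, 8, 16)):
        best = None
        for n_hidden in candidate_sizes:
            net = make_net(n_hidden)
            train_net(net, train_set)      # e.g. backprop, maybe with weight decay
            err = error(net, holdout_set)  # estimated generalization error
            if best is None or err < best[0]:
                best = (err, n_hidden, net)
        return best                        # (error estimate, size, trained net)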
  
  ------------------------------------------------------------------------
***************
*** 364,368 ****
--- 384,396 ----
  instead of ordinary least squares to obtain more efficient estimates. 
  
+ Communication between statisticians and neural net researchers is often
+ hindered by the different terminology used in the two fields. There is a
+ comparison of neural net and statistical jargon in 
+ ftp://ftp.sas.com/pub/neural/jargon 
+ 
  Here are a few references: 
+ 
+ Bishop, C.M. (1995), _Neural Networks for Pattern Recognition_, Oxford:
+ Oxford University Press. 
  
  Chatfield, C. (1993), "Neural networks: Forecasting breakthrough or passing

==> nn3.changes.body <==
*** nn3.oldbody	Thu Dec 28 18:24:23 1995
--- nn3.body	Tue Jan 30 13:27:02 1996
***************
*** 1,6 ****
  
  Archive-name: ai-faq/neural-nets/part3
! Last-modified: 1995/12/28
! URL: ftp://ftp.sas.com/pub/neural/FAQ3
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
  
--- 1,6 ----
  
  Archive-name: ai-faq/neural-nets/part3
! Last-modified: 1996-01-27
! URL: ftp://ftp.sas.com/pub/neural/FAQ3.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
  
***************
*** 60,82 ****
  =========
  
! 0.) The best (subjectively, of course -- please don't flame me):
! ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
  
  Haykin, S. (1994). Neural Networks, a Comprehensive Foundation. Macmillan,
! New York, NY. "A very readable, well written intermediate to advanced text
! on NNs Perspective is primarily one of pattern recognition, estimation and
! signal processing. However, there are well-written chapters on neurodynamics
! and VLSI implementation. Though there is emphasis on formal mathematical
! models of NNs as universal approximators, statistical estimators, etc.,
! there are also examples of NNs used in practical applications. The problem
! sets at the end of each chapter nicely complement the material. In the
! bibliography are over 1000 references. If one buys only one book on neural
! networks, this should be it."
  
  Hertz, J., Krogh, A., and Palmer, R. (1991). Introduction to the Theory of
  Neural Computation. Addison-Wesley: Redwood City, California. ISBN
! 0-201-50395-6 (hardbound) and 0-201-51560-1 (paperbound) Comments: "My first
! impression is that this one is by far the best book on the topic. And it's
! below $30 for the paperback."; "Well written, theoretical (but not
  overwhelming)"; It provides a good balance of model development,
  computational algorithms, and applications. The mathematical derivations are
--- 60,105 ----
  =========
  
! 0.) The best (you can flame me if you do it entertainingly):
! ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
  
+ Bishop, C.M. (1995). Neural Networks for Pattern Recognition, Oxford: Oxford
+ University Press. ISBN 0-19-853849-9 (hardback) or 0-19-853864-2
+ (paperback), xvii+482 pages.
+ This is definitely the best book on neural nets for practical applications
+ (rather than for neurobiological models). It is the only textbook on neural
+ nets that I have seen that is statistically solid.
+ "Bishop is a leading researcher who has a deep understanding of the material
+ and has gone to great lengths to organize it in a sequence that makes sense.
+ He has wisely avoided the temptation to try to cover everything and has
+ therefore omitted interesting topics like reinforcement learning, Hopfield
+ networks, and Boltzmann machines in order to focus on the types of neural
+ networks that are most widely used in practical applications. He assumes
+ that the reader has the basic mathematical literacy required for an
+ undergraduate science degree, and using these tools he explains everything
+ from scratch. Before introducing the multilayer perceptron, for example, he
+ lays a solid foundation of basic statistical concepts. So the crucial
+ concept of overfitting is introduced using easily visualized examples of
+ one-dimensional polynomials and only later applied to neural networks. An
+ impressive aspect of this book is that it takes the reader all the way from
+ the simplest linear models to the very latest Bayesian multilayer neural
+ networks without ever requiring any great intellectual leaps." -Geoffrey
+ Hinton, from the foreword. 
+ 
  Haykin, S. (1994). Neural Networks, a Comprehensive Foundation. Macmillan,
! New York, NY.
! "A very readable, well written intermediate to advanced text on NNs
! Perspective is primarily one of pattern recognition, estimation and signal
! processing. However, there are well-written chapters on neurodynamics and
! VLSI implementation. Though there is emphasis on formal mathematical models
! of NNs as universal approximators, statistical estimators, etc., there are
! also examples of NNs used in practical applications. The problem sets at the
! end of each chapter nicely complement the material. In the bibliography are
! over 1000 references."
  
  Hertz, J., Krogh, A., and Palmer, R. (1991). Introduction to the Theory of
  Neural Computation. Addison-Wesley: Redwood City, California. ISBN
! 0-201-50395-6 (hardbound) and 0-201-51560-1 (paperbound)
! "My first impression is that this one is by far the best book on the topic.
! And it's below $30 for the paperback."; "Well written, theoretical (but not
  overwhelming)"; It provides a good balance of model development,
  computational algorithms, and applications. The mathematical derivations are
***************
*** 87,92 ****
  
  Masters,Timothy (1994). Practical Neural Network Recipes in C++. Academic
! Press, ISBN 0-12-479040-2, US $45 incl. disks. "Lots of very good practical
! advice which most other books lack."
  
  Jacek M. Zurada (1992). Introduction To Artificial Neural Systems.
--- 110,115 ----
  
  Masters,Timothy (1994). Practical Neural Network Recipes in C++. Academic
! Press, ISBN 0-12-479040-2, US $45 incl. disks.
! "Lots of very good practical advice which most other books lack."
  
  Jacek M. Zurada (1992). Introduction To Artificial Neural Systems.
***************
*** 93,105 ****
  Hardcover, 785 Pages, 317 Figures, ISBN 0-534-95460-X, 1992, PWS Publishing
  Company, Price: $56.75 (includes shipping, handling, and the ANS software
! diskette). Solutions Manual available. Cohesive and comprehensive book on
! neural nets; as an engineering-oriented introduction, but also as a research
! foundation. Thorough exposition of fundamentals, theory and applications.
! Training and recall algorithms appear in boxes showing steps of algorithms,
! thus making programming of learning paradigms easy. Many illustrations and
! intuitive examples. Winner among NN textbooks at a senior UG/first year
! graduate level-[175 problems] Contents: Intro, Fundamentals of Learning,
! Single-Layer & Multilayer Perceptron NN, Assoc. Memories, Self-organizing
! and Matching Nets, Applications, Implementations, Appendix) 
  
  1.) Books for the beginner:
--- 116,129 ----
  Hardcover, 785 Pages, 317 Figures, ISBN 0-534-95460-X, 1992, PWS Publishing
  Company, Price: $56.75 (includes shipping, handling, and the ANS software
! diskette). Solutions Manual available.
! Cohesive and comprehensive book on neural nets, suitable as an
! engineering-oriented introduction but also as a research foundation.
! Thorough exposition of fundamentals, theory and applications. Training and
! recall algorithms appear in boxes showing the steps of the algorithms, thus
! making programming of learning paradigms easy. Many illustrations and
! intuitive examples. Winner among NN textbooks at a senior UG/first year
! graduate level. [175 problems] (Contents: Intro, Fundamentals of Learning,
! Single-Layer & Multilayer Perceptron NN, Assoc. Memories, Self-organizing
! and Matching Nets, Applications, Implementations, Appendix) 
  
  1.) Books for the beginner:
***************
*** 812,816 ****
  ================================================
  
! 1. Neuron Digest
  ++++++++++++++++
  
--- 836,848 ----
  ================================================
  
! 1. Backpropagator's Review
! ++++++++++++++++++++++++++
! 
!    One of the best introductory sources is Donald Tveter's World-Wide-Web
!    page at http://www.mcs.com/~drt/bprefs.html, which contains both answers
!    to additional FAQs and an annotated neural net bibliography emphasizing
!    on-line articles. 
! 
! 2. Neuron Digest
  ++++++++++++++++
  
***************
*** 821,825 ****
     find the messages in that newsgroup in the form of digests. 
  
! 2. Usenet groups comp.ai.neural-nets (Oha!) and
  +++++++++++++++++++++++++++++++++++++++++++++++
     comp.theory.self-org-sys.
--- 853,857 ----
     find the messages in that newsgroup in the form of digests. 
  
! 3. Usenet groups comp.ai.neural-nets (Oha!) and
  +++++++++++++++++++++++++++++++++++++++++++++++
     comp.theory.self-org-sys.
***************
*** 832,836 ****
     day, please tell me then). 
  
! 3. Central Neural System Electronic Bulletin Board
  ++++++++++++++++++++++++++++++++++++++++++++++++++
  
--- 864,868 ----
     day, please tell me then). 
  
! 4. Central Neural System Electronic Bulletin Board
  ++++++++++++++++++++++++++++++++++++++++++++++++++
  
***************
*** 845,849 ****
     EchoMail compatible bulletin board systems. 
  
! 4. Neural ftp archive site ftp.funet.fi
  +++++++++++++++++++++++++++++++++++++++
  
--- 877,881 ----
     EchoMail compatible bulletin board systems. 
  
! 5. Neural ftp archive site ftp.funet.fi
  +++++++++++++++++++++++++++++++++++++++
  
***************
*** 855,859 ****
     at fastest. Contact: neural-adm@ftp.funet.fi 
  
! 5. USENET newsgroup comp.org.issnnet
  ++++++++++++++++++++++++++++++++++++
  
--- 887,891 ----
     at fastest. Contact: neural-adm@ftp.funet.fi 
  
! 6. USENET newsgroup comp.org.issnnet
  ++++++++++++++++++++++++++++++++++++
  
***************
*** 862,866 ****
     activities. 
  
! 6. AI CD-ROM
  ++++++++++++
  
--- 894,898 ----
     activities. 
  
! 7. AI CD-ROM
  ++++++++++++
  
***************
*** 887,891 ****
     details) 
  
! 7. NN events server
  +++++++++++++++++++
  
--- 919,923 ----
     details) 
  
! 8. NN events server
  +++++++++++++++++++
  
***************
*** 895,899 ****
     ftp://ftp.idiap.ch/html/NN-events/, 
  
! 8. World Wide Web
  +++++++++++++++++
  
--- 927,931 ----
     ftp://ftp.idiap.ch/html/NN-events/, 
  
! 9. World Wide Web
  +++++++++++++++++
  
***************
*** 911,916 ****
     Many others are available too; WWW is changing all the time. 
  
! 9. Neurosciences Internet Resource Guide
! ++++++++++++++++++++++++++++++++++++++++
  
     This document aims to be a guide to existing, free, Internet-accessible
--- 943,948 ----
     Many others are available too; WWW is changing all the time. 
  
! 10. Neurosciences Internet Resource Guide
! +++++++++++++++++++++++++++++++++++++++++
  
     This document aims to be a guide to existing, free, Internet-accessible
***************
*** 923,927 ****
     http://http2.sils.umich.edu/Public/nirg/nirg1.html. 
  
! 10. Academic programs list
  ++++++++++++++++++++++++++
  
--- 955,959 ----
     http://http2.sils.umich.edu/Public/nirg/nirg1.html. 
  
! 11. Academic programs list
  ++++++++++++++++++++++++++
  
***************
*** 933,937 ****
     Links to neurosci, psychology, linguistics lists are also provided. 
  
! 11. INTCON mailing list
  +++++++++++++++++++++++
  
--- 965,969 ----
     Links to neurosci, psychology, linguistics lists are also provided. 
  
! 12. INTCON mailing list
  +++++++++++++++++++++++
  

==> nn4.changes.body <==
*** nn4.oldbody	Thu Dec 28 18:24:25 1995
--- nn4.body	Tue Jan 30 13:27:11 1996
***************
*** 1,6 ****
  
  Archive-name: ai-faq/neural-nets/part4
! Last-modified: 1995/12/28
! URL: ftp://ftp.sas.com/pub/neural/FAQ4
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
  
--- 1,6 ----
  
  Archive-name: ai-faq/neural-nets/part4
! Last-modified: 1996-01-06
! URL: ftp://ftp.sas.com/pub/neural/FAQ4.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
  

==> nn5.changes.body <==
*** nn5.oldbody	Thu Dec 28 18:24:29 1995
--- nn5.body	Tue Jan 30 13:27:19 1996
***************
*** 1,6 ****
  
  Archive-name: ai-faq/neural-nets/part5
! Last-modified: 1995/12/28
! URL: ftp://ftp.sas.com/pub/neural/FAQ5
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
  
--- 1,6 ----
  
  Archive-name: ai-faq/neural-nets/part5
! Last-modified: 1996-01-17
! URL: ftp://ftp.sas.com/pub/neural/FAQ5.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
  
***************
*** 60,63 ****
--- 60,66 ----
  ===========
  
+ Note for future submissions: Please restrict yourself to 60 lines in length.
+ Please send an HTML-formatted version if at all possible. 
+ 
  The following simulators are described below: 
  
***************
*** 73,80 ****
  10. SNNS 
  11. Aspirin/MIGRAINES 
! 12. Adaptive Logic Network kit 
  13. NeuralShell 
  14. PDP++ 
! 15. Xerion 
  16. Neocognitron simulator 
  17. Multi-Module Neural Computing Environment (MUME) 
--- 76,83 ----
  10. SNNS 
  11. Aspirin/MIGRAINES 
! 12. Adaptive Logic Network Educational Kit 
  13. NeuralShell 
  14. PDP++ 
! 15. Uts (Xerion, the sequel) 
  16. Neocognitron simulator 
  17. Multi-Module Neural Computing Environment (MUME) 
***************
*** 234,248 ****
     MB). 
  
! 12. Adaptive Logic Network kit
! ++++++++++++++++++++++++++++++
  
     This package differs from the traditional nets in that it uses logic
!    functions rather than floating point; for many tasks, ALN's can show many
!    orders of magnitude gain in training and performance speed. Anonymous ftp
!    from menaik.cs.ualberta.ca [129.128.4.241] in directory /pub/atree. See
!    the files README (7 KB), atree2.tar.Z (145 kb, Unix source code and
!    examples), atree2.ps.Z (76 kb, documentation), a27exe.exe (412 kb,
!    MS-Windows 3.x executable), atre27.exe (572 kb, MS-Windows 3.x source
!    code). 
  
  13. NeuralShell
--- 237,252 ----
     MB). 
  
! 12. Adaptive Logic Network Educational Kit (for Windows)
! ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
  
     This package differs from the traditional nets in that it uses logic
!    functions AND and OR in all hidden layers but the first, which uses
!    simple perceptrons. This representation of functions from real valued
!    inputs to real outputs allows the user to impose constraints on the
!    learned solution (monotonicity, convexity,...). Execution software is
!    provided in C source form for experimenters. Anonymous ftp from
!    ftp.cs.ualberta.ca in directory /pub/atree/atree3/. See files 
!    atree3ek.exe and atree3ek.brief.guide. This software is the same as the
!    commercial Atree 3.0 program for functions of one or two inputs. 
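
   To make the architecture concrete, here is a toy sketch in Python (the
   weights are made up for illustration; this is not Dendronic's code) of
   how such a network computes its output: simple perceptrons in the first
   hidden layer, logic gates AND and OR above them:

       def perceptron(weights, bias, x):
           # linear threshold unit on the real-valued inputs
           return sum(w * xi for w, xi in zip(weights, x)) + bias > 0.0

       def eval_aln(x):
           # first hidden layer: perceptrons with arbitrary example weights
           p1 = perceptron([1.0, -1.0], 0.2, x)
           p2 = perceptron([0.5, 1.0], -0.7, x)
           p3 = perceptron([-1.0, 0.3], 0.1, x)
           # remaining hidden layers: AND and OR gates only
           return (p1 and p2) or p3

       print(eval_aln([0.4, 0.1]))         # -> False for this input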
  
  13. NeuralShell
***************
*** 279,294 ****
     on-line at the above address. 
  
! 15. Xerion
! ++++++++++
  
!    Xerion runs on SGI and Sun machines and uses X Windows for graphics. The
!    software contains modules that implement Back Propagation, Recurrent Back
!    Propagation, Boltzmann Machine, Mean Field Theory, Free Energy
!    Manipulation, Hard and Soft Competitive Learning, and Kohonen Networks.
!    Sample networks built for each of the modules are also included. Contact:
!    xerion@ai.toronto.edu. Xerion is available via anonymous ftp from
!    ftp.cs.toronto.edu [128.100.1.105] in directory /pub/xerion as 
!    xerion-3.1.ps.Z (153 kB) and xerion-3.1.tar.Z (1.3 MB) plus several
!    concrete simulators built with xerion (about 40 kB each). 
  
  16. Neocognitron simulator
--- 283,298 ----
     on-line at the above address. 
  
! 15. Uts (Xerion, the sequel)
! ++++++++++++++++++++++++++++
  
!    Uts is a portable artificial neural network simulator written on top of
!    the Tool Command Language (Tcl) and the Tk UI toolkit. As a result, the
!    user interface is readily modifiable, and the graphical user interface
!    and visualization tools can be used at the same time as scripts written
!    in Tcl. Uts itself implements only the connectionist paradigm of linked
!    units in Tcl and the basic elements of the graphical user interface. To
!    make a ready-to-use package, there are modules which use Uts to do
!    back-propagation (tkbp) and Gaussian mixture optimization by EM (tkmxm).
!    Uts is available by anonymous ftp from ftp.cs.toronto.edu in directory
!    /pub/xerion.
  
  16. Neocognitron simulator
***************
*** 592,616 ****
     ftp://ftp.cica.indiana.edu/pub/pc/win3/programr/ainet100.zip or from 
     ftp://oak.oakland.edu/SimTel/win3/math/ainet100.zip 
- 
- 32. TDL - Trans-Dimensional Learning
- ++++++++++++++++++++++++++++++++++++
- 
-     FTP
-    to: oak.oakland.edu Directory: SimTel/win3/neurlnet File: tdl10.zip
-    (Trans-Dimensional Learning) WWW: http://www.acs.oakland.edu/oak.html
-    in win3/neurlnet 
- 
-    TDL v1.0 allows users to perform pattern recognition by utilizing
-    software that allows for fast, automatic construction of Neural Networks,
-    mostly alleviating the need for parameter tuning. Evolutionary processes
-    combined with semi-weighted networks (hybrid cross between standard
-    weighted neurons and weightless n-level threshold units) generally yield
-    very compact networks (i.e., reduced connections and hidden units). By
-    supporting multi-shot learning over standard one-shot learning, multiple
-    data sets (characterized by varying input and output dimensions) can be
-    learned incrementally, resulting in a single coherent network. This can
-    also lead to significant improvements in predictive accuracy
-    (Trans-dimensional generalization). Graphical support and several data
-    files are also provided. 
  
  For some of these simulators there are user mailing lists. Get the packages
--- 596,599 ----

==> nn6.changes.body <==
*** nn6.oldbody	Thu Dec 28 18:24:32 1995
--- nn6.body	Tue Jan 30 13:27:24 1996
***************
*** 1,6 ****
  
  Archive-name: ai-faq/neural-nets/part6
! Last-modified: 1995/12/28
! URL: ftp://ftp.sas.com/pub/neural/FAQ6
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
  
--- 1,6 ----
  
  Archive-name: ai-faq/neural-nets/part6
! Last-modified: 1996-01-06
! URL: ftp://ftp.sas.com/pub/neural/FAQ6.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
  
***************
*** 61,65 ****
  
  Note for future submissions: Please restrict yourself to 60 lines in length and
! avoid markting hype. Please send a HTML-formatted version if at all
  possible. 
  
--- 61,65 ----
  
  Note for future submissions: Please restrict yourself to 60 lines in length and
! avoid marketing hype. Please send an HTML-formatted version if at all
  possible. 
  
***************
*** 68,72 ****
  1. nn/xnn 
  2. BrainMaker 
! 3. SAS Software/ Neural Net add-on 
  4. NeuralWorks 
  5. MATLAB Neural Network Toolbox 
--- 68,72 ----
  1. nn/xnn 
  2. BrainMaker 
! 3. SAS Macros for Neural Networks 
  4. NeuralWorks 
  5. MATLAB Neural Network Toolbox 
***************
*** 89,92 ****
--- 89,94 ----
  22. NeuroGenetic Optimizer (NGO) 
  23. WAND 
+ 24. Atree 3.0 Adaptive Logic Network 
+ 25. TDL v. 1.1 (Trans-Dimensional Learning) 
  
  1. nn/xnn
***************
*** 213,218 ****
       Introduction to Neural Networks 324 pp book
  
! 3. SAS Software/ Neural Net add-on
! ++++++++++++++++++++++++++++++++++
  
            Name: SAS Software
--- 215,220 ----
       Introduction to Neural Networks 324 pp book
  
! 3. SAS Macros for Neural Networks
! +++++++++++++++++++++++++++++++++
  
            Name: SAS Software
***************
*** 219,238 ****
         Company: SAS Institute, Inc.
         Address: SAS Campus Drive, Cary, NC 27513, USA
!      Phone,Fax: (919) 677-8000
           Email: saswss@unx.sas.com (Neural net inquiries only)
             URL: ftp://ftp.sas.com/pub/neural/README
  
!     Basic capabilities:
!       Feedforward nets with numerous training methods
!       and loss functions, plus statistical analogs of
!       counterpropagation and various unsupervised
!       architectures
!     Operating system: Lots
!     System requirements: Lots
!     Uses XMS or EMS for large models(PCs only): Runs under Windows, OS/2
!     Approx. price: Free neural net software, but you have to license
!                    SAS/Base software and preferably the SAS/OR, SAS/ETS,
!                    and/or SAS/STAT products.
!     Comments: Oriented toward data analysis and statistical applications
  
  4. NeuralWorks
--- 221,254 ----
         Company: SAS Institute, Inc.
         Address: SAS Campus Drive, Cary, NC 27513, USA
!          Phone: (919) 677-8000
           Email: saswss@unx.sas.com (Neural net inquiries only)
             URL: ftp://ftp.sas.com/pub/neural/README
+    Operating system: Lots
+    System requirements: Lots
  
!    Several SAS macros for feedforward neural nets are available
!    for releases 6.08 and later. For a list of macros and
!    articles relating to neural networks, see ftp://ftp.sas.com/pub/neural/README.
!    The macros are free but won't do you any good unless you
!    have licensed the required SAS products. If you want
!    information about licensing SAS products, call 919 677-8000
!    and ask for Software Sales.
! 
!    TNN is an elaborate system of macros for feedforward neural
!    nets including a variety of built-in activation and error
!    functions, multiple hidden layers, direct input-output
!    connections, missing value handling, categorical variables,
!    standardization of inputs and targets, and multiple
!    preliminary optimizations from random initial values to
!    avoid local minima.  TNN requires the SAS/OR product in
!    release 6.08 or later.  Release 6.10 or later is strongly
!    recommended. Release 6.10 is required for the plotting
!    macros to use SAS/INSIGHT.
! 
!    NETIML is a collection of SAS/IML modules and macros for
!    training and running multilayer perceptrons with a variety
!    of activation and error functions. NETIML requires the
!    SAS/IML product in release 6.08 or later.
! 
  
  4. NeuralWorks
***************
*** 1041,1046 ****
     To contact Novel Technical Solutions email: <neural@nts.sonnet.co.uk>. 
  
  ------------------------------------------------------------------------
  
! Next part is part 7 (of 7). Previous part is part 5. 
  
--- 1057,1185 ----
     To contact Novel Technical Solutions email: <neural@nts.sonnet.co.uk>. 
  
+ 24. Atree 3.0 Adaptive Logic Network Development System (for Windows)
+ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ 
+    Contact:
+    Dendronic Decisions Limited
+    3624 - 108 Street
+    Edmonton, Alberta
+    Canada T6J 1B4
+ 
+    tel/fax +1 (403) 438-8285
+    or email William W. Armstrong, President (arms@cs.ualberta.ca)
+    or use the Dendronic forum on CompuServe (GO DENDRONIC)
+ 
+    Atree 3.0 trains feedforward networks having perceptrons in
+    the first hidden layer and logic gates AND and OR in other hidden
+    layers.  Functions from real inputs to a real output can be represented.
+ 
+    Users can specify constraints on monotonicity, derivatives (slopes)
+    and convexity of functions being learned.  Such expert knowledge
+    can be used to ensure the result of training satisfies requirements
+    of known physical or economic laws.  Functions can be inverted without
+    additional training, a capability useful in control applications.
+ 
+    The execution library, which computes learned functions at high
+    speed, is offered in source form (code suitable for Windows and
+    Unix is available free of charge).  Atree 3.0 outputs ALN decision
+    trees in human-readable form (for checking) as well as in binary form
+    (for fast reloading).  The commercial license allows redistribution
+    and modification of execution code.
+ 
+    Atree 3.0 may be used for data analysis, prediction, pattern recognition
+    and for real-time control applications that must run on a typical computer
+    (such as a PC).  Scripts can be run automatically and can be called from
+    macros in Microsoft Excel and MS Access or from other applications.
+    Many samples showing how to use Atree 3.0 are provided.
+ 
+    The open architecture of the execution library is important
+    when outputs have to be checked for conformity to a specification.
+    The user is entirely responsible for making his/her applications
+    safe to use, of course, but the openness of everything concerned with
+    execution of the results of Atree 3.0 training supports that goal.
+ 
+    A manual of approximately 100 pages will be supplied.
+ 
+    Introductory price until March 31, 1996: $99 US (or $125 Canadian
+    for Canadian residents only -- price includes GST).  Sending a
+    bank draft or money order is recommended.  Personal or corporate
+    cheques drawn on a US bank (or on Canadian bank, in Canada) are
+    acceptable.  Credit card orders are not accepted at this time.
+    Please make cheques payable to Dendronic Decisions Limited.
+ 
+    The software can be tried out using the Atree 3.0
+    Educational Kit, available via anonymous ftp from ftp.cs.ualberta.ca
+    in directory /pub/atree/atree3/. See files atree3ek.exe
+    and atree3ek.brief.guide.
+    The Educational Kit is restricted to learning functions with one or two
+    inputs. A built-in 2D and 3D plotting capability is useful to help
+    the user understand how ALNs work.
+ 
+ 25. TDL v. 1.1 (Trans-Dimensional Learning)
+ +++++++++++++++++++++++++++++++++++++++++++
+ 
+    Platform: Windows 3.*
+    Company: Universal Problem Solvers, Inc.
+    WWW-Site (UPSO): http://pages.prodigy.com/FL/lizard/index.html
+    or FTP-Site (FREE Demo only): ftp.coast.net, in Directory:
+    SimTel/win3/neurlnet, Files: tdl11-1.zip and tdl11-2.zip
+    Cost of complete program: US$20 + (US$3 Shipping and Handling).
+ 
+    The purpose of TDL is to provide users of neural networks with a specific
+    platform for pattern recognition tasks.  The system allows for fast,
+    automatic construction of neural networks, so there is no need to create
+    networks manually or to twiddle with learning parameters.  TDL's Wizard can
+    help you optimize pattern recognition accuracy.  Besides allowing the user
+    to automatically construct a neural network for a given pattern recognition
+    task, the system supports trans-dimensional learning.  Simply put, this
+    allows one to learn several tasks within a single network, even when the
+    tasks differ in the number of input stimuli and output responses used to
+    describe them.  With TDL it is possible to incrementally learn various
+    pattern recognition tasks within a single coherent neural network structure.
+    Furthermore, TDL supports the use of semi-weighted neural networks, a hybrid
+    cross between standard weighted neural networks and weightless multi-level
+    threshold units.  Combining both can result in extremely compact network
+    structures (i.e., fewer connections and hidden units) and improved predictive
+    accuracy on yet unseen patterns.  Of course, the user has the option to
+    create networks which use only standard weighted neurons.
+ 
+    System Highlights:
+    (1) The user is in control of TDL's memory system (you can decide how many
+    examples and neurons are allocated; no more limitations, except for your
+    computer's memory).
+    (2) TDL's Wizard supports hassle-free development of neural networks, the
+    goal of course being optimization of predictive accuracy on unseen patterns.
+    (3) A history option allows users to capture their favorite keystrokes and
+    save them for easy recall and future use.
+    (4) Provides a symbolic interface which allows the user to create: input and
+    output definition files, pattern files, and help files for objects (i.e.,
+    inputs, input values, and outputs).
+    (5) Supports categorization of inputs.  This allows the user to readily
+    access inputs via a popup menu within the main TDL menu.  The hierarchical
+    structure of the popup menu is under the full control of the application
+    developer (i.e., the user).
+    (6) Symbolic object manipulation tool: allows the user to interactively
+    design the input/output structure of an application.  The user can create,
+    delete, or modify inputs, outputs, input values, and categories.
+    (7) Supports rule representation: (a) extends the standard Boolean operators
+    (i.e., and, or, not) with several quantifiers (i.e., atmost, atleast,
+    exactly, between); (b) provides mechanisms for rule revision (i.e.,
+    refinement) and extraction; (c) allows partial rule recognition (first-fit
+    and best-fit are supported).
+    (8) Allows co-evolution of different subpopulations (based on the type of
+    transfer function chosen for each subpopulation).
+    (9) Provides three types of crossover operators: simple random, weighted,
+    and blocked.
+    (10) Supports both one-shot and multi-shot learning.  Multi-shot learning
+    allows for the incremental acquisition of different data sets.  A single
+    expert network is constructed, capable of recognizing all the data sets
+    supplied during learning.  Quick context switching between different
+    domains is possible.
+    (11) Three types of local learning rules are included: perceptron, delta,
+    and fastprop.
+    (12) Implements 7 types of unit transfer functions: simple threshold,
+    sigmoid, sigmoid-squash, n-level threshold, new n-level threshold,
+    Gaussian, and linear.
+    (13) Over a dozen statistics are collected during various batch training
+    sessions.  These can be viewed using the chart option.
+    (14) A 140+ page hypertext on-line help menu is available.
+    (15) A demonstration of TDL can be invoked when initially starting the
+    program.
+ 
  ------------------------------------------------------------------------
  
! Next part is part 7 (of 7). Previous part is part 5. 
  

==> nn7.changes.body <==
*** nn7.oldbody	Thu Dec 28 18:24:35 1995
--- nn7.body	Tue Jan 30 13:27:30 1996
***************
*** 1,6 ****
  
  Archive-name: ai-faq/neural-nets/part7
! Last-modified: 1995/12/28
! URL: ftp://ftp.sas.com/pub/neural/FAQ7
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
  
--- 1,6 ----
  
  Archive-name: ai-faq/neural-nets/part7
! Last-modified: 1996-01-06
! URL: ftp://ftp.sas.com/pub/neural/FAQ7.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
  
***************
*** 503,506 ****
  
  Neural network FAQ / Warren S. Sarle, saswss@unx.sas.com
- @
  
--- 503,505 ----
-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
