Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!newsstand.cit.cornell.edu!portc01.blue.aol.com!portc02.blue.aol.com!newsfeed.pitt.edu!news.duq.edu!newsgate.duke.edu!interpath!news.interpath.net!news.interpath.net!sas!newshost.unx.sas.com!hotellng.unx.sas.com!saswss
From: saswss@unx.sas.com (Warren Sarle)
Subject: changes to "comp.ai.neural-nets FAQ" -- monthly posting
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <nn.changes.posting_849240042@hotellng.unx.sas.com>
Supersedes: <nn.changes.posting_846561651@hotellng.unx.sas.com>
Date: Fri, 29 Nov 1996 04:00:43 GMT
Expires: Fri, 3 Jan 1997 04:00:42 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
Reply-To: saswss@unx.sas.com (Warren Sarle)
Organization: SAS Institute Inc., Cary, NC, USA
Keywords: modifications, new, additions, deletions
Followup-To: comp.ai.neural-nets
Lines: 346

==> nn1.changes.body <==
*** nn1.oldbody	Mon Oct 28 23:00:19 1996
--- nn1.body	Thu Nov 28 23:00:10 1996
***************
*** 1,4 ****
  Archive-name: ai-faq/neural-nets/part1
! Last-modified: 1996-10-14
  URL: ftp://ftp.sas.com/pub/neural/FAQ.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
--- 1,4 ----
  Archive-name: ai-faq/neural-nets/part1
! Last-modified: 1996-11-27
  URL: ftp://ftp.sas.com/pub/neural/FAQ.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
***************
*** 51,56 ****
  To find the answer of question "x", search for the string "Subject: x"
  
- And now, in the end, we begin: 
- 
  ========== Questions ========== 
  ********************************
--- 51,54 ----
***************
*** 60,63 ****
--- 58,62 ----
     What is this newsgroup for? How shall it be used?
     Where is comp.ai.neural-nets archived?
+    May I copy this FAQ?
     What is a neural network (NN)?
     What can you do with an NN and what not?
***************
*** 80,83 ****
--- 79,83 ----
     Should I normalize/standardize/rescale the data?
     Should I nonlinearly transform the data?
+    How to measure importance of inputs?
     What is ART?
     What is PNN?
***************
*** 279,282 ****
--- 279,296 ----
  ------------------------------------------------------------------------
  
+ Subject: May I copy this FAQ?
+ =============================
+ 
+ The intent in providing a FAQ is to make the information freely available to
+ whoever needs it. You may copy all or part of the FAQ, but please be sure
+ to include a reference to the URL of the master copy,
+ ftp://ftp.sas.com/pub/neural/FAQ.html, and do not sell copies of the FAQ. If
+ you want to include information from the FAQ in your own web site, it is
+ better to include links to the master copy rather than to copy text from the
+ FAQ to your web pages, because various answers in the FAQ are updated at
+ unpredictable times. 
+ 
+ ------------------------------------------------------------------------
+ 
  Subject: What is a neural network (NN)?
  =======================================
***************
*** 620,623 ****
  ------------------------------------------------------------------------
  
! Next part is part 2 (of 7). @
  
--- 634,637 ----
  ------------------------------------------------------------------------
  
! Next part is part 2 (of 7). 
  

==> nn2.changes.body <==
*** nn2.oldbody	Mon Oct 28 23:00:26 1996
--- nn2.body	Thu Nov 28 23:00:17 1996
***************
*** 1,4 ****
  Archive-name: ai-faq/neural-nets/part2
! Last-modified: 1996-10-05
  URL: ftp://ftp.sas.com/pub/neural/FAQ2.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
--- 1,4 ----
  Archive-name: ai-faq/neural-nets/part2
! Last-modified: 1996-11-27
  URL: ftp://ftp.sas.com/pub/neural/FAQ2.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
***************
*** 26,29 ****
--- 26,30 ----
     Should I normalize/standardize/rescale the data?
     Should I nonlinearly transform the data?
+    How to measure importance of inputs?
     What is ART?
     What is PNN?
***************
*** 140,147 ****
  numerical analysis literature (Poljak 1964; Bertsekas 1995, 78-79). 
  
! Standard backprop can be used for on-line training (in which the weights are
! updated after processing each case) but it does not converge. To obtain
! convergence, the learning rate must be slowly reduced. This methodology is
! called stochastic approximation. 
  
  For batch processing, there is no reason to suffer through the slow
--- 141,153 ----
  numerical analysis literature (Poljak 1964; Bertsekas 1995, 78-79). 
  
! Standard backprop can be used for incremental (on-line) training (in which
! the weights are updated after processing each case) but it does not converge
! to a stationary point of the error surface. To obtain convergence, the
! learning rate must be slowly reduced. This methodology is called stochastic
! approximation. 
! 
! The convergence properties of standard backprop, stochastic approximation,
! and related methods, including both batch and incremental algorithms, are
! discussed clearly and thoroughly by Bertsekas and Tsitsiklis (1996). 
  
  For batch processing, there is no reason to suffer through the slow
***************
*** 164,167 ****
--- 170,176 ----
     Scientific, ISBN 1-886529-14-0. 
  
+    Bertsekas, D. P. and Tsitsiklis, J. N. (1996), Neuro-Dynamic
+    Programming, Belmont, MA: Athena Scientific, ISBN 1-886529-10-8. 
+ 
     Poljak, B.T. (1964), "Some methods of speeding up the convergence of
     iteration methods," Z. Vycisl. Mat. i Mat. Fiz., 4, 1-17. 
***************
*** 214,220 ****
  optimization have been studied for hundreds of years, and there is a huge
  literature on the subject in fields such as numerical analysis, operations
! research, and statistical computing, e.g., Bertsekas (1995), Gill, Murray,
! and Wright (1981). Masters (1995) has a good elementary discussion of
! conjugate gradient and Levenberg-Marquardt algorithms in the context of NNs.
  
  There is no single best method for nonlinear optimization. You need to
--- 223,230 ----
  optimization have been studied for hundreds of years, and there is a huge
  literature on the subject in fields such as numerical analysis, operations
! research, and statistical computing, e.g., Bertsekas (1995), Bertsekas and
! Tsitsiklis (1996), Gill, Murray, and Wright (1981). Masters (1995) has a
! good elementary discussion of conjugate gradient and Levenberg-Marquardt
! algorithms in the context of NNs. 
  
  There is no single best method for nonlinear optimization. You need to
***************
*** 262,265 ****
--- 272,278 ----
     Scientific, ISBN 1-886529-14-0. 
  
+    Bertsekas, D. P. and Tsitsiklis, J. N. (1996), Neuro-Dynamic
+    Programming, Belmont, MA: Athena Scientific, ISBN 1-886529-10-8. 
+ 
     Gill, P.E., Murray, W. and Wright, M.H. (1981) Practical Optimization,
     Academic Press: London. 
***************
*** 1567,1570 ****
--- 1580,1593 ----
  ------------------------------------------------------------------------
  
+ Subject: How to measure importance of inputs?
+ =============================================
+ 
+ The answer to this question is still in the process of being written. As of
+ 1996-11-27, the latest draft can be found via web browser at 
+ ftp://ftp.sas.com/pub/neural/importance.html, but this is a temporary URL,
+ subject to change at any time. 
+ 
+ ------------------------------------------------------------------------
+ 
  Subject: What is ART?
  =====================
***************
*** 1599,1614 ****
  ART has its own jargon. For example, data are called an arbitrary sequence
  of input patterns. The current training case is stored in short term memory
! and cluster seeds are long term memory. A cluster is a maximally compressed
! pattern recognition code. The two stages of finding the nearest seed to the
! input are performed by an Attentional Subsystem and an Orienting Subsystem,
! the latter of which performs hypothesis testing, which simply refers to the
! comparison with the vigilance threshold, not to hypothesis testing in the
! statistical sense. Stable learning means that the algorithm converges. So
! the oft-repeated claim that ART algorithms are "capable of rapid stable
! learning of recognition codes in response to arbitrary sequences of input
! patterns" merely means that ART algorithms are clustering algorithms that
! converge; it does not mean, as one might naively assume, that the clusters
! are insensitive to the sequence in which the training patterns are
! presented--quite the opposite is true. 
  
  There are various supervised ART algorithms that are named with the suffix
--- 1622,1637 ----
  ART has its own jargon. For example, data are called an arbitrary sequence
  of input patterns. The current training case is stored in short term memory
! and cluster seeds are long term memory. A cluster is a maximally
! compressed pattern recognition code. The two stages of finding the nearest
! seed to the input are performed by an Attentional Subsystem and an 
! Orienting Subsystem, the latter of which performs hypothesis testing, which
! simply refers to the comparison with the vigilance threshold, not to
! hypothesis testing in the statistical sense. Stable learning means that the
! algorithm converges. So the oft-repeated claim that ART algorithms are
! "capable of rapid stable learning of recognition codes in response to
! arbitrary sequences of input patterns" merely means that ART algorithms are
! clustering algorithms that converge; it does not mean, as one might naively
! assume, that the clusters are insensitive to the sequence in which the
! training patterns are presented--quite the opposite is true. 
  
  There are various supervised ART algorithms that are named with the suffix
***************
*** 1942,1948 ****
     have fuzzy outputs. 
   o The net can be interpretable as an adaptive fuzzy system. For example,
!    Gaussian RBF nets and B-spline regression models (Dierckx 1995) are fuzzy
!    systems with adaptive weights (Brown and Harris 1994) and can
!    legitimately be called neurofuzzy systems. 
   o The net can be a conventional NN architecture that operates on fuzzy
     numbers instead of real numbers (Lippe, Feuring and Mischke 1995). 
--- 1965,1971 ----
     have fuzzy outputs. 
   o The net can be interpretable as an adaptive fuzzy system. For example,
!    Gaussian RBF nets and B-spline regression models (Dierckx 1995, van
!    Rijckevorsel 1988) are fuzzy systems with adaptive weights (Brown and
!    Harris 1994) and can legitimately be called neurofuzzy systems. 
   o The net can be a conventional NN architecture that operates on fuzzy
     numbers instead of real numbers (Lippe, Feuring and Mischke 1995). 
***************
*** 2000,2003 ****
--- 2023,2030 ----
     Informatik, WWU Muenster, I-12, 
     http://wwwmath.uni-muenster.de/~feuring/WWW_literatur/bericht12_95.ps.gz 
+ 
+    van Rijckevorsel, J.L.A. (1988), "Fuzzy coding and B-splines," in van
+    Rijckevorsel, J.L.A., and de Leeuw, J., eds., Component and
+    Correspondence Analysis, Chichester: John Wiley & Sons, pp. 33-54. 
  
  ------------------------------------------------------------------------

==> nn3.changes.body <==
*** nn3.oldbody	Mon Oct 28 23:00:31 1996
--- nn3.body	Thu Nov 28 23:00:24 1996
***************
*** 1073,1076 ****
  ------------------------------------------------------------------------
  
! Next part is part 4 (of 7). Previous part is part 2. 
  
--- 1073,1076 ----
  ------------------------------------------------------------------------
  
! Next part is part 4 (of 7). Previous part is part 2. @
  

==> nn4.changes.body <==
*** nn4.oldbody	Mon Oct 28 23:00:36 1996
--- nn4.body	Thu Nov 28 23:00:28 1996
***************
*** 1,4 ****
  Archive-name: ai-faq/neural-nets/part4
! Last-modified: 1996-08-15
  URL: ftp://ftp.sas.com/pub/neural/FAQ4.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
--- 1,4 ----
  Archive-name: ai-faq/neural-nets/part4
! Last-modified: 1996-11-07
  URL: ftp://ftp.sas.com/pub/neural/FAQ4.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
***************
*** 1170,1177 ****
  ++++++++++++++++++++++++++++++++++++++++
  
!    Accessible via anonymous FTP on ftp.cs.cmu.edu [128.2.206.173] in
!    directory /afs/cs/project/connect/bench. In case of problems or if you
!    want to donate data, email contact is "neural-bench@cs.cmu.edu". The data
!    sets in this repository include the 'nettalk' data, 'two spirals',
     protein structure prediction, vowel recognition, sonar signal
     classification, and a few others. 
--- 1170,1177 ----
  ++++++++++++++++++++++++++++++++++++++++
  
!    Accessible via WWW at http://www.boltz.cs.cmu.edu/ or via anonymous FTP at 
!    ftp://ftp.boltz.cs.cmu.edu/pub/neural-bench/. In case of problems or if
!    you want to donate data, email contact is "neural-bench@cs.cmu.edu". The
!    data sets in this repository include the 'nettalk' data, 'two spirals',
     protein structure prediction, vowel recognition, sonar signal
     classification, and a few others. 

==> nn5.changes.body <==
*** nn5.oldbody	Mon Oct 28 23:00:41 1996
--- nn5.body	Thu Nov 28 23:00:32 1996
***************
*** 1,4 ****
  Archive-name: ai-faq/neural-nets/part5
! Last-modified: 1996-10-21
  URL: ftp://ftp.sas.com/pub/neural/FAQ5.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
--- 1,4 ----
  Archive-name: ai-faq/neural-nets/part5
! Last-modified: 1996-11-07
  URL: ftp://ftp.sas.com/pub/neural/FAQ5.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
***************
*** 28,31 ****
--- 28,35 ----
  simulation?
  ===========
+ 
+ Since the FAQ maintainer works for a software company, he does not recommend
+ or evaluate software in the FAQ. The descriptions below are provided by the
+ developers or distributors of the software. 
  
  Note for future submissions: Please restrict software descriptions to a

==> nn6.changes.body <==
*** nn6.oldbody	Mon Oct 28 23:00:46 1996
--- nn6.body	Thu Nov 28 23:00:36 1996
***************
*** 1,4 ****
  Archive-name: ai-faq/neural-nets/part6
! Last-modified: 1996-10-21
  URL: ftp://ftp.sas.com/pub/neural/FAQ6.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
--- 1,4 ----
  Archive-name: ai-faq/neural-nets/part6
! Last-modified: 1996-11-07
  URL: ftp://ftp.sas.com/pub/neural/FAQ6.html
  Maintainer: saswss@unx.sas.com (Warren S. Sarle)
***************
*** 28,31 ****
--- 28,35 ----
  simulation?
  ===========
+ 
+ Since the FAQ maintainer works for a software company, he does not recommend
+ or evaluate software in the FAQ. The descriptions below are provided by the
+ developers or distributors of the software. 
  
  Note for future submissions: Please restrict product descriptions to a

==> nn7.changes.body <==
-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
 *** Do not send me unsolicited commercial or political email! ***

