Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!news.kei.com!ub!acsu.buffalo.edu!chan
From: chan@acsu.buffalo.edu (Lipchen Alex Chan)
Subject: Training/Test Break in Recurrent Cascaded Correlation NN
Message-ID: <Cxxs6y.HxD@acsu.buffalo.edu>
Sender: nntp@acsu.buffalo.edu
Nntp-Posting-Host: solaris.acsu.buffalo.edu
Organization: UB
Date: Wed, 19 Oct 1994 20:13:46 GMT
Lines: 27

I have two questions about the function of the Training/Test Breaks in the 
Recurrent Cascade-Correlation NN proposed by Scott Fahlman. In morse.c, 
given as an example with rcc1.c, it is written:
     * During training there is an
     * additional input called TrainingBreaks which clears the network
     * state between letters.  After training the strobe output could be
     * used to clear the network state (This is NOT implemented).

1) What are the REAL reasons to have training/test breaks in the network?
   From rcc1.c, I can see that these breaks keep the single recurrent link
   from contributing to the summation of slopes and activations across
   pattern boundaries. But they don't actually reset anything -- say, clear
   the previous values of the hidden units, the current outputs, etc. So
   what "network state" is actually cleared in this case?

2) How can the TestBreak be replaced by strobe outputs after the network
   is trained? Should the strobe output be fed back as an extra input to
   the network, with that input acting as the test break for the next
   pattern?

Thanks in advance for any comments or advice.

Alex
-- 
  +----------------------------------------------------------------+
  | Name     :   Lipchen Alex Chan                                 |
  | E-mail   :   chan@eng.buffalo.edu or chan@acsu.buffalo.edu     |
  +----------------------------------------------------------------+
