Newsgroups: uk.finance,misc.invest.futures,comp.ai.genetic,comp.ai.fuzzy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!news.sprintlink.net!EU.net!chsun!itf1!mpeppler
From: mpeppler@itf.ch (Michael Peppler)
Subject: Re: Financial Neural Networks Home Page
Message-ID: <D5sr4D.CAq@itf.ch>
Organization: ITF Management SA, Geneva, Switzerland
References: <hacker-0403951257010001@dcc00171.slip.digex.net> <3jeqju$ovq@sndsu1.sedalia.sinet.slb.com> <3jfd8n$raa@deadmin.ucsd.edu> <D5ALvu.Fp5@nbn.com>
Date: Tue, 21 Mar 1995 15:31:23 GMT
Lines: 29
Xref: glinda.oz.cs.cmu.edu comp.ai.genetic:5375 comp.ai.fuzzy:4268

In article <D5ALvu.Fp5@nbn.com>, Ron Macken  <rmacken@calon.com> wrote:
>> >> ...  Time and time again it
>> >> has been demonstrated that optimizing on past data leads to disaster
>> >> when applied to the present tense using real money....
>
>Much has been made of the dangers of optimizing on past data.
>I'm working on a genetic programming system that is developed on
>several sets of data and then is tested on different sets of data.
>
>A model would only be considered successful if it worked well on
>all of the test sets of data.
>
>While this still carries risks as a predictor of the future,
>it seems that the risks are greatly reduced. Am I missing something?


This is fine if you find a good solution immediately. If you don't,
then after a couple of iterations (optimize on data set 'A', test on
data set 'B') the 'B' data set tends to 'pollute' the 'A' set, and you
start to use knowledge of the 'B' set implicitly.

This is not so good...
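The leakage can be shown with a small simulation (a sketch of my own,
not from any actual trading system; all names and parameters are made
up): random +1/-1 "strategies" on pure-noise returns have no real
predictive power, yet if you repeatedly keep only the candidates that
also score well on the 'B' set, the survivor you pick looks profitable
on 'B' -- while a genuinely untouched set 'C' shows its true worth.

```python
import random

random.seed(0)

def returns(n):
    # i.i.d. Gaussian noise: no strategy can genuinely predict these
    return [random.gauss(0, 1) for _ in range(n)]

def score(strategy, data):
    # profit of a fixed +1/-1 position sequence against the returns
    return sum(pos * r for pos, r in zip(strategy, data))

n = 250  # trading days per data set (arbitrary)
set_a, set_b, set_c = returns(n), returns(n), returns(n)

# "Optimize on A, test on B", repeated over many candidates:
# every round, B's performance feeds back into which models survive,
# so B's noise leaks into the selection.
survivors = []
for _ in range(2000):
    cand = [random.choice([-1, 1]) for _ in range(n)]
    if score(cand, set_a) > 0 and score(cand, set_b) > 0:
        survivors.append(cand)

best = max(survivors, key=lambda s: score(s, set_b))

print("score on B (used for selection):", round(score(best, set_b), 1))
print("score on C (truly held out):    ", round(score(best, set_c), 1))
```

The 'B' score is large by construction (it is the maximum over many
survivors), while the 'C' score is just noise centered on zero -- which
is why a final data set that never influences any selection decision is
the only honest estimate of out-of-sample performance.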

Michael
-- 
Michael Peppler, ITF Management SA, Fontaine 13, CH-1204 Geneva
mpeppler@itf.ch - Tel (4122) 312 1311 - Fax (4122) 312 1325
"A successful [software] tool is one that was used to do something
undreamed of by its author."   -- S. C. Johnson
