Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!das-news.harvard.edu!news2.near.net!MathWorks.Com!europa.eng.gtefsd.com!howland.reston.ans.net!pipex!uunet!usc!nic-nac.CSU.net!charnel.ecst.csuchico.edu!csusac!csus.edu!netcom.com!grady
From: grady@netcom.com (Grady Ward)
Subject: Re: stock prediction :(
Message-ID: <gradyCxFKwt.LJ4@netcom.com>
Organization: +1 707 826 7715
X-Newsreader: TIN [version 1.2 PL1]
References: <CwyKp7.66q@watdragon.uwaterloo.ca> <36i6sb$8j0@newsbf01.news.aol.com> <1994Oct9.002814.5003@cc.ic.ac.uk>
Date: Mon, 10 Oct 1994 00:19:40 GMT
Lines: 14

Ata Etemadi (atae@spva.ph.ic.ac.uk) wrote:
: but it leaves much to be desired. First of all, you have to do better 
: than random and also be sure that a plain linear (or higher order) 
: extrapolation is not just as good. Most importantly however, other than 

Of course neural nets will often produce models equivalent to some linear or
higher-order polynomial fit obtained from simple factor analysis. And why
not?  Nets ought to be able to solve problems that have simple models too.
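A minimal sketch of the point (a hypothetical illustration, not from the
original post): a single linear "neuron" trained by plain gradient descent on
noisy linear data recovers essentially the same coefficients as a closed-form
least-squares fit, so a net matching a simple model is expected behavior, not
a failure.

```python
import numpy as np

# Hypothetical example: data generated from y = 3x + 2 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * x + 2.0 + rng.normal(0, 0.05, size=(200, 1))

# One linear "neuron": pred = w*x + b, trained by gradient descent
# on mean squared error.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    err = (w * x + b) - y
    w -= lr * (err * x).mean()
    b -= lr * err.mean()

# Closed-form least-squares fit for comparison.
A = np.hstack([x, np.ones_like(x)])
w_ls, b_ls = np.linalg.lstsq(A, y, rcond=None)[0].ravel()

print(float(w), float(b))        # both near 3.0 and 2.0
print(float(w_ls), float(b_ls))  # nearly identical coefficients
```

The trained neuron and the analytic fit land on practically the same line,
which is the sense in which a net "offers a model equivalent to" the simple
one.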

-- 
Grady Ward       |  For information and free samples on | "Look!" 
grady@netcom.com |  royalty-free Moby natural language  |  -- Madame Sosostris
+1 707 826 7715  |  lexicons (largest in the world),    |     A91F2740531E6801
(voice/24hr FAX) |  run:        finger grady@netcom.com |     5B117D084B916B27
