Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!das-news.harvard.edu!news2.near.net!MathWorks.Com!europa.eng.gtefsd.com!howland.reston.ans.net!agate!doc.ic.ac.uk!cc.ic.ac.uk!atae
From: atae@spva.ph.ic.ac.uk (Ata Etemadi)
Subject: Re: stock prediction :(
Message-ID: <1994Oct11.160253.15582@cc.ic.ac.uk>
Nntp-Posting-Host: icmag1.sp.ph
Reply-To: atae@spva.ph.ic.ac.uk
Organization: Imperial College of Science, Technology, and Medicine, London, England
References: <CwyKp7.66q@watdragon.uwaterloo.ca>  <gradyCxFKwt.LJ4@netcom.com>
Date: Tue, 11 Oct 94 16:02:53 BST
Lines: 29

In article <gradyCxFKwt.LJ4@netcom.com>, grady@netcom.com (Grady Ward) writes:
|> Ata Etemadi (atae@spva.ph.ic.ac.uk) wrote:
|> : but it leaves much to be desired. First of all, you have to do better 
|> : than random and also be sure that a plain linear (or higher order) 
|> : extrapolation is not just as good. Most importantly however, other than 
|> 
|> Of course neural nets will often offer models equivalent to some linear or
|> higher order polynomial obtained from simple factor analysis. And why
|> not?  Nets ought to be able to solve problems that have simple models too.
|> 
|> -- 
|> Grady Ward       |  For information and free samples on | "Look!" 
|> grady@netcom.com |  royalty-free Moby natural language  |  -- Madame Sosostris
|> +1 707 826 7715  |  lexicons (largest in the world),    |     A91F2740531E6801
|> (voice/24hr FAX) |  run:        finger grady@netcom.com |     5B117D084B916B27

The difference is that you don't have to train a linear extrapolation
routine for many CPU hours: it has a closed-form solution.
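To make the cost difference concrete, here is a minimal sketch (the function
name and interface are mine, not from this thread) of a least-squares line
fit: the normal equations give the coefficients directly in one pass over the
data, with no epochs of iterative training at all.

```python
# Closed-form least-squares fit of y = a + b*x, then extrapolation.
# Illustrative sketch only -- no iterative training is involved.

def linear_extrapolate(ys, steps_ahead=1):
    """Fit a line to ys sampled at x = 0..n-1 and predict ahead."""
    n = len(ys)
    xs = range(n)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Solve the normal equations directly: one arithmetic step,
    # versus many CPU hours of backpropagation for a net.
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a + b * (n - 1 + steps_ahead)
```

For exactly linear data such as 1, 3, 5, 7 the next predicted value is 9,
computed instantly; a net would need many training iterations to approximate
the same model.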

	adios
		Ata <(|)>.
-- 
         Dr Ata Etemadi, Blackett Laboratory,
         Space and Atmospheric Physics Group,
         Imperial College of Science, Technology, and Medicine,
         Prince Consort Road, London SW7 2BZ, ENGLAND
Internet/Arpanet/Earn/Bitnet: atae@spva.ph.ic.ac.uk
Span                        :  SPVA::atae
UUCP/Usenet                 :  atae%spva.ph.ic@nsfnet-relay.ac.uk
