Newsgroups: comp.ai.neural-nets
From: jimmy@ecowar.demon.co.uk (Jimmy Shadbolt)
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!panix!zip.eecs.umich.edu!newsxfer.itd.umich.edu!gatech!swrinde!pipex!demon!ecowar.demon.co.uk!jimmy
Subject: Re: Measuring generalization 
Distribution: world
References: <370lit$bgi@nz12.rz.uni-karlsruhe.de>
Organization: Econostat
Reply-To: jimmy@ecowar.demon.co.uk
X-Newsreader: Simple NEWS 1.90 (ka9q DIS 1.21)
Lines: 23
Date: Fri, 21 Oct 1994 14:40:17 +0000
Message-ID: <782750417snz@ecowar.demon.co.uk>
Sender: usenet@demon.co.uk

In article <370lit$bgi@nz12.rz.uni-karlsruhe.de> jbr@aifbpirat.aifb.uni-karlsruhe.de writes:

>When trying to compare the generalization capabilities of
>several nets, you might want to look at the following reference:
>Lendaris, G. G.
>"A Proposal for Indicating Quality of Generalization
>when Evaluating ANNs"
>Int. Joint Conf. on NN
>San Diego, 1990
>IEEE Catalog Number 90CH2879-5
>
>One notion missing in that article is that when comparing
>the percentage of correct answers for a given set,
>one should also indicate what exactly is considered to be
>"correct" (e.g. differs from desired output by no more
>than 0.1).
>

That's right - are there other ways to assess the generalisation properties
of non-categorical data apart from regularisation theory?
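For what it's worth, the tolerance criterion mentioned above is easy to make
explicit when reporting results. A minimal sketch (the 0.1 tolerance and the
example values are illustrative, not from any particular net):

```python
def percent_correct(outputs, targets, tol=0.1):
    """Count an output as "correct" if it differs from the
    desired output by no more than `tol`, and return the
    percentage of correct outputs."""
    hits = sum(1 for o, t in zip(outputs, targets) if abs(o - t) <= tol)
    return 100.0 * hits / len(outputs)

# Hypothetical network outputs vs. desired outputs:
outputs = [0.05, 0.92, 0.40, 0.88]
targets = [0.0,  1.0,  0.0,  1.0]

print(percent_correct(outputs, targets))       # tol = 0.1 -> 50.0
print(percent_correct(outputs, targets, 0.2))  # tol = 0.2 -> 75.0
```

The point being that the reported figure depends entirely on the chosen
tolerance, so the tolerance should always be stated alongside the percentage.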

Drago

