Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!news.duq.edu!newsgate.duke.edu!news.mathworks.com!enews.sgi.com!www.nntp.primenet.com!nntp.primenet.com!arclight.uoregon.edu!usenet.eel.ufl.edu!warwick!lboro.ac.uk!usenet
From: Dave the Troll <eldb3@lboro.ac.uk>
Subject: Re: Overfit concept does not fit
Sender: usenet@lboro.ac.uk (Usenet-News)
Message-ID: <3239447F.63E6@lboro.ac.uk>
Date: Fri, 13 Sep 1996 11:24:47 GMT
X-Nntp-Posting-Host: elpc54200.lut.ac.uk
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset=us-ascii
References: <322AE403.2781E494@taux01.nsc.com>
Mime-Version: 1.0
X-Mailer: Mozilla 2.01 (Win95; I)
Organization: Loughborough University, UK.
Lines: 26

Marcelo Krygier wrote:
> 
> You all know NNs can be overtrained/overfitted.
> It should be pointing to some basic problem with the NNs models
> we use. Our neurons, being the model everyone tries to simulate in
> ANNs, work BETTER when shown more examples. ANNs got their weights
> screwed up when overtrained.
> Can anybody explain to me how this fits the ANN model ?


ANNs work _better_ when shown more examples.
ANNs overfit when the _same_ small set of examples is repeated too many
times: the weights end up memorising the training set rather than the
underlying pattern.
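A quick illustration (a minimal modern sketch in Python/numpy, obviously
not something from this thread; the network size, learning rate and data
below are my own illustrative choices): fit a small one-hidden-layer net
to a handful of noisy samples and do many, many passes over the same
points. The training error keeps falling, while the error on held-out
points typically bottoms out and then climbs again.

  # Sketch only: a tiny tanh network, full-batch gradient descent,
  # trained far past the point of generalising. All values illustrative.
  import numpy as np

  rng = np.random.default_rng(0)

  # Ten noisy training samples of sin(x), plus a clean held-out set.
  x_train = rng.uniform(-3, 3, size=(10, 1))
  y_train = np.sin(x_train) + 0.3 * rng.standard_normal((10, 1))
  x_test = np.linspace(-3, 3, 100).reshape(-1, 1)
  y_test = np.sin(x_test)

  H = 30                                  # plenty of capacity to memorise
  W1 = rng.standard_normal((1, H)) * 0.5
  b1 = np.zeros(H)
  W2 = rng.standard_normal((H, 1)) * 0.5
  b2 = np.zeros(1)
  lr = 0.05

  def forward(x):
      h = np.tanh(x @ W1 + b1)
      return h, h @ W2 + b2

  for epoch in range(20001):
      h, y_hat = forward(x_train)
      err = y_hat - y_train
      # Backpropagation for the mean squared error.
      gW2 = h.T @ err / len(x_train)
      gb2 = err.mean(axis=0)
      dh = (err @ W2.T) * (1 - h ** 2)
      gW1 = x_train.T @ dh / len(x_train)
      gb1 = dh.mean(axis=0)
      W2 -= lr * gW2; b2 -= lr * gb2
      W1 -= lr * gW1; b1 -= lr * gb1
      if epoch % 2000 == 0:
          train_mse = float((err ** 2).mean())
          test_mse = float(((forward(x_test)[1] - y_test) ** 2).mean())
          print(f"epoch {epoch:6d}  train {train_mse:.4f}  test {test_mse:.4f}")

The train column is the same ten examples shown over and over; the test
column is what the net actually knows about sin(x).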

An example of overtraining in humans:

A child I knew a few years ago was learning to read; his favourite book 
was a 'Thomas the Tank Engine' story which had been read to him many, 
many times.
While he was reading it to me (and doing very well for his age), I 
accidentally turned two pages at once, and he carried on 'reading' the 
page that had been missed out, word for word, from memory...  One 
overtrained non-artificial neural network.

Dave Barnett
Optical Engineering Group
Loughborough University
