Newsgroups: comp.ai.neural-nets
From: David@longley.demon.co.uk (David Longley)
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!news.sprintlink.net!demon!news2.demon.co.uk!news.demon.co.uk!longley.demon.co.uk!David
Subject: Re: bias
References: <1995Apr28.195623.7533@cm.cf.ac.uk>
Organization: Relational Technology
Reply-To: David@longley.demon.co.uk
X-Newsreader: Demon Internet Simple News v1.29
Lines: 48
X-Posting-Host: longley.demon.co.uk
Date: Tue, 2 May 1995 12:40:50 +0000
Message-ID: <799418450snz@longley.demon.co.uk>
Sender: usenet@demon.co.uk

In article <1995Apr28.195623.7533@cm.cf.ac.uk>
           C.M.Sully@cm.cf.ac.uk "Chris Sully" writes:

> Let's see if I can get this right ...
> 
> I've trained a backpropagation ANN to predict the birthweight of babies given
> 11 predictor variables. I started with 8400ish records in the training set but
> performance was approx. as good with 1000 so I reduced to this, using a
> validation set of 500 records to determine optimum performance on unseen data
> and a test set of 500 records to judge performance.
> 
> The statisticians who provided the data preferred to see the results in terms
> of residuals (actual outputs-predicted outputs). They expected that the 
> mean of the residuals would be zero. It wasn't, being around 10-20 grammes
> (typical birthweights being around 4000 grammes if I recall correctly).
> 
> It seems there is a degree of bias in the model, with the model consisting
> of two parts, the data and the neural network.
> 
> If this is a bias introduced by the network would someone explain how this has
> come about/ introduce me to a few appropriate references/ suggest pathways
> of further investigation.
> 
> Comments on the above would also be most welcome.
> 
> Thanks in advance.
> 
> Chris.
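
First, a quick check on the numbers. With a test set of 500 records, the
standard error of the mean residual is sd/sqrt(n); taking an assumed
residual standard deviation of 450 grammes (the post gives no figure),
that comes to roughly 20 grammes, so a mean residual of 10-20 grammes is
within one standard error of zero and is not, on its own, evidence of a
biased model. A minimal sketch:

    # Back-of-the-envelope check; the 450 g residual standard
    # deviation is an assumption, not a figure from the post.
    import math

    n = 500                      # test-set size, as stated above
    sd = 450.0                   # assumed residual s.d. in grammes
    se = sd / math.sqrt(n)
    print(f"standard error of mean residual: {se:.1f} g")  # ~20.1 g

A formal one-sample t-test of the residual mean against zero would make
the same point with the real residuals.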

Well, as I have tried to elaborate elsewhere (Fragments of Behaviour 1 - 9),
I think something of an almighty mistake has been committed in this new look
at Artificial Neural Networks. Whilst they may be a good model of how we and
other biological systems make sense of data, I am arguing that this is *not*
what we should ideally be modelling if we are interested in *intelligent
systems*. Psychology may go some way towards describing the information
processing skills and deficits of humans and other animals, but this is
likely to turn out to be a catalogue of biases and heuristic strategies
which are in the end irrational and *not* subject to formal, extensional
analysis (just weight/phase spacing). In modelling them we will reproduce
the ability to make approximations and hunches, BUT also the BIASES. Surely
we should instead be using standard statistical technology, which we CAN
formally understand and which we know rests on formal mathematical and
logical principles.
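
As a sketch of that statistical route (synthetic stand-in data throughout;
nothing here comes from the birthweight study itself), an ordinary
least-squares fit makes the residual behaviour easy to reason about:

    # OLS baseline on synthetic data shaped like the problem above:
    # 11 predictors, birthweights around 3500 g. Illustrative only.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1500, 11))
    y = 3500 + X @ rng.normal(scale=50, size=11) \
             + rng.normal(scale=400, size=1500)

    X_train, X_test = X[:1000], X[1000:]
    y_train, y_test = y[:1000], y[1000:]

    ols = LinearRegression().fit(X_train, y_train)
    resid = y_test - ols.predict(X_test)
    print(f"mean test residual: {resid.mean():+.1f} g")

    # With an intercept, OLS residuals average exactly zero on the
    # training data; on held-out data the mean drifts by about
    # sd/sqrt(n) (~400/sqrt(500) = 18 g here), the same sampling
    # effect the network shows.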

Drs. Minsky & Papert... where are you in 1995? Yes, the opacity problem is
real, but surely we want to build *reliable* AI systems, not #human# AI
systems?
(a synopsis of 'Fragments of Behaviour').
-- 
David Longley
