Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!nntp.sei.cmu.edu!news.psc.edu!hudson.lm.com!newsfeed.pitt.edu!gatech!news.sprintlink.net!EU.net!sun4nl!cs.vu.nl!wdwitte
From: wdwitte@cs.vu.nl (Witte de W)
Subject: representation issue
Nntp-Posting-Host: galei.cs.vu.nl
Sender: news@cs.vu.nl
Organization: Fac. Wiskunde & Informatica, VU, Amsterdam
Date: Sun, 23 Jul 1995 11:26:22 GMT
X-Newsreader: TIN [version 1.2 PL2]
Message-ID: <DC62Fy.L32.0.-s@cs.vu.nl>
Lines: 35

I have a representation problem for a backpropagation network.
The output is a single continuous variable. The inputs are series of
0's and 1's. The point is that there are far fewer 1's than 0's.

In order to improve learning and training speed I thought of switching
the representation, so that 0 becomes 1 and 1 becomes 0. The point is
that the cases with a (original) 0 as input have a mean output almost
equal to the mean of the whole sample, whereas the cases with a 1 have
a completely different mean. The table gives an example.
			mean output	count
	all cases	 8.0		100
	input = 0	 7.8		 90
	input = 1	10.0		 10
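For concreteness, the switch I mean is just complementing each input bit. A minimal sketch in Python, with a made-up data set (not my real data) that mimics the table above: mostly 0's with a mean output near the overall mean, and a few 1's with a clearly different mean:

```python
# Hypothetical data imitating the table: mostly 0-inputs, a few 1-inputs.
inputs  = [0, 0, 0, 1, 0, 0, 0, 0, 0, 1]
outputs = [7.5, 8.1, 7.9, 10.2, 7.6, 8.0, 7.8, 7.7, 8.2, 9.8]

# The proposed representation switch: 0 -> 1 and 1 -> 0.
flipped = [1 - x for x in inputs]

def mean(values):
    return sum(values) / len(values)

# Mean output per (original) input value, as in the table.
zeros = [y for x, y in zip(inputs, outputs) if x == 0]
ones  = [y for x, y in zip(inputs, outputs) if x == 1]

print("all cases:", mean(outputs))
print("input = 0:", mean(zeros))
print("input = 1:", mean(ones))
print("flipped:  ", flipped)
```

So the switch does not change the statistics at all; it only changes which of the two groups is the "active" (non-zero) one as seen by the weights.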

So if an input is only rarely activated (i.e. the input is 1), and the
mean of the whole sample is almost the same as the mean of the 0 cases,
wouldn't it be hard for the network to learn the 1's?

I know the bias unit is supposed to take care of this kind of thing,
but if there are many input units with (far) fewer hidden units, I
still see the problem.

Can someone help me out with this problem? Replies to my e-mail will be
summarized and posted to the net.

Thanks in advance,


Wiebe de Witte
(wdwitte@cs.vu.nl)
-- 
---
guns don't kill men, bullets do		
					- Sledge Hammer

