Newsgroups: comp.ai.neural-nets
From: NKL@langf.demon.co.uk (Nick Langford)
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!news.sprintlink.net!peernews.demon.co.uk!langf.demon.co.uk!NKL
Subject: Re: HELP!!! Kohonen algorithm needed
References: <1995Mar20.180648.11236@ludens>
Organization: Ant War Tales
Reply-To: NKL@langf.demon.co.uk
X-Newsreader: Demon Internet Simple News v1.29
Lines: 91
X-Posting-Host: langf.demon.co.uk
Date: Tue, 28 Mar 1995 07:40:20 +0000
Message-ID: <796376420snz@langf.demon.co.uk>
Sender: usenet@demon.co.uk

In article <1995Mar20.180648.11236@ludens>
           benji@ludens.elte.hu "Budai Benjamin" writes:

"..."I need the basic learning algorithm of the Kohonen network. It seems to me
"..."that the algorithm I used is not correct: the network is unable to classify
"..."more than (number of neurons)/2 exemplars.
"..."If anybody has the correct algorithm please post it to me! (Pseudocode is
"..."enough, but Ada, C or C++ would be great.) 
"..."
"..."                                Benji
"..."
"..."                +---------------------------------------+
"..."                |                                       |
"..."----------------|  Budai Benjamin,  ELTE prog-mat II.   |---------------------
"..."                |                                       |
"..."                +---------------------------------------+
"..."

The Kohonen learning algorithm I assume you mean is the one for the Self-
Organising Map (SOM).  In this case the network (the last time I wrote 
one was in 1992) consists of two layers: the input layer and the 
Kohonen map (the output layer).  

Each neuron in the input layer is connected to each neuron in the 
output layer.  When an input is presented to the input layer, the 
winning output neuron is deemed to be the one whose weight vector lies 
closest to the input.  This is determined by calculating the 
Euclidean distance between the input vector and the weight vector 
of each neuron on the Kohonen map.
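
You asked for C, but a short Python sketch of that winner search may be 
easier to translate (all the names here are made up, not from any 
particular library):

```python
import math

def find_winner(x, weights):
    """Return the index of the output neuron whose weight vector is
    closest, in Euclidean distance, to the input vector x.
    weights is a list of weight vectors, one per output neuron."""
    best, best_dist = 0, float("inf")
    for j, w in enumerate(weights):
        d = math.sqrt(sum((xi - wi) ** 2 for xi, wi in zip(x, w)))
        if d < best_dist:
            best, best_dist = j, d
    return best
```

Since sqrt is monotonic you can drop it and compare squared distances; 
the same neuron wins either way.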


When training starts, the random nature of the weights means that a few 
neurons may happen to lie close to a wide range of inputs.  This will 
cause a limited number of neurons in the Kohonen layer to respond, 
always being closer to the input vectors than the rest.  To avoid this 
problem, and to let the map represent only the spread of the input data, 
a *conscience* factor is introduced.  The conscience monitors the number 
of times each neuron has been chosen.  This count is used to calculate a 
frequency quotient (Fj), which in turn is used to calculate a handicap, 
or offset value, (Bj) for the Euclidean distance.

	            1    
        Bj = G * ( ---  - Fj )
                    n

                        j   signifies the neuron on the kohonen map.
                        
                        G   is the frequency multiplier.  
                            
                        n   is the number of neurons in the Kohonen map

Once a neuron has been chosen its handicap is increased, making it 
harder for that neuron to win again, while the handicap of every other 
neuron is reduced.

The handicap is altered by altering the frequency quotient (Fj) ...

For the winning neuron...

      Fj = Fj + K * ( 1 - Fj)

For all other neurons

     Fj = Fj + K * ( 0 - Fj)

                            K is a small multiplying factor (between
                            0 and 1) used to tune training
                             
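
Putting the conscience together with the winner search, a sketch might 
look like this (again, hypothetical names; I've assumed the bias Bj is 
subtracted from the distance, so a frequently-winning neuron's falling 
Bj raises its effective distance, which matches the description above):

```python
def conscience_winner(x, weights, freq, G=10.0, K=0.01):
    """Pick a winner using a conscience mechanism.  freq holds the
    frequency quotient Fj for each neuron and is updated in place.
    G is the frequency multiplier, K the adaptation factor."""
    n = len(weights)
    best, best_score = 0, float("inf")
    for j, w in enumerate(weights):
        d = sum((xi - wi) ** 2 for xi, wi in zip(x, w)) ** 0.5
        b = G * (1.0 / n - freq[j])   # handicap offset Bj
        score = d - b                 # frequent winners get penalised
        if score < best_score:
            best, best_score = j, score
    # Fj of the winner moves toward 1, everyone else's toward 0
    for j in range(n):
        target = 1.0 if j == best else 0.0
        freq[j] += K * (target - freq[j])
    return best
```

With a large enough G, a neuron that has been winning constantly loses 
to a farther-away but under-used neuron, which is exactly the behaviour 
the conscience is there to produce.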

The winning neuron has its weights altered to move its weight vector 
closer to the input.

The difference between each weight of the winning neuron and its 
corresponding input value is multiplied by the winning training rate 
and added to the existing weight.

A number of surrounding neurons on the Kohonen map (known as neighbours) 
also have their weights adjusted, using another training rate which is 
usually lower than the winning rate.  This allows the influence of 
winning neurons to spread and 'self organise' over the whole 2-D Kohonen 
map.

The number of neighbours, their training rates and their geometry in 
relation to the winning neuron may all change over the training cycle.
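
The weight update, winner plus neighbours, could be sketched like so 
(assuming a row-major 2-D grid of neurons, a square neighbourhood, and 
made-up names and rates):

```python
def update_weights(x, weights, winner, grid_w,
                   rate_win=0.5, rate_nbr=0.1, radius=1):
    """Move the winner's weight vector toward the input x at rate_win,
    and its map neighbours toward x at the lower rate_nbr.  Neurons
    are laid out row-major on a grid grid_w neurons wide."""
    wy, wx = divmod(winner, grid_w)
    for j, w in enumerate(weights):
        jy, jx = divmod(j, grid_w)
        # Chebyshev distance on the map defines a square neighbourhood
        map_dist = max(abs(jy - wy), abs(jx - wx))
        if map_dist == 0:
            rate = rate_win
        elif map_dist <= radius:
            rate = rate_nbr
        else:
            continue  # outside the neighbourhood: untouched
        for i in range(len(w)):
            w[i] += rate * (x[i] - w[i])
```

Shrinking radius (and both rates) as training proceeds gives the usual 
coarse-then-fine self-organisation.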


Hope this helps in some way.  Though not pseudo-code, the above 
description did work for me; experimentation with the various quotients 
and training rates will alter the training efficiency.




Nick Langford
