Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!bloom-beacon.mit.edu!uhog.mit.edu!news.kei.com!ub!acsu.buffalo.edu!jn
From: jn@cs.Buffalo.EDU (Jai Natarajan)
Subject: Re: K-Means distribution
Message-ID: <CzF8FB.7M0@acsu.buffalo.edu>
Originator: jn@hadar.cs.Buffalo.EDU
Sender: nntp@acsu.buffalo.edu
Nntp-Posting-Host: hadar.cs.buffalo.edu
Organization: State University of New York at Buffalo/Computer Science
References:  <kerog-1711941413540001@sectrl>
Date: Thu, 17 Nov 1994 16:57:10 GMT
Lines: 18


K-Means :

Let's say you have n nodes or centres.
Initialise them to random values (or to randomly chosen training samples).

Loop:   Assign each training-set sample to the centre closest to it
        (say, in terms of Euclidean distance).
        After assigning all samples, recompute each centre as the mean
        of the samples clustered at that centre.
        Repeat the loop until the cluster allocations don't change
        between consecutive iterations.

Those centres are now your weights at the n nodes.
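The loop above can be sketched in Python like so (plain lists, no
libraries; the function name and parameters are just my own choices
for illustration):

```python
import random

def kmeans(samples, k, max_iter=100, seed=0):
    """Basic k-means: samples is a list of equal-length tuples."""
    rng = random.Random(seed)
    # Initialise the k centres to randomly chosen samples.
    centres = rng.sample(samples, k)
    assignment = None
    for _ in range(max_iter):
        # Assign each sample to its closest centre (squared
        # Euclidean distance gives the same nearest centre).
        new_assignment = []
        for x in samples:
            dists = [sum((a - b) ** 2 for a, b in zip(x, c))
                     for c in centres]
            new_assignment.append(dists.index(min(dists)))
        # Stop when allocations don't change between iterations.
        if new_assignment == assignment:
            break
        assignment = new_assignment
        # Recompute each centre as the mean of its cluster.
        for j in range(k):
            members = [x for x, a in zip(samples, assignment) if a == j]
            if members:  # leave a centre alone if its cluster is empty
                dim = len(members[0])
                centres[j] = tuple(sum(m[d] for m in members) / len(members)
                                   for d in range(dim))
    return centres, assignment
```

On two well-separated 1-D clusters, e.g.
samples = [(0.0,), (0.1,), (0.2,), (10.0,), (10.1,), (10.2,)],
kmeans(samples, 2) recovers one centre near each cluster mean.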

Best of luck

Jai Natarajan
Dept. of Computer Science
SUNY at Buffalo
