Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!godot.cc.duq.edu!news.duke.edu!news.mathworks.com!newsfeed.internetmci.com!news2.cais.net!news.cais.net!van-bc!unixg.ubc.ca!news!cln.etc.bc.ca!chenness
From: chenness@cln.etc.bc.ca (CRAIG HENNESSEY)
Subject: Help on Polar to Cartesian Neural Network.
X-Nntp-Posting-Host: cln.etc.bc.ca
Message-ID: <1996May2.033245.16970@news.etc.bc.ca>
Originator: chenness@cln
Sender: news@news.etc.bc.ca (System Administration)
Reply-To: chenness@cln.etc.bc.ca (CRAIG HENNESSEY)
Organization: The Education Technology Centre of British Columbia. (Canada)
Date: Thu, 2 May 1996 03:32:45 GMT
Lines: 76


Hello all, I was wondering if I could get some help in solving a
Polar Co-ordinate to Cartesian Co-ord. learning Neural Network.

I have a book and it gave me some pseudo code, I wrote a program
and it works lovely, returning two nice negative numbers of which
I can make no sense. I'll post the code at the end of this message.

My question is "Where the heck does the learning part come in?"

1) I send the program two inputs, X and Y.
2) The program weights each one - multiplies it by the weights and sends
	it to each neuron
3) Each neuron does an arc-tan calculation or something, weighs it and
	sends it to the output neurons

(here's what I don't get)

4) I compute the true R and THETA for the given X and Y values.
5) I compare the numbers the network outputted with the values
	calculated above
6) Find the error difference
7) Apply back propagation using the error values.

Now as far as I can tell, the first few iterations, the network just
diddles around with the input, then when I give it the real answer
it compares the answer with what it came up with, and starts to
compute more accurate diddles. At some point around 50k iterations
it comes up with a value that is + or - 1.5% of the correct value.

What's the point? At that point I no longer need the network, since I
had to compute the definitely correct answer anyway? But in that case,
why not just use the computed answer (by this I mean the
x = r*cos(theta) formula or whatever)?
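Just to pin down what I mean by "the computed answer", here it is in
Python (function names are mine; the forward direction, x,y -> r,theta,
is what the network is being trained on):

```python
import math

def to_polar(x, y):
    """Closed-form Cartesian -> polar: the answer the net is asked to learn."""
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    """The inverse: x = r*cos(theta), y = r*sin(theta)."""
    return r * math.cos(theta), r * math.sin(theta)
```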

Perhaps this is just an exercise, having little to do with the NN's
abilities?

All right, any information is appreciated.

Please reply to my e-mail at chenness@cln.etc.bc.ca

Thanks,

Craig Hennessey.

(here's the pseudocode - I got it from 'The New Turing Omnibus' by
A.K. Dewdney, for those who wish to find it (it's a cool book))

input input(1), input(2)
input(3) = 1	(constant bias input - synone has three rows, so a
	third, fixed input must be intended)

for i = 1 to n
	medin(i) = 0
	for j = 1 to 3
		medin(i) = medin(i) + synone(j,i)*input(j)
	medout(i) = hyperbolic tangent(medin(i))

for i = 1 to 2
	output(i) = 0
	for j = 1 to n
		output(i) = output(i) + syntwo(j,i)*medout(j)
	error(i) = target(i) - output(i)

for i = 1 to 2
	for j = 1 to n
		syntwo(j,i) = syntwo(j,i) + rate*medout(j)*error(i)

for i = 1 to n
	sigma(i) = 0
	for j = 1 to 2
		sigma(i) = sigma(i) + error(j)*syntwo(i,j)
	sigmoid(i) = 1 - (medout(i))^2	(tanh derivative is
		1 - tanh^2, so this must use medout, not medin)

for i = 1 to 3
	for j = 1 to n
		delta = rate*sigmoid(j)*sigma(j)*input(i)
		synone(i,j) = synone(i,j) + delta	(was synone(1,j) -
			looks like an index typo)

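In case it helps anyone reproduce this, here's a runnable Python
translation of the pseudocode above, with the fixes applied. The
hidden-layer size, learning rate, starting weights, and training range
are my own guesses, not from the book:

```python
import math
import random

random.seed(0)

N_HIDDEN = 10   # "n" in the pseudocode; my guess, the book may differ
RATE = 0.05     # learning rate; my guess
N_INPUT = 3     # x, y, plus a constant bias input

# small random starting weights
synone = [[random.uniform(-0.1, 0.1) for _ in range(N_HIDDEN)]
          for _ in range(N_INPUT)]
syntwo = [[random.uniform(-0.1, 0.1) for _ in range(2)]
          for _ in range(N_HIDDEN)]

def train_step(x, y):
    """One forward pass plus one backprop update; returns the output errors."""
    inp = [x, y, 1.0]                                 # bias input fixed at 1
    # forward: hidden layer (tanh), then linear output layer
    medin = [sum(synone[j][i] * inp[j] for j in range(N_INPUT))
             for i in range(N_HIDDEN)]
    medout = [math.tanh(v) for v in medin]
    output = [sum(syntwo[j][i] * medout[j] for j in range(N_HIDDEN))
              for i in range(2)]
    # step 4: the true R and THETA, computed directly
    target = [math.hypot(x, y), math.atan2(y, x)]
    # steps 5-6: compare and take the difference
    error = [target[i] - output[i] for i in range(2)]
    # step 7: update the output-layer weights...
    for i in range(2):
        for j in range(N_HIDDEN):
            syntwo[j][i] += RATE * medout[j] * error[i]
    # ...then backpropagate to the input-layer weights (tanh' = 1 - tanh^2)
    for j in range(N_HIDDEN):
        sigma = sum(error[i] * syntwo[j][i] for i in range(2))
        deriv = 1.0 - medout[j] ** 2
        for i in range(N_INPUT):
            synone[i][j] += RATE * deriv * sigma * inp[i]
    return error

# train on random points in the first quadrant
for _ in range(50000):
    train_step(random.uniform(0.1, 1.0), random.uniform(0.1, 1.0))
```

With these settings the errors shrink steadily over the 50k
iterations, which matches what I'm seeing in my own program.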

