Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!oitnews.harvard.edu!purdue!lerc.nasa.gov!magnus.acs.ohio-state.edu!math.ohio-state.edu!uwm.edu!homer.alpha.net!news.mathworks.com!usenet.eel.ufl.edu!col.hp.com!sony!nntp-sc.barrnet.net!news.fujitsu.com!amdahl.com!amd!amd.com!txnews.amd.com!news
From: bridgwtr@vanzandt.amd.com
Subject: Re: RFD: My Learning/Thinking Neural Network
Message-ID: <DD4qnp.895@txnews.amd.com>
Sender: news@txnews.amd.com
Nntp-Posting-Host: corgi
Organization: n/a
X-Newsreader: <WinQVT/Net v3.9>
Date: Fri, 11 Aug 1995 04:47:48 GMT
Lines: 77

In a message dated Wed, 9 Aug 1995 03:55:33, Ken Seergobin 
<ken@psych.utoronto.ca> discusses:

Ken>Okay, you put out the bait and this is me biting...
Ken>

Thank you for the discussion!

Ken>Can you please be a little more explicit in describing
Ken>your NN?

Love to; I just wasn't (previously) sure what to say.

Ken>	1.  Can your model be accurately described as
Ken>	    a Neural Network - a set of interconnected
Ken>	    units/neurons that accumulate knowledge
Ken>	    through a learning/adaptive process?

I've been looking for a good, proper definition of NN: based on yours, I'd say 
it definitely is.  Some may argue that neurons are only biological, but that is 
a matter of definition.

Ken>	2.  I think you have hinted at more complex
Ken>	    units/neurons - is this the case?

The main unit in the paradigm is the node (a neuron with all the supporting 
features integrated into it; that is, the connection information, weights, I/O, 
etc.).  The nodes contain significantly more information than any others we'd 
seen, but we carefully avoided making them senselessly complex, so they are 
efficient and seem to handle all the concerns that a neuron should.
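To make that concrete, a node of that shape might look something like this in 
C.  This is purely my own sketch, not the actual (proprietary) design; the 
field names, the fan-in cap, and the threshold-linear activation are all my 
guesses:

```c
#define MAX_FANIN 16            /* arbitrary cap, just for this sketch */

/* Hypothetical node record: the neuron plus all its supporting
   features in one place (connections, weights, i/o), as described. */
typedef struct Node {
    int    n_inputs;            /* current fan-in */
    int    inputs[MAX_FANIN];   /* indices of source nodes */
    double weights[MAX_FANIN];  /* one weight per connection */
    double output;              /* last computed activation */
} Node;

/* One update step: weighted sum of this node's sources, then a
   simple threshold-linear activation (again, just a placeholder). */
static void node_update(Node *n, const Node *all)
{
    double sum = 0.0;
    for (int i = 0; i < n->n_inputs; i++)
        sum += n->weights[i] * all[n->inputs[i]].output;
    n->output = (sum > 0.0) ? sum : 0.0;
}
```

Keeping everything in one record like this is what I mean by "supporting 
features integrated": no separate connection tables to chase at update time.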

Ken>	3.  Is this in the realm of supervised or
Ken>	    unsupervised learning?  Or, is this somesort
Ken>	    of modular networking that relies on many
Ken>	    learning alogrithms.

We wanted it to be able to learn from direct input (like a student learning his 
times tables) and to be able to learn from example or from usage.  So to use 
ours, you would need to provide training sets (this output should be given for 
this input).  However, as it is exposed to more and more data, it will adapt in 
many ways to deal with data like that better (i.e., more accurately and more 
comprehensively).

Ken>	4.  What type of equipment has this been implemented
Ken>	    on - supercomputer, a network of relatively quick
Ken>	    PCs....?

The implementation work has been on a PC, and not that quick.  To grow it to 
the point of "thinking", it would definitely need many execution clocks - so 
either give it time or run it on a faster system.  No doubt a K5 with a fast & 
large HD should be able to produce extremely exciting results.  I forget how 
many instruction cycles I estimated a node to require; maybe 10,000 for all 
processing.

Ken>	5.  Let's see - No, it's not April 1.

No, very serious :)

Ken>	6.  Again, how 'bout filling in the detail?  

Love to (nothing proprietary, of course)...just not sure what to say.

Ken>	7.  Is your code available or is this something you
Ken>	    plan to patent.

Definitely want to patent it.....but we are always interested in negotiating.

Ken>You realize that noone will believe you unless you say more.

Perhaps at this point we don't care as much about anybody believing anything.  
Really, we just wanted to do it and we did.  And now, I just want to discuss it 
(and related items of interest) and complete the implementation so it can be 
demonstrated.

Thanks again for your discussion.
Best Regards,
Joseph
