Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!oitnews.harvard.edu!purdue!lerc.nasa.gov!magnus.acs.ohio-state.edu!math.ohio-state.edu!jussieu.fr!univ-lyon1.fr!swidir.switch.ch!newsfeed.ACO.net!Austria.EU.net!EU.net!sun4nl!cs.ruu.nl!tijger.fys.ruu.nl!ruunta!lodder
From: lodder@fys.ruu.nl (Arian Lodder)
Subject: Re: RFD: My Learning/Thinking Neural Network
X-Nntp-Posting-Host: ruuny5.fys.ruu.nl
Message-ID: <DD5FJI.5q5@fys.ruu.nl>
Sender: usenet@fys.ruu.nl (News system Tijgertje)
Organization: Physics Department, University of Utrecht, The Netherlands
References:  <DD4qnp.895@txnews.amd.com>
Date: Fri, 11 Aug 1995 13:45:18 GMT
Lines: 146

In article <DD4qnp.895@txnews.amd.com>, bridgwtr@vanzandt.amd.com writes:
> In a message dated Wed, 9 Aug 1995 03:55:33, Ken Seergobin 
> <ken@psych.utoronto.ca> discusses:
> 
> Ken>Okay, you put out the bait and this is me biting...
> 

And me too (it's so tempting).

> Ken>Can you please be a little more explicit in describing your NN?
> 
> Love to, just not (previously) sure what to say

Start by giving some more detail about your definition of `neuron': is it
the basic linear input-activation-output unit, or does it have higher-order
inputs or unusual activation/output functions?
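(By `basic linear' I mean the standard weighted-sum unit; a minimal Python
sketch with names of my own choosing, just so we're talking about the same
thing:)

    import math

    def sigmoid(x):
        # standard squashing activation
        return 1.0 / (1.0 + math.exp(-x))

    def neuron_output(inputs, weights, bias):
        # linear part: weighted sum of the inputs plus a bias term
        net = sum(w * x for w, x in zip(weights, inputs)) + bias
        # nonlinear part: the activation/output function
        return sigmoid(net)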

Then, what is your interconnection scheme (is there any form of clustering,
recurrence, or double links)?

And what is your adaptive scheme: how do you adjust your network over time?

From your previous (vague) posts I've summarized the following (correct me
if I'm wrong): you created a new kind of neuron with extra functionality (?)
which learns with some `constructive learning algorithm.'

If this is correct, you must have defined some kind of cost function that
decides when to add or delete a neuron or layer. So what kind of cost
function are you using?
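(For concreteness, cascade-correlation-style schemes grow the net when the
error plateaus; a hypothetical Python skeleton, where `train_one_epoch' and
`add_hidden_unit' stand in for whatever your algorithm actually does:)

    def train_constructive(net, examples, max_epochs=1000,
                           tolerance=1e-4, patience=5):
        prev_error = float("inf")
        stalled = 0
        for epoch in range(max_epochs):
            error = net.train_one_epoch(examples)  # adjust weights (assumed)
            if prev_error - error < tolerance:     # no real improvement
                stalled += 1
            else:
                stalled = 0
            if stalled >= patience:                # error has plateaued,
                net.add_hidden_unit()              # so grow the net (assumed)
                stalled = 0
            prev_error = error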

> Ken>	1.  Can your model be accurately described as
> Ken>	    a Neural Network - a set of interconnected
> Ken>	    units/neurons that accumulate knowledge
> Ken>	    through a learning/adaptive process?
> 
> I've been looking for a good, proper definition of NN: based on yours,
> I'd say it definitely is.  Some may argue that neurons are only biological;
> but that is a matter of definition.

For now, I'm not interested in formal definitions (if you can't give them);
we all speak English, so a more verbal description will satisfy (at least)
my curiosity.
 
> Ken>	2.  I think you have hinted at more complex
> Ken>	    units/neurons - is this the case?
> 
> The main unit in the paradigm is the node (a neuron, with all the supporting 
> features integrated into it; that is the connection information, weights, i/o, 
> etc).  The nodes contain significantly more information than any others we'd 
> seen, except we carefully avoided making them senselessly complex, so they are 
> efficient and seem to handle all the concerns that a neuron should.

What are these `concerns'? You claim your neuron contains more information
(more than what?); does that mean a neuron itself has some internal state,
a memory? And are your outputs single-valued or not?
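(By `state' I mean something like the following, purely illustrative and
not a claim about your design: the output depends on a memory that persists
between calls, unlike a plain feed-forward unit.)

    class StatefulNeuron:
        def __init__(self):
            self.memory = 0.0                  # persistent internal state
        def fire(self, net_input, decay=0.9):
            self.memory = decay * self.memory + net_input
            return max(0.0, self.memory)       # output depends on history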
 
> Ken>	3.  Is this in the realm of supervised or
> Ken>	    unsupervised learning?  Or, is this somesort
> Ken>	    of modular networking that relies on many
> Ken>	    learning alogrithms.
> 
> We wanted it to be able to learn from direct input (like a student learning
> his times tables) and be able to learn from example or from usage.  So to use
> ours, you would need to provide training sets (this output should be given
> for this input).  However, as exposed to more and more data, it will adapt 
> in many ways to be able to deal with data like that better (i.e. more
> accurately and more encompassing).

So that is supervised learning (input-output pairs), but with some on-line
learning capability?
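(By on-line I mean per-example updates; e.g. the classic Widrow-Hoff delta
rule, sketched below in Python as a stand-in for whatever your adaptive
scheme really is:)

    def online_update(weights, x, target, lr=0.1):
        # one delta-rule step after a single input/output pair
        y = sum(w * xi for w, xi in zip(weights, x))   # linear unit output
        error = target - y
        return [w + lr * error * xi for w, xi in zip(weights, x)]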

> Ken>	4.  What type of equipment has this been implemented
> Ken>	    on - supercomputer, a network of relatively quick
> Ken>	    PCs....?
> 
> The implementation work has been on a PC, and not that quick.  To grow it to 
> the point of "thinking", it would definitely need many execution clocks - so
> either give it time or run on a faster system.  No doubt a K5 with a fast &
> large HD should be able to produce extremely exciting results.  I forget how
> many instruction cycles I estimated a node to require; maybe 10000 for all
> processing.

OK, let's work through an example:

Your THINKING neural network contains 3000 neurons (on average); at your
estimate of 10,000 cycles per node, one output calculation will take
approximately 30,000,000 cycles.

Then you have to decide whether your answer corresponds to the desired
output, and if not, add/delete neurons or layers. Creating/deleting and
adjusting the network will take some cycles too, but say it's much less
than 30,000,000.

You have a training set (a set to learn to think from?) of about 10,000
examples (probably a few tens of thousands short). Processing this set just
once will take 300,000,000,000 cycles.

Let's suppose it's only necessary to run through the set 1000 times ...
and you're already talking about 3 x 10^14 cycles.

You have a reasonably fast PC (say 100 MHz), so this will take you about
3,000,000 seconds: roughly one month!!
Just to train a small network!!

(Please point out any wrong assumptions I've made.)
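For the record, here is the whole estimate in one place, so you can point
at the exact step where an assumption goes wrong (all figures are the ones
assumed above):

    cycles_per_node = 10000        # your own per-node estimate
    neurons = 3000
    examples = 10000
    passes = 1000
    clock_hz = 100e6               # 100 MHz PC

    cycles_per_output = neurons * cycles_per_node    # 3.0e7
    cycles_per_pass = examples * cycles_per_output   # 3.0e11
    total_cycles = passes * cycles_per_pass          # 3.0e14
    seconds = total_cycles / clock_hz                # 3.0e6 seconds
    print(seconds / 86400.0, "days")                 # about 35 days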

>
> Ken>	7.  Is your code available or is this something you
> Ken>	    plan to patent.
> 
> Definitely want to patent it... but we are always interested in
> negotiations.

With whom, and about what? You haven't said anything yet!
 
> Ken> You realize that no one will believe you unless you say more.
> 
> Perhaps at this point we don't care as much about anybody believing
> anything. Really, we just wanted to do it and we did. And now, I just
> want to discuss it (and related items of interest) and complete the
> implementation so it can be demonstrated.

You're NOT interested in anybody believing you, so why bother posting
or discussing it?

If this whole claim is true, it would be very interesting. But if not,
you're wasting your credibility and that of AMD (if that is indeed your
company and your mail address isn't phony).

We have several years of experience with constructive learning algorithms,
and what you are claiming sounds too good to be true.

Arian

--
_______________________________________________________________________

       _/_/   _/_/_/   _/_/_/   _/_/   _/    _/ A.W. Lodder
    _/    _/ _/    _/   _/   _/    _/ _/_/  _/ A.W.Lodder@fys.ruu.nl
   _/_/_/_/ _/_/_/_/   _/   _/_/_/_/ _/  _/_/ phone:
  _/    _/ _/  _/     _/   _/    _/ _/    _/       +31 (0)30 - 53 2955
 _/    _/ _/    _/ _/_/_/ _/    _/ _/    _/ fax  : +31 (0)30 - 53 7555

University of Utrecht, Department of Computer Topics in Physics 
Princetonplein 5 / P.O. Box 80000, 3508 TA Utrecht, The Netherlands
_______________________________________________________________________

