Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!news.mathworks.com!fu-berlin.de!zib-berlin.de!uni-duisburg.de!news.Rhein-Ruhr.De!nostromo!krok
From: krok@nostromo.rhein-ruhr.de (Matthias Krok)
Subject: Re: Some thoughts about NNs...
X-Newsreader: TIN [UNIX 1.3 941216BETA PL0]
Organization: Home, sweet home
Message-ID: <DC1Cuq.F8@nostromo.rhein-ruhr.de>
References: <DBvK65.371@nostromo.rhein-ruhr.de>
Date: Thu, 20 Jul 1995 22:23:14 GMT
Lines: 143

The following is my reply to a mail that jmarkus@drmail.dr.att.com sent me
about my article "Some thoughts about NNs...":


From krok@nostromo.rhein-ruhr.de Fri Jul 21 00:20:57 1995
To: jmarkus@drmail.dr.att.com


Hi John !

On Wed, 19 Jul 1995 jmarkus@drmail.dr.att.com wrote:

> Your first question was basically can you have networks with more
> than one hidden layer.

Well... yes ;)

> Yes.  I have not done a lot of reading on
> these though I know it is possible.  Last semester, I attended a
> NN talk, and one of the people in the talk suggested that the speaker
> use a network with two hidden layers rather than one.  I believe he
> worked for some NN company, and they've found that two hidden layer
> networks seem to handle noisy data better than one hidden layer.
> Although he did claim that their observation was purely empirical,
> and not based on any principle.

Somebody told me - as an answer to my article - that there are many 4- or
5-layer networks, and that it can be mathematically proved that a network
with 3 hidden layers can make every possible decision. That has something
to do with the space of solutions... a network with only input and output
layers can only learn linear correlations (it can't do XOR, for example),
a network with one hidden layer can learn a 2-dimensional solution
surface, a network with 2 hidden layers can learn a 3d solution space,
and one with 3 hidden layers can learn everything... but I'm not sure if
I understood this correctly; I've asked him about this...
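
At least the bottom of that hierarchy can be tested directly: a net with
no hidden layer is a linear separator and provably cannot do XOR, while
one hidden layer already handles it. A minimal sketch in Python/NumPy;
the layer sizes, seed, and learning rate are my own arbitrary choices,
not anything he described:

```python
import numpy as np

# One hidden layer learning XOR, which a net with NO hidden layer
# (a single perceptron) provably cannot. Hindsight note: by the
# universal approximation theorem one hidden layer already suffices
# for any continuous function; extra layers change how compactly a
# function is expressed, not what is reachable in principle.

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])            # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # 2 -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # 4 -> 1 output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(30000):                    # plain full-batch backprop
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # error * sigmoid derivative
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print((out > 0.5).astype(int).ravel())    # XOR truth table, if training converged
```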

> It is also possible to have an analog NN.  After all, all the nodes
> basically do is add the sum of the inputs.  This could be incorporated
> in an analog circuit.  Thus the output of the network could be based
> on real time of the input.  Thus, you could just leave the network
> running, and not restart it.  However, in most cases digital is much
> easier to work with, and in many cases, a tenth of a second is a small
> enough chunk of time that the output would seem to be analog in nature.
> It just depends on the application.  If you are checking the output
> of the network every minute or so, polling the network every 1/10 second
> would (I would guess) be as good as analog signals.

well... the same German as above told me that there are a) continuously
working NNs that don't need to be restarted at an interval but really
work continuously, and b) networks that don't have separate learning and
working modes, but can really learn AND work in the same mode, like an
animal. They have an additional "supervised learning mode", but also a
mode to learn AND work. I also can't really imagine how such a thing
could work, but I've asked him...
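
For what it's worth, the usual textbook picture of such a continuously
working unit is an activation that follows a differential equation
instead of being recomputed in discrete restart cycles. A rough sketch;
the time constant, input profile, and step size are all invented for
the demo:

```python
# Leaky-integrator unit: tau * da/dt = -a + input, integrated with
# small Euler steps, so the activation charges up while the input is
# on and decays smoothly after it switches off -- no "restart" anywhere.

tau, dt, a = 0.1, 0.001, 0.0
trace = []
for step in range(1000):                    # one simulated second
    inp = 1.0 if step * dt < 0.5 else 0.0   # input switches off at t=0.5s
    a += dt / tau * (-a + inp)              # leaky-integrator dynamics
    trace.append(a)

print(round(trace[499], 2), round(trace[-1], 2))   # charged up, then decayed
```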

He also told me that there are networks that don't need to be trained
with given outputs. He says that such a network just needs input, and
without being given the wanted output it can learn and work... I don't
understand this at all. But I've asked... ;)
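
What he describes sounds like what is usually called unsupervised
learning: the net gets inputs only and organizes itself. A minimal
sketch of one such rule, winner-take-all competitive learning - the
cluster positions, rates, and initialization are made up, and this is
only one of several unsupervised schemes:

```python
import numpy as np

# Two units whose weight vectors drift toward whichever inputs they
# win; no desired output is ever given, yet the units end up sitting
# on the two input clusters.

rng = np.random.default_rng(1)
data = np.vstack([rng.normal([0, 0], 0.1, (50, 2)),    # cluster A
                  rng.normal([1, 1], 0.1, (50, 2))])   # cluster B
W = data[[0, 50]].copy()   # start each unit at a data point (avoids dead units)

for _ in range(20):
    for x in rng.permutation(data):
        winner = np.argmin(np.linalg.norm(W - x, axis=1))
        W[winner] += 0.1 * (x - W[winner])   # move only the winner toward x

print(np.round(W, 1))   # each row has drifted to one cluster centre
```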

> You then ask about INSTINCT and LUST.  I have seen these types of motivators
> used in robotics, but not really instilled in neural networks.  You could

Yes, and there's a difference. See below.

> always use these in conjunction.  Say you use LUST as a motivator.  You
> could hardwire your robot to lust for the sunlight.  You could have
> two solar detectors, one in front of the robot, and one in back.  Then if
> the network gives a command to move the robot, you could adjust this movement
> by adding in a component that will try to drive the robot in the direction
> of the solar panel with the highest reading.  This way, the robot would
> 'instinctively' try to move toward the light, even though the network would

Yes, this would be some kind of INSTINCT, but not LUST, as you said
above. There IS a difference.

> tell it otherwise.  Example.  The robot would want to head toward light.
> So the robot may instinctively try to move into a fire.  However, the 
> network may detect the fire and tell the robot to stay away from the flame.
> So even though fire is bad for the robot, it may instinctively try to 
> go toward it.

Yes, instinctively. That is right: it is possible to implement instincts
in such a way that the motoric part is controlled by some function other
than the NN itself. Many instincts and reflexes work this way. When you
put your finger into a candle flame, you'll pull it out immediately,
without thinking about it. This is a similar "workaround" around your
brain, and it's not a bad idea. BUT: what I meant with LUST is something
that has to be implemented in the NN (at least partly). Your "go to the
light" instinct works, but there is no way to put any intelligence into
this instinct. Yes, instincts have nothing to do with intelligence.
Right. But lust has. Lust influences your intelligence to think about
something, to find a way, to do something, etc...
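
Your hardwired "go to the light" term can be sketched in a few lines,
and the sketch shows exactly my point: the instinct is added OUTSIDE the
NN, which never sees or learns it. The function name and gain below are
hypothetical:

```python
# Hardwired phototaxis "instinct": the motor command is the network's
# output plus a fixed bias toward the brighter solar detector. The NN
# itself has no access to this term and cannot adapt it.

def motor_command(nn_output, front_light, back_light, instinct_gain=0.5):
    """Positive command = forward. The instinct term pushes toward light."""
    instinct = instinct_gain * (front_light - back_light)
    return nn_output + instinct

# Even if the network says "reverse" (-0.2), strong light ahead
# (0.9 vs 0.1) drags the robot forward: -0.2 + 0.5*0.8 = 0.2.
print(motor_command(-0.2, 0.9, 0.1))
```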

I'm not only talking about such simple instincts as "go to the light".
Imagine a real TASK you want to implement as an instinct; that may be
"pick up the trash" or, analogous to eating, "recharge your batteries".
I'd like to have a mechanism that makes the NN WANT to do something and
lets the NN think about it, to find a way to do it! You have the
instinct to have sex, but this is not as simple as going into the sun.
This instinct, no: lust, influences your intelligence to think about how
to flirt with a beautiful woman, makes you write poetry, and so on...
This is lust, something that makes you WANT something. And when you WANT
something, you'll use your intelligence to reach it. By implementing a
workaround like "go to the light" above, you make it impossible for the
NN to realize what is good and what is bad.

Ok, now you know my very anthropological (?) dreams... I know about the
problems of these dreams: how do you make a NN WANT _anything_??? I
don't know. The problem is that the task is not to make your NN prefer
some particular pattern of OUTPUT; you have to make it prefer some
particular pattern of INPUT! But to do THIS, you would first need a NN
that is able to realize the correlation between its own output and its
input.

-> Question: Has anyone built such a NN? A NN that first produces random
output, to see the influence on its input, and then tries to optimize
the INPUT with regard to some given aspects. If such a NN were possible,
the network would have its own WILL !!!
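
To make the question concrete, here is a toy version of what I mean,
with the "world" reduced to a lookup table I invented: the agent first
emits random output, records which input each output produces, and then
prefers the output whose resulting input best matches a wanted pattern:

```python
import random

# Toy "own will" loop: random exploration, then pick the action whose
# observed INPUT (sensor reading) is closest to the wanted input.
# The world mapping and the target value are entirely made up.

world = {"left": 0.2, "stay": 0.5, "right": 0.9}   # hidden from the agent
wanted_input = 1.0                                  # the "lust" target

random.seed(0)
observed = {}
for _ in range(30):                     # exploration: random output...
    action = random.choice(list(world))
    observed[action] = world[action]    # ...and watch the resulting input

# exploitation: choose the action whose input best matches the target
best = min(observed, key=lambda a: abs(observed[a] - wanted_input))
print(best)
```

The agent never receives a "correct output"; it only prefers a pattern
of input, which is exactly the inversion described above.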

> I've also heard of some network-type learning schemes that are like
> telling the network 'good dog' or 'bad dog' depending on the situation.
> You could incorporate your radio control in this manner to condition it.
> However, I cannot remember the exact method name, but I do know it 
> exists.

Tell me more! I'm very interested in any method of telling the network
that something was GOOD or BAD. Please tell me more.
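
Judging by the description, the "good dog / bad dog" scheme is what the
literature calls reinforcement learning: the teacher gives only a scalar
reward, never the correct output. A minimal sketch; the reward rule,
rate, and seed are invented for the demo:

```python
import math
import random

# A single stochastic unit chooses between two actions and raises the
# probability of whichever one earns "good dog" (+1) rather than
# "bad dog" (-1). Only the scalar reward is ever fed back.

random.seed(2)
pref = 0.0                                   # preference for action 1
for _ in range(500):
    p1 = 1.0 / (1.0 + math.exp(-pref))       # probability of picking action 1
    action = 1 if random.random() < p1 else 0
    reward = 1.0 if action == 1 else -1.0    # teacher: action 1 is "good dog"
    pref += 0.1 * reward * (action - p1)     # reward-weighted (REINFORCE-style)

print(round(p1, 2))   # drifts toward 1.0: the unit learns to be a "good dog"
```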

> Unfortunately I've not been involved in NNs lately, so I am not knowledgeable
> of recent research.  It's also been about a year since my last class, so
> even though I'm knowledgeable about networks, I don't know the references off
> the top of my head.  If nobody else can send you in the right direction,
> e-mail me back and I can look through some of my old stuff and see if I
> can scrounge up some references on what I'm talking about.  It's been
> a while, so I can't promise anything.

I'll post this mail back to the newsgroup, because there are some
questions that would better be asked there.


--
in this spirit... Matthias                                PGP-Key available !
