Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news4.ner.bbnplanet.net!news3.near.net!paperboy.wellfleet.com!news-feed-1.peachnet.edu!usenet.eel.ufl.edu!news.mathworks.com!newsfeed.internetmci.com!news.sprintlink.net!howland.reston.ans.net!Germany.EU.net!zib-berlin.de!uni-duisburg.de!news.Rhein-Ruhr.De!nostromo!krok
From: krok@nostromo.rhein-ruhr.de (Matthias Krok)
Subject: Some thoughts about NNs...
X-Newsreader: TIN [UNIX 1.3 941216BETA PL0]
Organization: Home, sweet home
Message-ID: <DBvK65.371@nostromo.rhein-ruhr.de>
Date: Mon, 17 Jul 1995 19:15:40 GMT
Lines: 92

Hi !

First, a little introduction: I'm very interested in neural networks in
general and have read many books over the last years, but so far I haven't
tried to program one myself. I'm also not very familiar with the details of
the learning mechanisms (back-propagation, etc.). I only know that "teaching
a NN" means putting something on the input neurons, comparing the output
with the wanted output, and then adjusting the weights (is "weight" the
right word? English is not my native language...) of the neuronal
connections. But I do know the abstract scheme, the three levels of neurons,
the history of NNs, and their qualities, advantages and disadvantages - even
if my knowledge about their possibilities may be a bit outdated. But this is
exactly why I'm writing now.
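Just to show what I mean by "adjust the weights": here is a tiny sketch of
that idea (my own toy illustration - a single linear neuron with a simple
delta rule, not real back-propagation; the learning rate 0.1 is an arbitrary
choice):

```python
# Minimal sketch of "compare output with wanted output, adjust weights":
# one linear neuron trained with the delta rule (NOT full back-propagation).

def train_step(weights, inputs, target, lr=0.1):
    """One learning step: forward pass, error, weight update."""
    output = sum(w * x for w, x in zip(weights, inputs))  # neuron's answer
    error = target - output                               # wanted minus actual
    # nudge each weight in the direction that shrinks the error
    return [w + lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(100):
    weights = train_step(weights, [1.0, 2.0], 5.0)  # learn: 1*w0 + 2*w1 = 5
```

After enough steps the neuron's output for the input [1.0, 2.0] sits very
close to the wanted value 5.0.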

I have some thoughts about modifications to neural networks that may be
interesting. First: two neuron levels are too few (as two people - Minsky
and Papert, if I remember right - proved for the perceptron some decades
ago), and the middle level in a three-level network usually symbolizes...
simpler features than the output neurons do. I mean, for example, in a NN
that was trained for pattern recognition, the middle-level neurons symbolize
simple lines or edges at particular angles, etc...

From this fact the conclusion suggests itself that a neural network should
be able to "understand" more complex correlations if there are more levels!
It is obvious that you could - theoretically - connect two NNs, where the
first one reads text from a video eye, and the second makes - let's say -
weather forecasts. If you connect them right, you would have a machine that
could make weather forecasts based upon a digitized picture of a sheet of
paper with the weather information on it. Very simplified, but I'm trying to
make clear: this combined network would have 5 levels, and it would be able
to understand correlations that are too complex for a 3-level network.

My main question is: has anybody done some research on this? Maybe my
information is really outdated, and the neural networks in use today all
have more than 3 levels. Or is it impossible to train networks with more
than three levels? If so, why? I just want to know if anybody has already
(!) thought about this.
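To make the "connecting two NNs" idea concrete, here is a little sketch of
my own (a toy illustration, not any real system): a forward pass is just a
composition of level functions, so gluing two networks together is nothing
more than concatenating their lists of weight levels - 3 levels or 5, the
same code runs through them:

```python
import math

def level(inputs, weights):
    """One level of neurons: weighted sums pushed through a squashing function."""
    return [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

def forward(inputs, levels):
    """Run the input through every level in turn, however many there are."""
    for weights in levels:
        inputs = level(inputs, weights)
    return inputs

net_a = [[[0.5, -0.5], [1.0, 1.0]]]   # the "reading" network (made-up weights)
net_b = [[[2.0, -1.0]]]               # the "forecast" network (made-up weights)
combined = net_a + net_b              # connecting them = concatenating levels
print(forward([1.0, 0.0], combined))
```

Whether such a deep stack can actually be *trained* is exactly my question
above - running it forward is the easy part.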

My second thought is not so close to neural networks themselves: I thought
about the main difference between an animal brain and a neural network.
Besides the difference in complexity, the obvious difference is that a NN
(or I should say ANN, for artificial NN) just interprets the input, produces
output, and then its work is done - like a command-line interpreter (shell)
on a computer. It may be super-powerful, but you tell it what to do (or what
to analyse), it does it, and then it ends. If you want more, you have to
provide new inputs, start it again, and you get new output. When I think of
a neural network that should control - let's say - a little autonomous robot
with some sensors, that alone seems insufficient. You could train it to make
certain movements in particular situations, based on the sensor input, but
this thing won't have the slightest idea of memory, and won't be able to
learn... (I guess that in such a robot, you would start your neural network
every tenth of a second, or something...).
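That "start it every tenth of a second" idea would look roughly like this
control loop (purely hypothetical - the sensor and motor functions are
made-up stand-ins), and it shows the problem nicely: nothing carries over
from one tick to the next:

```python
import time

def network(sensor_values):
    """Stand-in for the trained ANN: pure input -> output, no internal state."""
    return [1.0 if s > 0.5 else -1.0 for s in sensor_values]

def read_sensors():            # hypothetical sensor interface
    return [0.7, 0.2]

def drive_motors(commands):    # hypothetical motor interface
    pass

for tick in range(3):          # in the real robot: while True
    drive_motors(network(read_sensors()))
    time.sleep(0.1)            # restart the net every tenth of a second
    # each pass is independent: no memory, no learning between ticks
```

Every tick is a fresh evaluation - the robot reacts, but it never remembers.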

There must be more. Not only does a NNN (natural NN) not need to be started
every tenth of a second (that comes from its massively parallel processing,
but that's not _such_ a difference in quality) - it also has certain aims.
The brain of an animal is not only a connection of some neurons in some
levels; there are mechanisms that provide INSTINCT and PLEASURE !!! And
these two things (which are very similar to each other) are what our little
robot would need. When I say pleasure, I mean a principle that would allow
it to JUDGE certain situations. Let's say our robot has solar cells on its
head; then its accumulator is recharged when it's in the sun. That's good
for the robot, but how should it know? One cannot implement declarative
knowledge like "when your energy cell is empty, you should go into the sun"
in an ANN. In a NNN this knowledge is realized by pleasure: you feel great
pleasure when you're eating (except when you're not hungry). You feel even
greater pleasure when having sex, because this is - biologically seen - the
final aim of every being. Instinct is similar: when you're hungry, your
instinct makes you search for something to eat, even if you didn't know (in
your declarative memory) that eating stops the hunger. An ANN cannot know
this, so it would need instinct to do it. This pleasure and instinct would
have to be mechanisms behind the ANN - in the background. It wouldn't know
"why" it feels good now, but at some point it would recognize the connection
between the sun and its energy cell (if it is complex enough), and it would
intentionally go into the sun.

If you could steer that pleasure signal from outside (by radio control), you
would be able to make the robot do whatever you want. You would condition
it !!!
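What I imagine could be sketched like this (again purely my own toy
illustration; the actions and the single sensor are made up): an external
reward ("pleasure") signal scales the weight update, so whatever behaviour
gets rewarded is strengthened - and whoever controls the reward controls
the robot:

```python
import random

def act(weights, sensors):
    """Pick the action whose weighted score is highest."""
    scores = [sum(w * s for w, s in zip(row, sensors)) for row in weights]
    return scores.index(max(scores))

def condition(weights, sensors, action, pleasure, lr=0.5):
    """Pleasure (positive or negative) strengthens/weakens the chosen action."""
    for i, s in enumerate(sensors):
        weights[action][i] += lr * pleasure * s
    return weights

# two possible actions: 0 = "go into the sun", 1 = "stay in the shadow"
weights = [[0.0], [0.0]]
for _ in range(20):
    action = random.choice([0, 1])           # the robot tries things out
    pleasure = 1.0 if action == 0 else -1.0  # sun recharges -> feels good
    weights = condition(weights, [1.0], action, pleasure)
print(act(weights, [1.0]))  # prints 0: it has learned to prefer the sun
```

The network never "knows" why the sun is good - the background pleasure
signal alone shapes its behaviour, which is exactly the conditioning I mean.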

And now the same question as above: does anybody know of some research, or
even of some thoughts, on this topic - an AI machine, based on an artificial
neural network, with instincts and pleasure, so that it could be
conditioned ???


Please mail a copy of your answer to me (to be sure that I won't miss it).

thanx in advance.


-- 
in this spirit... Matthias			 	  PGP-Key available !
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
