PADO
Real Signal Understanding through Algorithm Evolution

The Objective

The goal of the PADO work is to take a small set of real-world signals (e.g., video images), learn to distinguish between the described classes, and produce as output a set of programs that, together, can successfully distinguish between new examples from those classes.

In general, the process of evolving programs to perform any form of learning can be summarized as a loop that generates candidate programs, evaluates them on training data, and selects and varies the best of them to form the next generation.

More specifically, the main loop of PADO's learning process is outlined in the sketch below. PADO can learn to distinguish between signals of any type, but the signals referred to here could easily be (and often are) images taken from the real world.
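
As a rough illustration, the sketch below shows one way such an evolutionary main loop might look. The helper names (random_program, fitness, crossover, mutate) and the population and generation counts are illustrative placeholders, not PADO's actual primitives or settings.

```python
import random

POP_SIZE = 100     # candidate programs per generation (illustrative)
GENERATIONS = 50   # number of generations to evolve (illustrative)

def evolve(signals, labels, random_program, fitness, crossover, mutate):
    """Generic evolutionary main loop for learning classifier programs.

    signals/labels are the training examples; the remaining arguments are
    problem-specific operators, passed in here as stand-ins rather than
    PADO's actual primitives.
    """
    population = [random_program() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Evaluate every candidate program on the training signals.
        ranked = sorted(population,
                        key=lambda p: fitness(p, signals, labels),
                        reverse=True)
        # Keep the better half as parents (simple truncation selection).
        parents = ranked[:POP_SIZE // 2]
        # Build the next generation by recombining and mutating parents.
        population = parents[:]
        while len(population) < POP_SIZE:
            a, b = random.sample(parents, 2)
            population.append(mutate(crossover(a, b)))
    # The best surviving programs are later combined during orchestration.
    return sorted(population,
                  key=lambda p: fitness(p, signals, labels),
                  reverse=True)[:POP_SIZE // 2]
```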

PADO stands for Parallel Algorithm Discovery and Orchestration

In the orchestration phase of the PADO learning system, many programs (all of which have been learned through the evolutionary process) are combined in order to extract higher level information about the signals they have learned to examine.
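
One simple way to picture orchestration is as confidence-weighted voting: each learned program examines a signal and adds to the score of the class it recognizes. The sketch below illustrates only that general idea; it is not PADO's Search-Weight or Nearest-Neighbor strategy, and the (class_label, program, weight) layout is an assumption made for the example.

```python
def orchestrate(programs, signal):
    """Combine many learned programs' outputs into one class prediction.

    `programs` is assumed to be a list of (class_label, program, weight)
    triples, where program(signal) returns a confidence in [0, 1] that the
    signal belongs to that class.  This weighted vote is only a stand-in
    for PADO's actual orchestration strategies.
    """
    scores = {}
    for class_label, program, weight in programs:
        confidence = program(signal)
        scores[class_label] = scores.get(class_label, 0.0) + weight * confidence
    # Predict the class with the highest combined weighted confidence.
    return max(scores, key=scores.get)
```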

Neural Programming and Internal Reinforcement

Gradient-descent backpropagation in Artificial Neural Networks (ANNs) is an appealing learning method because it gives ANNs a clear, locally optimal update procedure. Genetic programming (GP) is another successful learning technique; it provides powerful parameterized primitive constructs and uses evolution as its search mechanism. Unlike ANNs, though, GP has no such principled procedure for changing parts of a learned structure based on that structure's past performance. In our work we have introduced "Neural Programming", a connectionist representation for evolving parameterized programs. Neural Programming allows credit and blame assignment to be generated during the process of learning programs. We have further introduced "Internal Reinforcement" as a general, informed feedback mechanism for Neural Programming, and we have demonstrated its increased learning rate through illustrative experiments.
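
To make the representation concrete, the sketch below encodes a program as a graph of parameterized primitive nodes whose values flow along arcs. The particular fields, the primitive names, and the acyclic evaluation are simplifying assumptions for illustration, not the actual Neural Programming data structures.

```python
from dataclasses import dataclass, field

@dataclass(eq=False)   # identity-based hashing so nodes can key a Credit-Blame map
class NPNode:
    """One node in a (simplified) Neural Programming graph.

    Each node applies a parameterized primitive to the values arriving on
    its incoming arcs and passes the result along to the nodes that use it.
    """
    primitive: str                                # e.g. "pixel-mean" or "add" (assumed names)
    params: tuple = ()                            # parameters of the primitive
    inputs: list = field(default_factory=list)    # incoming arcs (other NPNodes)
    credit: float = 0.0                           # later filled in by the Credit-Blame map

def evaluate(node, signal, primitives):
    """Evaluate a node by first evaluating its inputs (acyclic fragments only)."""
    args = [evaluate(child, signal, primitives) for child in node.inputs]
    return primitives[node.primitive](signal, node.params, args)
```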

Neural Programming programs are evolved through the same main loop outlined above; Internal Reinforcement fits into that picture by providing informed feedback that guides how a program is changed between generations.

One example fragment from a Neural Programming program "foveates": it repeatedly focuses its attention to find the part of a video image that minimizes the pixel variance in that region.
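
A minimal procedural sketch of that foveation behavior (not the actual Neural Programming fragment) might look like the following; the window size, step size, number of steps, and use of NumPy are all assumptions made for the example.

```python
import numpy as np

def foveate(image, window=16, step=4, steps=20):
    """Repeatedly move a small attention window toward lower pixel variance.

    Starting from the image center, try shifting the window a little in each
    direction and keep the position whose pixels have the lowest variance.
    Returns the final (row, col) of the window's top-left corner.
    """
    h, w = image.shape[:2]
    r, c = (h - window) // 2, (w - window) // 2

    def variance(r, c):
        return float(np.var(image[r:r + window, c:c + window]))

    for _ in range(steps):
        candidates = [(r + dr, c + dc)
                      for dr in (-step, 0, step) for dc in (-step, 0, step)
                      if 0 <= r + dr <= h - window and 0 <= c + dc <= w - window]
        r, c = min(candidates, key=lambda rc: variance(*rc))
    return r, c
```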

Now that we have this Neural Programming representation, we can create a mechanism for internal reinforcement. Internal Reinforcement of Neural Programs (IRNP) has two main stages. The first stage labels each node and arc of a program with its perceived contribution to the program's output; this set of labels is collectively referred to as the Credit-Blame map for that program. The second stage uses this Credit-Blame map to change the program in ways that are likely to improve its performance.
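
In code, those two stages might be sketched as follows, reusing the illustrative NPNode structure from above. How credit is actually propagated through a program and how the Credit-Blame map actually biases program changes are far richer in IRNP; the assign_credit and make_replacement functions here are assumed placeholders.

```python
import random

def build_credit_blame_map(program_nodes, assign_credit):
    """Stage 1: label every node with its perceived contribution in [0, 1].

    (Full IRNP labels arcs as well as nodes; this sketch labels nodes only.)
    """
    return {node: assign_credit(node) for node in program_nodes}

def internally_reinforce(program_nodes, credit_blame, make_replacement):
    """Stage 2: change the program where the map assigns the most blame.

    Nodes with low credit are the most likely to be rewritten, so changes
    concentrate on the parts of the program judged least useful.
    """
    for node in program_nodes:
        blame = 1.0 - credit_blame[node]
        if random.random() < blame:      # more blame -> more likely to change
            node.primitive, node.params = make_replacement(node)
```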

An Example Experiment

In the following experiment, real video images (stills) of seven different everyday objects were used to train the PADO learning system.

The learned system was then tested on other video stills of the same objects, stills it had never seen before.

Here are the results with and without IRNP. The first set of results shows that PADO performs much better with IRNP than without it. We also see that on this particular domain (in which 14.2% is all that can be obtained by randomly guessing a class for each image) PADO, using an orchestration strategy called Search-Weight, reaches a generalization performance of 65% by generation 35 and is continuing to improve.

Here are further results with and without IRNP in the same domain. These results again show that PADO performs better with IRNP than without it. We also see that, using an orchestration strategy called Nearest-Neighbor, PADO learns to generalize with a performance of 75% correct by generation 45 and is continuing to improve.