Machine Learning

Deng/Lee/Maron/Moore/Schneider

Locally weighted learning (also known as memory-based learning, instance-based learning, and lazy learning; closely related to kernel density estimation, similarity search, and case-based reasoning).

Locally weighted learning is simple and appealing, both intuitively and statistically, and the idea has been around since the turn of the century. When you want to predict what is going to happen in the future, you simply reach into a database of all your previous experiences, grab some similar experiences, and combine them (perhaps by a weighted average that weights more similar experiences more strongly) to make a prediction, perform a regression, or carry out more sophisticated operations; a minimal sketch of the core computation appears below. We like this approach, especially for learning process dynamics or robot dynamics, because it is very flexible (low bias): provided we have plenty of data, we will eventually get an accurate model.
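As a concrete illustration, here is a minimal locally weighted regression sketch in Python (NumPy). The Gaussian kernel, the bandwidth value, and the function names are our illustrative choices, not taken from the papers or packages linked on this page:

    import numpy as np

    def lwr_predict(X, y, x_query, bandwidth=1.0):
        """Locally weighted linear regression at a single query point.
        X: (n, d) stored inputs; y: (n,) stored targets."""
        # Weight each stored experience by its similarity to the query.
        d2 = np.sum((X - x_query) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        # Weighted least squares: scale each row by sqrt(weight) and solve.
        A = np.hstack([np.ones((X.shape[0], 1)), X])   # intercept column
        sw = np.sqrt(w)[:, None]
        beta, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
        return np.concatenate([[1.0], x_query]) @ beta

    # Toy usage: recover a noisy sine curve at x = 1.0.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
    print(lwr_predict(X, y, np.array([1.0]), bandwidth=0.5))  # close to sin(1)

Nothing is "learned" ahead of time: all the work happens at query time, which is why this family of methods is also called lazy learning.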

A paper that surveys locally weighted learning, discussing many approaches to defining similarity along with their statistical interpretations.

A simple, easy-to-use locally weighted regression implementation.

GMBLMAIN and GGMBLMAIN: Efficient locally weighted learning packages.

Vizier: a commercial locally weighted learning system.

Locally weighted learning applied to control of processes and dynamic systems.

We have also developed new non-linear control algorithms that exploit instance-based models capable of predicting in real time. We are interested in very fine-grained, highly non-linear control tasks, such as robot billiards (where better than 0.1% accuracy is needed), that linearized control methods would be unable to attain; a toy sketch of model-based action selection follows.
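To make the idea concrete, here is a toy one-step greedy controller in Python that picks actions using any learned forward model (for example, the lwr_predict sketch above, fit on state-action to next-state data). This is an assumed illustration of model-based action selection, not the lab's actual control algorithm:

    import numpy as np

    def greedy_action(model, state, goal, candidate_actions):
        """Pick the candidate action whose predicted next state is closest
        to the goal. `model(state, action)` is any learned forward model."""
        best_a, best_cost = None, np.inf
        for a in candidate_actions:
            pred = model(state, a)              # predicted next state
            cost = np.sum((pred - goal) ** 2)   # squared distance to target
            if cost < best_cost:
                best_a, best_cost = a, cost
        return best_a

    # Toy usage with a known linear plant standing in for a learned model.
    plant = lambda s, a: s + 0.1 * a
    print(greedy_action(plant, np.array([0.0]), np.array([1.0]),
                        [np.array([u]) for u in np.linspace(-5, 5, 41)]))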

Our instance-based learning systems have also been applied to robot juggling and to many industrial dynamics problems, such as "how are viscosity, moisture, temperature, and two dozen other variables related during the stages of my cooking process?"

A paper that describes how locally weighted learning can be applied to process control.

Autonomous parameter tuning and model selection with cross-validation

Aggressive model selection is important to our stated goal of autonomous statistics, and we have pushed it hard. This has led to practical cascaded approaches to cross-validation, in which searches for good models occur over increasingly large "onion-peel"-style spaces of algorithms. Within each layer, models are selected by leave-one-out cross-validation; the best layer is then judged by an outer level of cross-validation on further independent test sets. A sketch of the inner leave-one-out layer appears below.
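For illustration, a minimal Python version of the inner leave-one-out layer, reusing the lwr_predict sketch above; the candidate bandwidth list and function names are our assumptions, not the lab's:

    import numpy as np

    def loo_score(X, y, bandwidth):
        """Mean leave-one-out squared error of lwr_predict (defined above)."""
        errs = []
        for i in range(len(X)):
            mask = np.arange(len(X)) != i       # hold out point i
            pred = lwr_predict(X[mask], y[mask], X[i], bandwidth)
            errs.append((pred - y[i]) ** 2)
        return np.mean(errs)

    def select_bandwidth(X, y, candidates=(0.1, 0.3, 1.0, 3.0)):
        """Inner model-selection layer: the bandwidth with lowest LOO error."""
        return min(candidates, key=lambda h: loo_score(X, y, h))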

We have pushed model selection further with "racing" algorithms that evaluate many models in parallel and use blocked comparisons to prune away, early on, unpromising models that with very high confidence will not eventually evaluate as the best; a toy version follows.
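A toy Hoeffding-style race in Python, to show the flavour; the bound, the pruning rule, and all names here are our illustrative assumptions rather than the published racing algorithm:

    import numpy as np

    def race(models, eval_on_point, points, delta=0.05):
        """Evaluate all surviving models on each point in turn (a blocked
        comparison) and drop any model whose confidence interval sits
        entirely above the current leader's. Losses assumed in [0, 1]."""
        survivors = list(models)
        losses = {m: [] for m in survivors}
        for t, p in enumerate(points, start=1):
            for m in survivors:
                losses[m].append(eval_on_point(m, p))
            means = {m: np.mean(losses[m]) for m in survivors}
            # Hoeffding half-width, union-bounded over all candidates.
            eps = np.sqrt(np.log(2 * len(models) / delta) / (2 * t))
            best = min(means.values())
            survivors = [m for m in survivors
                         if means[m] - eps <= best + eps]
        return survivors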

An early paper on the practical issues of intense cross-validation (new research, algorithms and analysis due shortly...)

GMBLMAIN, GGMBLMAIN and Vizier: software systems that tune themselves with intense cross-validation.

A short paper on "Racing": the use of blocking techniques to accelerate intense model selection.

A longer journal survey of racing approaches.
