Wed Sep 28, 12:00, WeH 1327
Topic: Kd-trees for efficient regression
Kan Deng*
Kernel Regression is a well-known memory-based technique for function
approximation. It works by storing all {input, output} training
patterns in memory. To predict an output for a new input pattern, it
computes a weighted average of some or all of the outputs in memory,
where each weight is a function of the distance from the corresponding
stored input to the query.
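As background, the weighted-average prediction described above can be sketched as follows. This is an illustrative example only, not the speaker's implementation; the Gaussian kernel, the function name, and the bandwidth parameter are all assumptions.

```python
import numpy as np

def kernel_regression(X, y, query, bandwidth=1.0):
    """Predict the output for `query` as a weighted average of the stored
    outputs y, with Gaussian weights that decay with the distance from
    each stored input in X to the query. (Illustrative sketch; the
    Gaussian kernel and bandwidth are assumptions.)"""
    dists = np.linalg.norm(X - query, axis=1)    # distance from each stored input to the query
    weights = np.exp(-(dists / bandwidth) ** 2)  # kernel weight: larger when closer
    return np.dot(weights, y) / np.sum(weights)  # weighted average of stored outputs
```

Note that every stored pattern contributes a weight, which is exactly the per-query cost that grows with the size of the memory.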
The drawback of Kernel Regression is that computing these weights
can be expensive when the memory is large. In this talk, we
demonstrate that the efficiency of Kernel Regression can be greatly
improved by the use of kd-trees. There have previously been several
attempts to use kd-trees to speed up Kernel Regression and nearest
neighbor methods. Our algorithm is new, and I will explain its
advantages in the talk.
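The speaker's new algorithm is not described in this abstract. For context, a standard kd-tree recursively splits the stored inputs on one coordinate axis at a time, which lets a query prune whole subtrees whose bounding region is provably too far away. A minimal sketch of this classical idea (not the talk's algorithm; all names here are assumptions):

```python
import numpy as np

class KDNode:
    """One node of a kd-tree: a stored point, the axis it splits on,
    and the two subtrees on either side of the splitting plane."""
    def __init__(self, point, index, axis, left, right):
        self.point, self.index, self.axis = point, index, axis
        self.left, self.right = left, right

def build_kdtree(points, indices=None, depth=0):
    """Recursively split on the median along a cycling axis."""
    if indices is None:
        indices = list(range(len(points)))
    if not indices:
        return None
    axis = depth % len(points[0])
    indices.sort(key=lambda i: points[i][axis])
    mid = len(indices) // 2
    return KDNode(points[indices[mid]], indices[mid], axis,
                  build_kdtree(points, indices[:mid], depth + 1),
                  build_kdtree(points, indices[mid + 1:], depth + 1))

def nearest(node, query, best=None):
    """Branch-and-bound nearest neighbor: descend toward the query's
    side first, and explore the far side only when the splitting plane
    is closer than the best distance found so far."""
    if node is None:
        return best
    d = float(np.linalg.norm(np.asarray(node.point) - query))
    if best is None or d < best[0]:
        best = (d, node.index)
    diff = query[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, query, best)
    if abs(diff) < best[0]:  # far subtree may still hide a closer point
        best = nearest(far, query, best)
    return best
```

On well-distributed data this pruning avoids touching most of the memory per query, which is the kind of saving the talk exploits for regression.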
In addition to regression, the framework of kd-trees holds potential
for many other applications. We will discuss these at the end of the
talk.
* joint work with Andrew Moore