Color Constancy Using KL-Divergence


Reference Information:

Charles Rosenberg, Martial Hebert, Sebastian Thrun, "Image Color Constancy Using KL-Divergence", presented as a poster at ICCV 2001.

Abstract. Full paper available in PostScript format (770K) and in PDF format (590K).



Figure 1: panels (a) through (e).


Figure 2: Uncorrected, Best Fit, Likelihood, KL-Divergence.

An example of an image where the KL algorithm works particularly well is the "book" image, captured here under the MB-5000 illuminant. The "Uncorrected" image is the raw, uncorrected capture. The other images are renderings of how the image would look if corrected using the illumination parameters estimated by each algorithm. The "Best Fit" image is the best possible rendition given the ground truth and a diagonal illumination model. Here the likelihood algorithm achieved an uncompensated error of 0.2756, while the KL algorithm achieved an error of 0.0312.

Note that this effect was observed over all of the images of the "book" object under all illuminants in the test set.
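For readers who want to reproduce renderings like these, the following is a minimal sketch, in Python/NumPy, of a diagonal illumination correction and of a chromaticity-based error measure. The function names, the RGB normalization, and the RMS chromaticity error are illustrative assumptions; the paper's exact definition of the uncompensated error may differ.

import numpy as np

def diagonal_correct(image, illuminant_rgb):
    """Scale each channel independently so the estimated illuminant maps to neutral gray.
    image: H x W x 3 float array with values in [0, 1].
    illuminant_rgb: length-3 estimate of the illuminant color."""
    illuminant_rgb = np.asarray(illuminant_rgb, dtype=float)
    gains = illuminant_rgb.mean() / illuminant_rgb   # per-channel diagonal gains
    return np.clip(image * gains, 0.0, 1.0)

def chromaticity_error(estimate, truth):
    """RMS difference of the (r, g) chromaticities of two illuminant estimates.
    One plausible error measure; the paper's uncompensated error may be defined differently."""
    def rg(v):
        v = np.asarray(v, dtype=float)
        return (v / v.sum())[:2]
    return float(np.sqrt(np.mean((rg(estimate) - rg(truth)) ** 2)))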


Ball 1 Image: Uncorrected, Best Fit, Likelihood, KL-Divergence.

There was also a single image of one specific object in our test set on which the KL algorithm did much worse than the likelihood algorithm: it achieved an uncompensated error of 0.2144 versus 0.1033 for the likelihood-based algorithm. In this image of the "ball1" object, one of the colors had a very low measured intensity under halogen illumination, and the KL algorithm aligned neutral gray with one of the primary colors in the image. It should be noted that the mean uncompensated error over all illuminants was 0.1431 versus 0.1394, so the two algorithms performed similarly on this object overall.

Note that this effect was only observed for this specific image of the "ball1" object under halogen illumination.
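This failure mode is easier to interpret with a sketch of the estimation step itself: each candidate illuminant corrects the image, the corrected colors are histogrammed, and the candidate whose histogram is closest in KL divergence to the canonical color distribution is selected. The sketch below is a minimal illustration of that idea in Python/NumPy; the bin count, the candidate grid, the divergence direction, and the function names are assumptions rather than the paper's exact formulation.

import numpy as np

def color_histogram(image, n_bins=8, eps=1e-6):
    """Normalized 3-D RGB histogram of an H x W x 3 image with values in [0, 1]."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=n_bins, range=[(0.0, 1.0)] * 3)
    hist = hist + eps                       # smooth empty bins so the KL term stays finite
    return hist / hist.sum()

def kl_divergence(p, q):
    """KL(p || q) for two normalized histograms of the same shape."""
    return float(np.sum(p * np.log(p / q)))

def estimate_illuminant(image, candidate_illuminants, canonical_hist):
    """Return the candidate illuminant whose diagonally corrected image is closest,
    in KL divergence, to the canonical color distribution."""
    best, best_score = None, np.inf
    for illum in candidate_illuminants:
        illum = np.asarray(illum, dtype=float)
        corrected = np.clip(image * (illum.mean() / illum), 0.0, 1.0)
        score = kl_divergence(canonical_hist, color_histogram(corrected))
        if score < best_score:
            best, best_score = illum, score
    return best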


Test Images:

The images used for the evaluation in this paper were collected by the Computational Vision Lab at Simon Fraser University.

All 110 of these images were used for evaluation. Note that the image sets are named "model" and "test" to follow the naming convention used by SFU. In this work that distinction was not utilized, and all of the images in the "model" and "test" sets were used as test images for the algorithm, as in the sketch below.
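As one concrete way to pool the two sets, here is a tiny sketch; the directory names and the .ppm pattern are assumptions about a local mirror's layout, not part of the original setup.

import glob

test_paths = sorted(glob.glob("sfu_images/model/*.ppm") +
                    glob.glob("sfu_images/test/*.ppm"))
print(len(test_paths), "images pooled for evaluation (expected 110)")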

Here is a local mirror of the images in PPM format:

Here are the cropped versions of these images that were used for the evaluation in this paper:

Notes about the cropped image set:

The following file contains both the ground truth estimated from these files and the results for the KL algorithm on these images:


Training Images:

Following are some examples of the approximately 2300 images used to train the canonical color distribution.

For the complete set please contact: Charles Rosenberg.
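To make the training step concrete, here is a minimal sketch of accumulating a canonical RGB histogram over a directory of training images. The directory pattern, the 8-bins-per-channel histogram, and the smoothing constant are assumptions for illustration; the representation and preprocessing actually used may differ.

import glob
import numpy as np
from PIL import Image

def train_canonical_distribution(pattern="training_images/*.jpg", n_bins=8, eps=1e-6):
    """Accumulate a normalized 3-D RGB histogram over all images matching the pattern."""
    hist = np.zeros((n_bins,) * 3)
    for path in glob.glob(pattern):
        img = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
        counts, _ = np.histogramdd(img.reshape(-1, 3), bins=n_bins, range=[(0.0, 1.0)] * 3)
        hist += counts
    hist += eps                             # smooth bins never observed in training
    return hist / hist.sum()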


Charles Rosenberg
Created: Wed Dec 29 21:28:11 EST 1999 -- Last modified: Thu Jan 31 13:50:05 EST 2002