# Color Constancy Using KL-Divergence

Reference Information:

Charles Rosenberg, Martial Hebert, Sebastian Thrun, "Image Color Constancy Using KL-Divergence", presented as a poster at ICCV 2001.

Abstract. Full paper available in PostScript format (770K) and in PDF format (590K).

Figure 1:

[Figure 1: five plots, (a) through (e), described below.]

• Plot (a) is a graphic representation of the log color space: log(R/B) is on the vertical axis and log(G/B) is on the horizontal axis, with the minimum value for both axes in the upper-left corner.
• Plot (b) is a graphic representation of the probability distribution of canonical colors; brighter regions are regions of higher probability. The overall brightness has been boosted to show structure.
• Plot (c) is the observed color distribution for the book image under the MB-5000 illuminant shown in Figure 2.
• Plot (d) is the likelihood-based illuminant posterior.
• Plot (e) is the equivalent plot for the KL-divergence.
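The log color space and the histogrammed color distributions of plots (a)–(c) can be sketched in NumPy as follows. This is a minimal sketch: the bin count, axis range, and the small epsilon used to avoid taking the log of zero are illustrative assumptions, not values from the paper.

```python
import numpy as np

def log_chromaticity(rgb, eps=1e-6):
    """Map an (N, 3) array of RGB pixels into the 2-D log color space
    of Figure 1, with coordinates (log(G/B), log(R/B))."""
    r, g, b = rgb[:, 0] + eps, rgb[:, 1] + eps, rgb[:, 2] + eps
    return np.stack([np.log(g / b), np.log(r / b)], axis=1)

def chromaticity_histogram(coords, bins=64, lo=-4.0, hi=4.0):
    """Bin log-chromaticity coordinates into a normalized 2-D histogram,
    an empirical color distribution like plots (b) and (c).
    The bin range [lo, hi] is an assumed choice."""
    hist, _, _ = np.histogram2d(coords[:, 0], coords[:, 1],
                                bins=bins, range=[[lo, hi], [lo, hi]])
    hist += 1e-9  # keep all bins nonzero so KL divergence is finite
    return hist / hist.sum()
```

A brighter histogram bin then corresponds directly to a higher-probability region in plots (b) and (c).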

Figure 2:

[Figure 2: four images, left to right: Uncorrected, Best Fit, Likelihood, KL-Divergence.]

An example of an image where the KL algorithm works particularly well is the "book" image, captured here under the MB-5000 illuminant. The "Uncorrected" image is the raw uncorrected image. The other images are renderings of how the image would look if corrected given the illumination parameters estimated by the specific algorithm. The "Best Fit" image is the best possible rendition, given the ground truth and a diagonal illumination model. Here the likelihood algorithm achieved an uncompensated error of 0.2756 and the KL algorithm achieved an error of 0.0312.

Note that this effect was observed over all of the images of the "book" object under all illuminants in the test set.
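The corrected renderings in Figure 2 assume a diagonal illumination model, under which correction reduces to an independent gain per color channel. A minimal sketch under that assumption, where the per-channel gain vector is taken as given (estimating it is the job of the likelihood or KL algorithm):

```python
import numpy as np

def correct_diagonal(image, gains):
    """Apply a diagonal illumination model: scale each color channel
    independently by the corresponding gain. `gains` is a hypothetical
    length-3 vector mapping the estimated illuminant to the canonical one."""
    corrected = image.astype(float) * np.asarray(gains, dtype=float)
    return np.clip(corrected, 0.0, 255.0)  # assumes 8-bit display range
```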

Ball 1 Image:

[Four images, left to right: Uncorrected, Best Fit, Likelihood, KL-Divergence.]

There was also a single image of one specific object in our test set for which the KL algorithm did much worse than the likelihood algorithm: it achieved an uncompensated error of 0.2144, versus 0.1033 for the likelihood-based algorithm. In this image of the "ball1" object, one of the colors had a very low measured intensity under halogen illumination, and the KL algorithm aligned neutral gray with one of the primary colors in the image. It should be noted that the mean uncompensated error over all illuminants was 0.1431 versus 0.1394, so the two algorithms performed similarly on this object overall.

Note that this effect was only observed for this specific image of the "ball1" object under halogen illumination.
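The comparisons above contrast likelihood-based and KL-divergence-based illuminant estimation. Under a diagonal model, an illuminant change is approximately a translation in the log color space, so candidate illuminants can be scored by shifting the observed histogram and measuring its KL divergence from the canonical distribution. The following sketch illustrates that scoring step; the discrete shift search and the direction of the KL divergence are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) between two normalized histograms of the same shape.
    Assumes all bins are strictly positive (e.g. smoothed with a tiny constant)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def pick_illuminant(observed_hist, canonical_hist, candidate_shifts):
    """Score candidate illuminants as integer bin shifts (dy, dx) of the
    observed log-chromaticity histogram; return the shift minimizing the
    KL divergence to the canonical distribution, plus its score."""
    best, best_score = None, np.inf
    for dy, dx in candidate_shifts:
        shifted = np.roll(observed_hist, (dy, dx), axis=(0, 1))
        score = kl_divergence(shifted, canonical_hist)
        if score < best_score:
            best, best_score = (dy, dx), score
    return best, best_score
```

The "ball1" failure mode described above corresponds to this search locking onto a spurious minimum: a shift that aligns neutral gray with a strong primary color can score better than the true illuminant when one color channel is nearly unmeasured.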

Test Images:

The images used for the evaluation in this paper were collected by the Computational Vision Lab at Simon Fraser University.

All 110 of these images were used for evaluation. Note that the image sets are named "model" and "test" to follow the naming convention used by SFU. In this work those distinctions were not utilized, and all of the images in the "model" and "test" sets were used as test images for the algorithm.

Here is a local mirror of the images in PPM format:

Here are the cropped versions of these images that were used for the evaluation in this paper:

Notes about the cropped image set:

• The images named "*obj.ppm" are the images of the objects that were used for the evaluation.
• The images named "*illum.ppm" are the images of the objects that were used for the illuminant estimation.
• Unfortunately, when I started this work, I cropped these images by hand, so there is no simple formula to map from the original images to these cropped versions.

The following file contains both the ground truth estimated from these files and the results for the KL algorithm on these images:

Training Images:

Following are some examples of the approximately 2,300 images used to train the canonical color distribution.
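One plausible way to aggregate the training images into a single canonical color distribution is to average their per-image log-color-space histograms and renormalize. This is a sketch under that assumption, not necessarily the paper's exact aggregation or smoothing procedure:

```python
import numpy as np

def canonical_distribution(histograms, smoothing=1e-9):
    """Combine a list of per-image 2-D color histograms into one
    canonical distribution by summing and renormalizing. The additive
    smoothing constant (assumed) keeps every bin nonzero so that later
    KL-divergence computations remain finite."""
    total = np.sum(histograms, axis=0) + smoothing
    return total / total.sum()
```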