Compression Research


Introduction

This page contains information and links that are a small sampling of the compression research that has been performed using the Lena image.

Links to other people's research which has utilized the Lena image are welcome. If you would like me to add a link to your page, please send e-mail to: chuck@cs.cmu.edu


Research Links

Fiji Art Renditions of Lenna - more info

Color Image Quantization

Fractal Image Compression

Investigations of Image Compression Using Multisplines

Non-Uniform Sampling and Interpolation for Lossy Image Compression

Vector Quantization

Wavelets and Filterbanks


Non-Uniform Sampling and Interpolation for Lossy Image Compression

A set of experiments was performed with a lossy image compression algorithm that utilizes non-uniform sampling and interpolation (NSI) of the image intensity surface. The goal of this work was to create an asymmetric lossy compression algorithm, with low decompression complexity and potentially higher compression complexity. The algorithm non-uniformly samples the image data in two dimensions. The number of samples chosen, and hence the compression ratio, is based on a supplied error metric threshold and on local image features. The technique uses a greedy sample point selection algorithm and then revisits the chosen sample points, jittering them for a better fit. Decompression consists of a linear interpolation between sample points.
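To make the sampling-and-interpolation idea concrete, the following is a minimal 1-D sketch in Python. It greedily keeps only the sample points needed for linear interpolation to stay within an error threshold and then reconstructs by interpolating between them. The function names, the scan order of the greedy pass, and the threshold value are illustrative assumptions, and the jitter refinement pass and the 2-D handling of the actual algorithm are omitted.

import numpy as np

def nsi_compress_1d(signal, threshold):
    """Greedily keep sample points so that linear interpolation between
    consecutive kept points never deviates from `signal` by more than
    `threshold`. Returns the indices and values of the kept samples."""
    n = len(signal)
    kept = [0]                        # always keep the first point
    last = 0
    for i in range(2, n):
        # Tentatively span from the last kept point to point i and check
        # the maximum interpolation error over the skipped interval.
        xs = np.arange(last, i + 1)
        interp = np.interp(xs, [last, i], [signal[last], signal[i]])
        if np.max(np.abs(interp - signal[last:i + 1])) > threshold:
            kept.append(i - 1)        # error too large: keep the previous point
            last = i - 1
    kept.append(n - 1)                # always keep the last point
    idx = np.array(kept)
    return idx, signal[idx]

def nsi_decompress_1d(idx, values, n):
    """Decompression is just linear interpolation between the kept samples."""
    return np.interp(np.arange(n), idx, values)

# Toy usage: a piecewise-linear signal with a sharp edge needs few samples,
# and the samples that are kept cluster around the edge.
x = np.linspace(0.0, 1.0, 256)
signal = np.where(x < 0.5, 64.0 * x, 200.0 - 50.0 * x)
idx, vals = nsi_compress_1d(signal, threshold=0.5)
recon = nsi_decompress_1d(idx, vals, len(signal))
print(f"kept {len(idx)} of {len(signal)} samples, "
      f"max error {np.max(np.abs(recon - signal)):.3f}")

Because the error check is local, more sample points end up clustered around sharp intensity transitions, which is how the bit rate adapts to local image features.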

The original (uncompressed) Lena image.

The Lena image compressed to 1.0 bit per pixel using NSI.

A scaled plot of the difference between the original image data and the 1.0 bpp NSI compressed Lena image. Magnitude zero errors are represented by neutral gray.

The white pixels indicate the position of sample points taken by NSI to compress the Lena image to 1.0 bpp.

A set of experiments found this algorithm to be, on average, 2.5 dB worse in terms of PSNR than JPEG at the same bit rate. Decompression was significantly faster, 10x to 20x faster than a DCT-based algorithm; compression, however, was two to three times slower.
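For reference, PSNR for 8-bit images is 10*log10(255^2 / MSE), so a 2.5 dB deficit corresponds to roughly 1.8x the mean squared error of the JPEG result at the same bit rate. A minimal Python version of the metric (the standard definition, not the code used in these experiments) is:

import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    a = np.asarray(original, dtype=np.float64)
    b = np.asarray(reconstructed, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)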


A plot of the intensity data of line 266 of the original (uncompressed) Lena image.


A plot of the intensity data of line 266 of the 1.0 bpp NSI compressed Lena image.

Some investigation was also made into utilizing the compressed data storage format to aid in other common processing tasks. An enhanced scaling algorithm was devised. This algorithm used the proximity of the chosen sample points to one another to detect edges, allowing bi-linear interpolation to be used to scale relatively smooth areas and a form of pixel replication to be used for edge pixels, resulting in improved scaled image quality.

The locations of the sample points found by the compression algorithm are used to detect image edges. This information can be used to maintain edge fidelity when scaling up image data, and results in improved fidelity over standard algorithms.
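The following is a minimal Python sketch of the edge-aware scaling idea described above, under the assumption that "edge" pixels are simply those whose neighborhood contains a dense cluster of NSI sample points. The 3x3 density window, the threshold, and the fixed 2x scale factor are illustrative choices, not parameters from the publications.

import numpy as np

def scale_2x_edge_aware(image, sample_mask, density_thresh=3):
    """Upscale `image` by 2x. Where the 3x3 neighborhood of a source pixel
    holds at least `density_thresh` sample points (a likely edge), use pixel
    replication; elsewhere use bilinear interpolation. Returns a float array."""
    h, w = image.shape
    # Count sample points in each 3x3 neighborhood of the source grid.
    padded = np.pad(sample_mask.astype(int), 1)
    density = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3))
    out = np.empty((2 * h, 2 * w), dtype=np.float64)
    for oy in range(2 * h):
        for ox in range(2 * w):
            sy, sx = oy / 2.0, ox / 2.0              # source coordinates
            iy, ix = int(sy), int(sx)
            if density[iy, ix] >= density_thresh:
                out[oy, ox] = image[iy, ix]          # replicate across the edge
            else:
                # Bilinear interpolation in smooth regions (clamped at borders).
                y0, x0 = min(iy, h - 2), min(ix, w - 2)
                fy, fx = min(sy - y0, 1.0), min(sx - x0, 1.0)
                out[oy, ox] = ((1 - fy) * (1 - fx) * image[y0, x0] +
                               (1 - fy) * fx * image[y0, x0 + 1] +
                               fy * (1 - fx) * image[y0 + 1, x0] +
                               fy * fx * image[y0 + 1, x0 + 1])
    return out

# Toy usage: 2x upscale of a small ramp image with a sharp vertical edge.
img = np.tile(np.r_[np.linspace(0, 100, 8), np.full(8, 255.0)], (16, 1))
mask = np.zeros(img.shape, dtype=bool)
mask[:, 7:9] = True                                  # dense samples at the edge
print(scale_2x_edge_aware(img, mask).shape)          # (32, 32)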

For more detailed information, please refer to these publications:

Walter Bender and Charles Rosenberg, "Image Enhancement Using Non-Uniform Sampling", SPIE Image Handling and Reproduction Systems Integration, vol. 1460, pp. 59-70, February 1991.

Charles Rosenberg, "A Lossy Compression Algorithm Based on Nonuniform Sampling and Interpolation of the Image Intensity Surface", SID International Symposium Digest of Technical Papers, vol. 21, pp. 388-391, September 1990.



Last Update: 9/2/97