1. Halftoning as an Inverse Problem
2. Halftoning with Blue Noise
3. The Gibbs Sampler
4. Summary

The lowest level in the construction of the internal representation of V can be modeled by a Gaussian filter with standard deviation σ, say G_σ. That is, it is assumed that all further processing is based on a blurred version of the image. This blurring is a result of the finite size of visual receptors and early processing in the human visual system.

The standard deviation of the filter depends on the distance of the viewer from the image. Now, the halftoned image, seen from a distance, should resemble the original image as much as possible---even if the original image was seen closer up. So the Gaussian filter applied to the halftoned image is naturally larger than the Gaussian filter applied to the original image. In fact, here it will be assumed that the width of the Gaussian filter applied to the original image is 0, so that an H(i,j) must be found such that G_σ{H(i,j)} = I(i,j).
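This criterion can be turned into a measurable quantity. The sketch below (a minimal illustration, assuming NumPy, an isotropic Gaussian, and edge-replicating boundaries---none of which are dictated by the text) computes the blurred-match error between a candidate binary image H and the original I, which is exactly the quantity a halftoning algorithm should drive down:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Isotropic Gaussian filter, normalized to sum to 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def halftone_error(H, I, sigma=1.5, radius=3):
    """Mean absolute difference between the blurred halftone G{H}
    and the continuous-tone original I (both 2-D arrays in [0, 1])."""
    k = gaussian_kernel(sigma, radius)
    r = radius
    Hp = np.pad(H.astype(float), r, mode="edge")  # replicate borders
    blurred = np.zeros(I.shape, dtype=float)
    for dy in range(-r, r + 1):                   # direct convolution,
        for dx in range(-r, r + 1):               # no SciPy dependency
            blurred += k[dy + r, dx + r] * Hp[
                r + dy : r + dy + I.shape[0], r + dx : r + dx + I.shape[1]
            ]
    return np.abs(blurred - I).mean()
```

A checkerboard matches a constant 0.5 field almost perfectly after blurring, while an all-black image is off by the full 0.5 everywhere---the kind of distinction the criterion is meant to capture.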

This approach has been exploited frequently, sometimes unconsciously, in halftoning algorithms. It was explicitly used in the *blue noise* error diffusion algorithms of Ulichney (Ulichney, 1987). Blue noise is noise in which the low-frequency components have been suppressed. Using an algorithm that generates binary images with blue noise to approximate the original image gives a high-quality halftoned image.

Ulichney's algorithm works by the common method of *error diffusion*. In error diffusion the processing order of the image is fixed (i.e., either raster or serpentine-raster order), and each pixel value is thresholded to determine whether the halftoned image should have the value 0 or 1. Then, the error between the halftoned value and the image value is added to image pixels yet to be processed.

These images have blue noise as an artifact of processing; the blue noise criterion is not explicitly derived. Computing the error value at a pixel and adding it to pixels yet to be processed tends to keep nearby pixels from having the same halftoned value. This increases the spatial frequency of the noise, making it more blue.
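For concreteness, here is a minimal sketch of the error diffusion loop described above, using raster order and the classic Floyd-Steinberg weights as one illustrative choice (Ulichney's study covers many weight sets and scan orders):

```python
import numpy as np

# Classic Floyd-Steinberg weights (one of many possible weight sets):
# the error is spread to the right, lower-left, lower, and lower-right
# neighbours, and the weights sum to 1.
FS_WEIGHTS = ((0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16))

def error_diffuse(I, threshold=0.5):
    """Raster-order error diffusion of a grayscale image I in [0, 1]."""
    acc = I.astype(float).copy()        # image plus accumulated error
    H = np.zeros_like(acc)
    rows, cols = acc.shape
    for i in range(rows):
        for j in range(cols):
            H[i, j] = 1.0 if acc[i, j] >= threshold else 0.0
            err = acc[i, j] - H[i, j]   # signed error at this pixel
            for di, dj, w in FS_WEIGHTS:
                if 0 <= i + di < rows and 0 <= j + dj < cols:
                    acc[i + di, j + dj] += err * w
            # errors that would fall outside the image are dropped
    return H
```

Because the diffused error is conserved (up to losses at the borders), the density of 1 pixels in the output tracks the mean gray value of the input.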

Error diffusion algorithms like Ulichney's give high-quality halftoned images. But these algorithms have some problems:

- The algorithm is *opaque*. It is not shown how the weights influence the amount of blue noise in the result; it is not clear, for example, how to design a blue noise error diffusion algorithm that would be best for images viewed from a particular distance, even if the Gaussian filters used by the eye at that distance were precisely characterized.

School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213

- The algorithm cannot take into account *special hardware characteristics*. For example, many monitors have the characteristic that the first pixel with the value 1 in a left-to-right run actually displays with a much smaller value than later pixels. Accounting for this and similar problems in the blue noise error diffusion algorithm is difficult, especially when a serpentine scan order is used.

- The frequency of the blue noise is *grayvalue dependent*. In a field consisting entirely of the value 0.99, the error diffusion algorithm will assign this field the halftoned image consisting of the value 1 approximately 99% of the time, and 0 1% of the time. The resulting image has low-frequency components, and is perceived by the eye as mostly 1, and occasionally some value less than 1. The frequency of the blue noise is highest when the underlying field is near 0.5, and becomes arbitrarily low as the gray value approaches 0 or 1.

- The resulting images are not *spatially correct*. Consider a square of value a < 0.5, surrounded by a field of 0. As the top row of the square is scanned, the first pixel encountered will *never* be turned on, because the accumulated value at that pixel will be less than 0.5. Depending on the value of a, the first pixel that is set to 1 in the halftoned image can be arbitrarily far into the square. The result is that the error diffusion algorithm tends to 'round off' the upper corners of bright regions surrounded by dark regions.

- Error diffusion algorithms also exhibit a *global response to a local event*. Consider scanning a field of value 0.5 that has a single pixel of value 1 at its center. A good error diffusion algorithm will produce a consistent checkerboard pattern of 0 and 1 pixels in response to the field. However, after the central pixel is scanned, the error diffusion algorithm will propagate a new error, which disrupts the consistent checkerboard pattern. This disruption is visible from a large distance.

When the halftoning problem is stated as

Find H(i,j) such that G_σ{H(i,j)} = I(i,j),

it clearly falls into the class of problems contemporary image processing research has characterized as *ill-posed* problems (Tikhonov and Goncharsky, 1987). That is, one is given a known function that is applied to unknown data to produce a known result, and one wishes to find the unknown data. The problem is ill-posed (in this case) because there is no unique halftoned image that produces the best possible representation of the original image; many images will do equally well. This kind of problem naturally suggests an optimization procedure: guess a possible answer, compute its transformation under the function, measure the distance from the known data, and then appropriately modify the guess and repeat.

Such an optimization procedure is available and has been applied extensively to image processing problems. It also naturally admits of a parallel implementation. The optimization procedure is known as the *Gibbs sampler* (Geman and Geman, 1984).

The Gibbs sampler has been discussed extensively elsewhere. Only a brief description of its application to this problem is given here. Given a "temperature" T and an approximation to H called h, generate a new approximation by the following procedure:

- At each pixel (i,j) of h:

  - Calculate e = |I - G{h}| centered around pixel (i,j); also calculate e' = |I - G{h}| assuming h(i,j) is replaced by 1 - h(i,j).

  - If e' < e, replace h(i,j) by 1 - h(i,j). Otherwise, make the replacement with probability exp(-e'/T).

The above procedure is iterated for various values of T, starting with large values and gradually decreasing, until a stable image results (as T is decreased, fewer and fewer pixels are replaced).

The sequence of values of T is known as a "cooling schedule." In general, finding a good cooling schedule can be hard, but experience with halftoning shows that a good cooling schedule is easy to find. For example, the sequence of values starting at 0.5 and halving at each step works quite well on a wide range of images.
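Putting the pieces together, the following is a serial sketch of the annealed procedure. It uses the 7x7 filter with weights exp(-(|x|+|y|)) from the implementation notes below and the cooling schedule starting at 0.5 and halving; the random initial guess (rather than ordered dither) and the incremental bookkeeping of the blurred image are illustrative choices, not the Warp implementation:

```python
import numpy as np

def _win(i, j, r, rows, cols):
    """Slices selecting the filter window around (i, j) in the image,
    plus the matching sub-block of the (2r+1)x(2r+1) kernel."""
    li, hi = max(i - r, 0), min(i + r + 1, rows)
    lj, hj = max(j - r, 0), min(j + r + 1, cols)
    return ((slice(li, hi), slice(lj, hj)),
            (slice(li - i + r, hi - i + r), slice(lj - j + r, hj - j + r)))

def gibbs_halftone(I, T0=0.5, n_temps=5, seed=0):
    """Halftone a grayscale image I in [0, 1] by annealed Gibbs sampling."""
    rng = np.random.default_rng(seed)
    rows, cols = I.shape
    r = 3
    ax = np.arange(-r, r + 1)
    # 7x7 filter with weights exp(-(|x|+|y|)), normalized to sum to 1
    k = np.exp(-(np.abs(ax)[:, None] + np.abs(ax)[None, :]))
    k /= k.sum()

    h = (rng.random(I.shape) < I).astype(float)   # random initial guess
    B = np.zeros_like(I, dtype=float)             # B = G{h}, kept up to date
    for i in range(rows):
        for j in range(cols):
            if h[i, j]:
                img, ker = _win(i, j, r, rows, cols)
                B[img] += k[ker]

    T = T0
    for _ in range(n_temps):                      # cooling schedule: halve T
        for i in range(rows):
            for j in range(cols):
                img, ker = _win(i, j, r, rows, cols)
                delta = 1.0 - 2.0 * h[i, j]       # +1 flips 0->1, -1 flips 1->0
                e = np.abs(B[img] - I[img]).sum()
                e2 = np.abs(B[img] + delta * k[ker] - I[img]).sum()
                if e2 < e or rng.random() < np.exp(-e2 / T):
                    h[i, j] += delta              # accept the flip
                    B[img] += delta * k[ker]      # update blurred image
        T /= 2.0
    return h
```

At the final, low temperatures the probabilistic acceptances effectively vanish and only error-reducing flips survive, which is what produces the stable image the text describes.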

Updating only a fraction of the pixels in each pass limits the parallelism, but the number of processors that can be used in parallel is still enormous; for example, 64K processors for a 512×512 image and a 7×7 Gaussian filter. This makes the algorithm suitable for large SIMD processor arrays.

The resulting algorithm, when implemented on the Carnegie Mellon Warp machine for the 7×7 Gaussian filter with weights exp(-(|x|+|y|)), -3 ≤ x, y ≤ 3, normalized to sum to 1, runs in 1.1 s per pass, or 4.4 s for a complete iteration. A good cooling schedule makes roughly three to six passes over the halftoned image (the halftoned image is initialized with the result of a simple 8×8 ordered dither (Jarvis, Judice, and Ninke, 1976)). The result is an algorithm that calculates the halftoned image in roughly 13-27 s.
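The filter itself is a few lines to construct. The sketch below also checks that it is separable---exp(-(|x|+|y|)) = exp(-|x|)·exp(-|y|)---which is the general property that permits applying a 7×7 filter as two cheap 1-D passes:

```python
import numpy as np

ax = np.arange(-3, 4)                   # -3 <= x, y <= 3
g = np.exp(-np.abs(ax))                 # 1-D factor exp(-|x|)

# the 7x7 filter with weights exp(-(|x|+|y|)), normalized to sum to 1
K = np.exp(-(np.abs(ax)[:, None] + np.abs(ax)[None, :]))
K /= K.sum()

# separability: the same kernel as an outer product of 1-D factors,
# so the filter can be applied row-wise then column-wise
K_sep = np.outer(g, g) / np.outer(g, g).sum()
```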

Images generated by this algorithm and the Ulichney blue-noise dither (serpentine order, Floyd-Steinberg weights) are shown in Figure 1. The Gibbs sampler algorithm shows certain details better than the Ulichney algorithm. As a further demonstration, the upper left corner of the Pentagon image is shown in Figure 2; this shows that the Gibbs algorithm captures even more of the image detail, especially in the freeway at the upper left, than the Ulichney blue-noise error diffusion algorithm, which is already very good.

Previous algorithms for halftoning have not explicitly treated this as an inverse problem. However, considered from this point of view, these algorithms do approximate the solution of this ill-posed problem. In particular, Ulichney's blue noise dithering algorithm with error diffusion performs very well.

Figure 1. Results of Gibbs sampler and Ulichney halftoning.

The Gibbs sampler is a known method for solving inverse problems. I have applied it to the halftoning problem, resulting in a good parallel algorithm for halftoning. The resulting images are of high quality, and compare favorably with the images produced by blue noise halftoning. The Gibbs algorithm does not suffer from many of the problems of the error diffusion algorithms. It is easy to modify to take into account the special characteristics of display devices, for example the slow pickup of some monitors in left-to-right scans; all that is necessary is to modify the calculation of the Gaussian filter appropriately. The frequency of the noise is independent of the underlying grayvalue. The algorithm shows no spatial aberration. It is computed locally, so there is no unusual global response to a local event. And the algorithm is highly parallel.

- Geman, S. and Geman, D. Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. *IEEE Transactions on Pattern Analysis and Machine Intelligence* 6(6):721-741, November 1984.

- Jarvis, J. F., Judice, C. N., and Ninke, W. H. A Survey of Techniques for the Display of Continuous Tone Pictures on Bilevel Displays. *Computer Graphics and Image Processing* 5:13-40, 1976.

- Tikhonov, A. N. and Goncharsky, A. V. *Ill-Posed Problems in the Natural Sciences.* MIR Publishers, Moscow, USSR, 1987.

- Ulichney, R. *Digital Halftoning.* MIT Press, Cambridge, Massachusetts, 1987.

Figure 2. Upper-left corner of Figure 1.