Automatic Image Mosaicing

Xuewen Chen

Sept. 19, 1999

In this assignment, I took two pictures and then created an image mosaic by automatically registering, projectively warping, resampling, and compositing them.

1. Shoot and Digitize Pictures

I used a digital camera to shoot two photographs of the front door of the ECE building at CMU. I shot the two pictures from the same point of view but with different viewing directions. I did not use a tripod to fix the camera; instead, I rotated the camera by hand as slowly as possible, hoping that the only change would be the viewing direction. The camera center therefore may have shifted slightly, and since the scene is non-planar, the transformation between the two pictures may not be perfectly projective. The camera can be connected directly to a computer to output color images of size 864 x 1152 x 3.

Here are the two images.

 

                       

image 1        image 2

2. Register Images

The image registration we implemented is fully automatic. First, we used the phase correlation method to compute a rough initial displacement, which serves as the initial condition. We then used the Levenberg-Marquardt iterative non-linear minimization algorithm (L-M algorithm) to find all of the parameters needed. Details can be found in the paper Image Mosaicing for Tele-Reality by Szeliski. The total running time of this stage is about 3 minutes.

2.1 Phase correlation

The images were first converted to grayscale and then downsampled by a factor of 2 to reduce the running time. We then extracted part of the first image as our filter, performed phase correlation with the second image, and searched for the peak in the correlation magnitude image. The following plot shows the peak magnitude versus position in the horizontal (y) direction.
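
As a concrete illustration, here is a minimal MATLAB sketch of this phase correlation step. The file names, patch size, and patch location are assumptions made for the sketch; the actual patch was taken from image 1 as described above.

    % Minimal phase correlation sketch (file names, patch size, and patch
    % location are assumed for illustration).
    im1 = rgb2gray(im2double(imread('image1.jpg')));
    im2 = rgb2gray(im2double(imread('image2.jpg')));

    % Downsample by a factor of 2 to reduce running time.
    im1 = im1(1:2:end, 1:2:end);
    im2 = im2(1:2:end, 1:2:end);

    % Zero-pad a patch of image 1 to the size of image 2 (assumed location).
    patch = zeros(size(im2));
    patch(101:300, 101:300) = im1(101:300, 101:300);

    % Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    F1 = fft2(patch);
    F2 = fft2(im2);
    R  = F1 .* conj(F2);
    c  = real(ifft2(R ./ (abs(R) + eps)));

    % The peak location gives the integer displacement between the patch and
    % image 2 (up to a sign and wrap-around ambiguity).
    [~, idx] = max(c(:));
    [py, px] = ind2sub(size(c), idx);
    dy = py - 1;  dx = px - 1;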

We chose the initial parameters (m0 to m7) by assuming that the only difference between the two images is a displacement, and we used the estimated displacements as the initial values of m2 and m5 (see the mapping written out below). We can then map pixels from one image to the other. From the following images, we can see that this technique gives a good estimate of the displacement (the bright red spots are the corresponding points).
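
For reference, m0 to m7 are the parameters of the eight-parameter projective mapping used throughout (following the notation in Szeliski's paper, with the ninth parameter fixed to 1):

    x' = (m0*x + m1*y + m2) / (m6*x + m7*y + 1)
    y' = (m3*x + m4*y + m5) / (m6*x + m7*y + 1)

so m2 and m5 are the pure translation terms that phase correlation estimates.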

image 1

image 2

2.2 Refine the parameters

After choosing the initial parameters in stage 1, we use the L-M algorithm to refine them by minimizing the sum of squared intensity errors (reported as an MSE). A region of 201 x 301 pixels around the red spot in image 2 is used to compute this error. After ten iterations we reach the minimum MSE, and the corresponding refined parameters (m0 to m7) are used in the warping and mosaicing stage. The following image shows the red spot in image 1 mapped from image 2 (the red spot shown in the image above) using the refined parameters. As we can see, the result is quite good.

image 1
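
For concreteness, here is a simplified MATLAB sketch of one such L-M refinement loop for the eight-parameter warp. The function name, argument conventions, and the fixed damping factor lambda are assumptions for illustration; the damping is kept constant rather than adapted, so this is a sketch of the idea rather than the exact implementation.

    function m = lm_refine(I0, I1, m, region, niter)
    % I0, I1 : grayscale double images; I1 is resampled toward I0's frame
    % m      : 8x1 vector [m0..m7], initialized from phase correlation
    % region : [ymin ymax xmin xmax] window (in I0) used to compute the error
    % niter  : number of iterations (ten were used in this project)
    [gx, gy] = gradient(I1);                       % image gradients of I1
    [X, Y] = meshgrid(region(3):region(4), region(1):region(2));
    x = X(:);  y = Y(:);
    lambda = 1e-3;                                 % assumed, fixed damping
    for it = 1:niter
        D  = m(7)*x + m(8)*y + 1;                  % note: m(1) holds m0, ..., m(8) holds m7
        xp = (m(1)*x + m(2)*y + m(3)) ./ D;        % warped coordinates (x', y')
        yp = (m(4)*x + m(5)*y + m(6)) ./ D;
        Iw = interp2(I1, xp, yp, 'linear', NaN);   % bilinear resampling of I1
        Ix = interp2(gx, xp, yp, 'linear', 0);
        Iy = interp2(gy, xp, yp, 'linear', 0);
        v  = ~isnan(Iw);                           % keep pixels that land inside I1
        e  = Iw(v) - I0(sub2ind(size(I0), y(v), x(v)));
        % Jacobian of the residual with respect to [m0..m7]
        J = [Ix(v).*x(v)./D(v), Ix(v).*y(v)./D(v), Ix(v)./D(v), ...
             Iy(v).*x(v)./D(v), Iy(v).*y(v)./D(v), Iy(v)./D(v), ...
             -(Ix(v).*xp(v) + Iy(v).*yp(v)).*x(v)./D(v), ...
             -(Ix(v).*xp(v) + Iy(v).*yp(v)).*y(v)./D(v)];
        A  = J.'*J;   b = J.'*e;
        dm = -((A + lambda*diag(diag(A))) \ b);    % damped Gauss-Newton step
        m  = m + dm;
    end
    end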

3. Warp and Mosaic the Pictures

First, we transformed the corners of image 2 to determine the size of the output image. Image 1 was left unwarped (it defines the reference frame), and image 2 was warped into its projection. Bilinear interpolation was used to resample the R, G, and B colors. Finally, we computed a weighted average at each output pixel for the R, G, and B channels separately and composited the two images into a mosaic. The total running time of this stage is about 18 minutes.
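
The MATLAB sketch below shows the bookkeeping for this stage under simple assumptions: the refined parameters map image-1 coordinates to image-2 coordinates, image 2 is inverse-warped onto an enlarged canvas with bilinear resampling (interp2), and the overlap is blended with a per-pixel weighted average using equal weights. The function name and the equal-weight blend are assumptions, not the exact program.

    function mosaic = warp_and_blend(I1, I2, m)
    % I1, I2 : color images (double, values in [0,1]); I1 is the reference frame
    % m      : refined parameters [m0..m7] mapping image-1 (x,y) to image-2 (x',y')
    H = [m(1) m(2) m(3); m(4) m(5) m(6); m(7) m(8) 1];   % homography, image 1 -> image 2
    [h1, w1, ~] = size(I1);   [h2, w2, ~] = size(I2);

    % Map the corners of image 2 back into image-1 coordinates (inverse mapping)
    % to find the bounding box of the output canvas.
    c2 = [1 w2 w2 1; 1 1 h2 h2; 1 1 1 1];
    c1 = H \ c2;   c1 = c1(1:2,:) ./ c1(3,:);
    xmin = floor(min([1, c1(1,:)]));   xmax = ceil(max([w1, c1(1,:)]));
    ymin = floor(min([1, c1(2,:)]));   ymax = ceil(max([h1, c1(2,:)]));
    [X, Y] = meshgrid(xmin:xmax, ymin:ymax);

    % Inverse-warp: for every canvas pixel, find where it lands in image 2 and
    % bilinearly resample each color channel there.
    D  = m(7)*X + m(8)*Y + 1;
    Xp = (m(1)*X + m(2)*Y + m(3)) ./ D;
    Yp = (m(4)*X + m(5)*Y + m(6)) ./ D;
    warp2 = zeros([size(X), 3]);
    for c = 1:3
        warp2(:,:,c) = interp2(I2(:,:,c), Xp, Yp, 'linear', NaN);
    end
    mask2 = ~isnan(warp2(:,:,1));
    warp2(isnan(warp2)) = 0;

    % Place image 1 on the same canvas (unwarped reference).
    place1 = zeros([size(X), 3]);
    place1(2-ymin : h1-ymin+1, 2-xmin : w1-xmin+1, :) = I1;
    mask1 = false(size(X));
    mask1(2-ymin : h1-ymin+1, 2-xmin : w1-xmin+1) = true;

    % Blend the overlap with a per-pixel weighted average (equal weights here).
    wsum = double(mask1) + double(mask2);
    wsum(wsum == 0) = 1;                     % avoid division by zero outside both images
    mosaic = (place1 .* mask1 + warp2 .* mask2) ./ wsum;
    end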

Here is the mosaic image, with size 869 x 1582 x 3.

 

mosaic

As we can see, the result is good, but some artifacts are still visible; for example, the bottom stair is curved.

NOTE:

1. The total running time of the program is about 21 minutes (on a 350 MHz Pentium II with 96 MB of RAM).

2. I tried to use the mosaic.cxx code on andrew.cmu.edu (the Wean cluster). It compiled successfully, but the program did not work properly: it displayed three grayscale images, each a shrunken version of the image it was supposed to display. This may be a problem with the OpenGL or FLTK library. Since time was limited, I switched to MATLAB, so all of the code I wrote is in MATLAB.