Computational Photography Assignment 4

Alyssa Reuter - Spring 2010

Part I

Description

To implement my image stitcher, I first let the user select an equal number of corresponding points in each picture (any number of points can be used, but more is better). From these points I calculate a homography matrix that maps pixels in the final image space (the same as image one) to their corresponding locations in the second image. I then multiply the inverse of this matrix by the corners of the second image and compare the results with the corners of the first to determine the dimensions needed for the final image. This information is passed to an image warping function, which uses the homography matrix to warp the second image into the projective space of the first. Finally, I determine which pixels within the final image are covered by the first image, set those pixels in the warped second image to zero, and add the two images together to produce the final stitched image.
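The homography-fitting step can be sketched roughly as below (a NumPy sketch via the direct linear transform; the function names are illustrative, not my actual code):

```python
import numpy as np

def fit_homography(pts1, pts2):
    """Estimate the homography H mapping pts1 -> pts2 (direct linear transform).

    pts1, pts2: (N, 2) arrays of corresponding (x, y) points, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(pts1, pts2):
        # Each correspondence contributes two linear constraints on H.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(A)
    # The least-squares solution is the right singular vector with the
    # smallest singular value (the last row of Vt).
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1

def apply_homography(H, pts):
    """Map (N, 2) points through H, dividing out the projective coordinate."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

Mapping the corners of the second image through the inverse of H (as described above) is then just `apply_homography(np.linalg.inv(H), corners)`.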

Results

I'm satisfied with the results of my panorama maker; aside from the extra bells and whistles, there are two main issues that may need to be resolved. The first is that it is tricky to pick out the exact pixels where you want to place the points of correspondence, so the image warp isn't always exactly on the mark. This is especially true when working quickly, but in most cases, if you take care when placing the points, the warp turns out fine.

The other remaining issue is image blending. In the interest of time, I ended up simply placing the first (unwarped) image on top of the second (warped) image. The edges are fairly undetectable if the photos are taken in manual mode and the points of correspondence are carefully selected; otherwise, the seams can be seen. For the second part of the assignment I'll try to implement a Laplacian pyramid or some other blending technique to improve this.
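One possible version of that blending step, sketched with NumPy and SciPy (this uses full-resolution Laplacian stacks rather than a true downsampled pyramid, which keeps the bookkeeping simple; it is not what my stitcher currently does):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_stack_blend(im1, im2, mask, levels=4, sigma=2.0):
    """Blend two grayscale float images with a soft mask using Laplacian
    stacks (band-pass decompositions kept at full resolution).

    im1, im2: float arrays of the same shape.
    mask: float array in [0, 1]; 1 where im1 should dominate.
    """
    out = np.zeros_like(im1, dtype=float)
    g1, g2, gm = im1.astype(float), im2.astype(float), mask.astype(float)
    for _ in range(levels):
        # Next (blurrier) level of each Gaussian stack.
        n1 = gaussian_filter(g1, sigma)
        n2 = gaussian_filter(g2, sigma)
        gm = gaussian_filter(gm, sigma)
        # Laplacian band = difference of adjacent Gaussian levels,
        # combined with the mask blurred to a matching scale.
        out += gm * (g1 - n1) + (1 - gm) * (g2 - n2)
        g1, g2 = n1, n2
        sigma *= 2
    # Add back the residual low-pass band.
    out += gm * g1 + (1 - gm) * g2
    return out
```

Blurring the mask more at coarser levels is what hides the seam: low frequencies transition over a wide region while fine detail switches over sharply.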


images below from: http://hugin.sourceforge.net/tutorials/two-photos/en.shtml

Part II

Description

In part two, I have created an automatic image stitcher. My algorithm uses single-scale Harris to detect an even distribution of corners in both images, then creates a feature descriptor for each corner by blurring the image, sampling values over a 40 x 40 window, and bias/gain normalizing the results. Next I iterate through the feature descriptors of image 1, compare each against those from image 2, and use the ratio between the errors of the two closest matches to decide whether there is a correspondence. Finally, I run RANSAC to find the homography matrix that most closely maps the majority of corners in one image to their matches in the other.

Results

I think the second part of this assignment turned out very well. In the above image I ran my stitcher on one pair of images (the center and right ones), and then ran it again with the result of the previous run and a new third photo to produce a three-image mosaic. There is a small bug where the right side of the third image gets cut off where it meets the previous mosaic; this happens because the black (not alpha) background of the mosaic overlaps the third image. The automatic stitcher was usually able to pick better corresponding points than I could, and thus created better-looking mosaics. Once in a while, however, it does include an outlier and the results are slightly skewed. This can easily be fixed by increasing the number of RANSAC iterations, but my patience can only handle waiting for 20. Otherwise, the only other problem I encountered was the image below, which wasn't a very good panorama shot to begin with: I must have moved between shots, making it virtually impossible to line up the hill in the middle ground, manually or otherwise.

image below created using images from:
http://hugin.sourceforge.net/tutorials/two-photos/en.shtml