Feature Matching for Auto-Stitching Photo Mosaics

For Computational Photography, 15-463 (Project 4)

Carnegie Mellon University                                  jmm59@pitt.edu

 

Image Rectification

The first step in this project was to compute homographies between pairs of images.  A homography is a 3x3 projective transformation matrix that maps points in one image onto the corresponding points in another, and it is estimated from correspondence points found in the region where the images overlap.  To test the homographies, I used them to ‘rectify’ images: in my examples, I took pictures containing square objects at an angle to the camera, then transformed them so that the squares were once again ‘square.’
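As a rough sketch of the math involved: given four or more correspondence points, the homography can be estimated with the direct linear transform (DLT), solving a homogeneous least-squares system via SVD.  This is not the code I actually used; the function below and the example corner coordinates are purely illustrative, written in Python/NumPy.

    import numpy as np

    def compute_homography(src_pts, dst_pts):
        """Estimate the 3x3 homography H with dst ~ H * src (DLT sketch).

        src_pts, dst_pts: (n, 2) arrays of corresponding (x, y) points, n >= 4.
        """
        A = []
        for (x, y), (u, v) in zip(src_pts, dst_pts):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        # The homography is the null-space direction of A: the right singular
        # vector with the smallest singular value.
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]

    # Rectification example (hypothetical corner clicks): map a tilted square's
    # four image corners onto a true 100x100 square.
    src = np.array([[318, 256], [410, 253], [421, 352], [314, 348]])
    dst = np.array([[0, 0], [100, 0], [100, 100], [0, 100]])
    H = compute_homography(src, dst)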

 

For this project, I automatically calculate the alignment transformation between two images.  I will demonstrate each step with sample images. 

 

Given two images with a portion of overlap:

 

 

I first detect corners using the Harris Interest Point Detector (provided by our teacher).  This results in a very large number of points:
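The Harris detector itself was provided code, but for completeness, a minimal sketch of the Harris corner response (my own variable names and smoothing choices, using NumPy/SciPy) looks roughly like this:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def harris_response(gray, sigma=1.5, k=0.05):
        """Harris corner strength map for a grayscale float image (sketch only)."""
        Iy, Ix = np.gradient(gray)                 # image gradients (rows = y, cols = x)
        Sxx = gaussian_filter(Ix * Ix, sigma)      # smoothed second-moment matrix entries
        Syy = gaussian_filter(Iy * Iy, sigma)
        Sxy = gaussian_filter(Ix * Iy, sigma)
        det = Sxx * Syy - Sxy ** 2
        trace = Sxx + Syy
        return det - k * trace ** 2                # large response indicates a corner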

Next, Adaptive Non-Maximal Suppression (ANMS) is applied, which selects 500 strong corners that are evenly distributed across the entire image.
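A straightforward O(n^2) sketch of ANMS: each corner gets a suppression radius (the distance to the nearest significantly stronger corner), and the 500 corners with the largest radii are kept.  The robustness constant and function names below are my own choices, following the usual MOPS-style formulation.

    import numpy as np

    def anms(coords, strengths, num_keep=500, c_robust=0.9):
        """Adaptive Non-Maximal Suppression (simple O(n^2) sketch).

        coords: (n, 2) array of corner (x, y) positions.
        strengths: (n,) array of Harris responses.
        Returns the num_keep corners with the largest suppression radii,
        i.e. a strong but spatially even subset.
        """
        n = len(coords)
        radii = np.full(n, np.inf)
        for i in range(n):
            # Neighbors that are significantly stronger than corner i
            stronger = strengths > strengths[i] / c_robust
            if np.any(stronger):
                d2 = np.sum((coords[stronger] - coords[i]) ** 2, axis=1)
                radii[i] = np.sqrt(d2.min())
        keep = np.argsort(-radii)[:num_keep]       # largest radii first
        return coords[keep]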

 

I then extract a feature descriptor at each of the 500 points.  Each descriptor is an 8x8 block of values sampled from a blurred version of the 40x40 patch of pixels surrounding the point.  The descriptors are normalized for bias and gain to account for lighting inconsistencies between the two images.
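A sketch of that descriptor extraction (the blur sigma and sampling stride below are my own illustrative choices; in practice the image would be blurred once rather than per keypoint):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def extract_descriptor(gray, x, y, sigma=2.0):
        """8x8 descriptor sampled from a blurred 40x40 window centered at integer (x, y)."""
        blurred = gaussian_filter(gray, sigma)
        patch = blurred[y - 20:y + 20, x - 20:x + 20]   # 40x40 window (assumes it fits in-bounds)
        desc = patch[::5, ::5].astype(float).ravel()    # every 5th pixel -> 8x8 = 64 values
        # Bias/gain normalization: zero mean, unit standard deviation
        return (desc - desc.mean()) / (desc.std() + 1e-8)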

Here is the collection of all 500 feature descriptors for the second image:

Next, each feature descriptor is compared against all descriptors from the other image.  The ratio of the best match’s distance to the second-best match’s distance is used to decide whether a match is reliable; if the ratio is below a threshold (here 0.6), the match is retained.
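A sketch of this ratio test on Euclidean descriptor distances (the brute-force loop and names are illustrative):

    import numpy as np

    def match_descriptors(desc1, desc2, ratio_thresh=0.6):
        """Return (i, j) index pairs passing the best/second-best distance ratio test.

        desc1: (n1, 64) descriptors from image 1; desc2: (n2, 64) from image 2.
        """
        matches = []
        for i, d in enumerate(desc1):
            dists = np.linalg.norm(desc2 - d, axis=1)   # distances to every descriptor in image 2
            j, j2 = np.argsort(dists)[:2]               # nearest and second-nearest neighbors
            if dists[j] / dists[j2] < ratio_thresh:     # keep only clearly unambiguous matches
                matches.append((i, j))
        return matches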

 

Our list of matches now looks very accurate, but even a single mismatched pair of points can completely ruin the homography calculation.  Therefore, RANSAC is used first to find sets of matches whose implied translations are nearly identical.  The largest such set represents the true correspondences, and the homography is calculated from that set.
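A sketch of that RANSAC step as described, where each sampled match proposes a translation and the largest agreeing set wins (the iteration count and pixel tolerance are assumptions of mine); the surviving inliers are then fed to the homography fit sketched earlier.

    import numpy as np

    def ransac_translation_inliers(pts1, pts2, n_iter=1000, tol=3.0, rng=None):
        """Largest set of matches whose point-to-point translations nearly agree.

        pts1, pts2: (n, 2) arrays of matched (x, y) coordinates, row i matched to row i.
        Returns a boolean inlier mask.
        """
        rng = np.random.default_rng() if rng is None else rng
        translations = pts2 - pts1
        best_inliers = np.zeros(len(pts1), dtype=bool)
        for _ in range(n_iter):
            i = rng.integers(len(pts1))                           # one match proposes a translation
            err = np.linalg.norm(translations - translations[i], axis=1)
            inliers = err < tol                                   # matches with near-identical translation
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return best_inliers

    # inliers = ransac_translation_inliers(pts1, pts2)
    # H = compute_homography(pts1[inliers], pts2[inliers])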

 

In the following images, a point of any color is one of the 500 evenly distributed corners.  Blue and yellow points are those selected as matching a point in the other image, and yellow points are those belonging to the set chosen by RANSAC for the final homography computation.

Once the homography has been calculated, the program proceeds in the same way as the last assignment, transforming and stitching the images together to form a panorama.  The difference is that the correspondence points no longer need to be determined by hand!
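For reference, a sketch of the inverse-warping step that places one image into the other’s frame using the homography (output canvas sizing and blending are omitted; names are illustrative):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_image(img, H, out_shape):
        """Inverse-warp a grayscale image into an output canvas of shape (h, w).

        H maps source (x, y) coordinates into the destination frame; we apply its
        inverse at every destination pixel and sample the source image bilinearly.
        """
        h, w = out_shape
        ys, xs = np.mgrid[0:h, 0:w]
        dest = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])   # homogeneous destination coords
        src = np.linalg.inv(H) @ dest
        src = src[:2] / src[2]                                      # back to Cartesian (x, y)
        # map_coordinates samples in (row, col) order, i.e. (y, x)
        warped = map_coordinates(img, [src[1], src[0]], order=1, cval=0.0)
        return warped.reshape(h, w)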

 

 

Completed Autostitched Panoramas:

I noticed that my algorithm had a progressively harder time finding correspondences the further an image is from the middle of the sequence, which is why the living room panorama does not include the far left and right images.  The most likely reason is that I add images one at a time, comparing each new image against the current mosaic; by that point the image it should match has already been warped, and it makes up only a portion of the whole mosaic.  I would probably get better results if I computed homographies between each pair of neighboring images and then warped them all into the mosaic at once, chaining the transformations through the intermediate homographies.
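A sketch of that proposed alternative, composing pairwise homographies toward a chosen reference image before warping (this is the suggested fix, not what the current program does; names are mine):

    import numpy as np

    def chain_homographies(pairwise_H, ref_idx):
        """Compose pairwise homographies into image-to-reference transformations.

        pairwise_H[i] maps image i into image i+1's frame (n-1 matrices for n images).
        Returns H_to_ref, where H_to_ref[i] maps image i into image ref_idx's frame.
        """
        n = len(pairwise_H) + 1
        H_to_ref = [np.eye(3) for _ in range(n)]
        # Images left of the reference: compose forward toward ref_idx
        for i in range(ref_idx - 1, -1, -1):
            H_to_ref[i] = H_to_ref[i + 1] @ pairwise_H[i]
        # Images right of the reference: compose the inverses backward toward ref_idx
        for i in range(ref_idx + 1, n):
            H_to_ref[i] = H_to_ref[i - 1] @ np.linalg.inv(pairwise_H[i - 1])
        return H_to_ref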