Stitching Photo Mosaics

Manual Correspondence Selection

Algorithm

Starting from the center, I add one image at a time to the panorama. This way, I can use correspondences between adjacent untransformed images to compute the transformations necessary to add each image to the panorama. I use an expanded form of the transformation equation Hx = x' along with the mldivide function (MATLAB's backslash operator) to compute a least squares fit for each homography. As each image is added, the combined image is expanded to allow the new image to fit, and the correspondence points for the remaining images are updated to account for the transformation applied to the current image.
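In outline, the expanded linear system for a single homography (with the bottom-right entry fixed to 1) can be built and solved as sketched below; the function and variable names are illustrative, not the exact code used.

    function H = fit_homography(x, xp)
    % Least-squares fit of a homography mapping x to xp (both n-by-2, n >= 4),
    % using the expanded form of H*x = x' with H(3,3) fixed to 1.
    n = size(x, 1);
    A = zeros(2*n, 8);
    b = zeros(2*n, 1);
    for i = 1:n
        u  = x(i,1);   v  = x(i,2);
        up = xp(i,1);  vp = xp(i,2);
        A(2*i-1,:) = [u v 1 0 0 0 -u*up -v*up];
        A(2*i,  :) = [0 0 0 u v 1 -u*vp -v*vp];
        b(2*i-1)   = up;
        b(2*i)     = vp;
    end
    h = A \ b;                 % least-squares solution via mldivide
    H = [h(1) h(2) h(3); h(4) h(5) h(6); h(7) h(8) 1];
    end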

To blend the images, I simply average the values of all overlapping images at each pixel. The only obvious artifacts this blending method produces in my sample panorama come from objects that moved between shots, which is particularly visible in the trees. Aside from that, the images, which were captured without a tripod, blend nearly seamlessly, with no visible borders. I used 24 correspondence points for each image pair in the sample panorama.
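A minimal sketch of this averaging blend, assuming hypothetical cell arrays warped and alpha that hold the warped images (as doubles) and their valid-pixel masks on the full panorama canvas of size h-by-w:

    accum = zeros(h, w, 3);
    count = zeros(h, w);
    for k = 1:numel(warped)
        mask3 = repmat(alpha{k}, [1 1 3]);        % replicate the mask per channel
        accum = accum + warped{k} .* mask3;       % sum contributions of valid pixels
        count = count + alpha{k};                 % how many images cover each pixel
    end
    blended = accum ./ repmat(max(count, 1), [1 1 3]);   % average; avoid divide-by-zero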

Images

Automatic Correspondence Generation

Algorithm

I followed the provided automatic correspondence detection algorithm quite closely. I augmented the limited non-maximal suppression already present in the provided Harris corner detector with a target point count and an expanding suppression radius: the radius is doubled until fewer corners than the target remain, then reverted to the previous radius, and the strongest corners are selected up to the target count.
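A sketch of this expanding-radius suppression, with illustrative names and a simple O(n^2) helper standing in for the provided suppression code:

    function keep = adaptive_suppression(corners, strength, r0, n_target)
    % Double the radius until fewer than n_target corners survive, revert to
    % the previous radius, then keep the strongest corners up to the target.
    % corners is n-by-2, strength is n-by-1, r0 is the initial radius.
    r = r0;
    keep = suppress(corners, strength, r);
    prev = keep;
    while numel(keep) >= n_target
        prev = keep;
        r = 2 * r;
        keep = suppress(corners, strength, r);
    end
    keep = prev;                                     % go back to the previous radius
    [~, order] = sort(strength(keep), 'descend');
    keep = keep(order(1:min(n_target, numel(keep))));
    end

    function keep = suppress(corners, strength, r)
    % Keep indices of corners with no stronger corner within radius r.
    n = size(corners, 1);
    keep = [];
    for i = 1:n
        d = sqrt(sum((corners - repmat(corners(i,:), n, 1)).^2, 2));
        if ~any(d < r & strength > strength(i))
            keep(end+1) = i; %#ok<AGROW>
        end
    end
    end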

In addition, when matching features, before using the RANSAC algorithm, I ensure that matches are mutual. That is, if patch A in image 1 best matches patch B in image 2, I ensure that patch B best matches patch A. In practice, I found this eliminated few points, since even incorrect close matches are still close matches.
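A minimal sketch of this mutual-match check, assuming a hypothetical dissimilarity matrix D in which D(i, j) scores patch i of image 1 against patch j of image 2 (e.g. their SSD):

    [~, best12] = min(D, [], 2);       % best match in image 2 for each patch in image 1
    [~, best21] = min(D, [], 1);       % best match in image 1 for each patch in image 2
    matches = [];
    for i = 1:size(D, 1)
        j = best12(i);
        if best21(j) == i              % keep the pair only if the match is mutual
            matches(end+1, :) = [i j]; %#ok<AGROW>
        end
    end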

One notable difference between the automatic and manual stitching is that while I did not select control points on the large trees, the automatic algorithm has somewhat optimized the matching of the tree branches above the blue house at the expense of the matching of the lowest portions of the house.

New Additions and Improvements

Image Rectification

Automatic Panorama Recognition

I implemented automatic panorama recognition under the assumptions that the panorama is horizontal and that no provided image closely matches the others without belonging in the panorama. The algorithm works by computing initial correspondences between the image at each end of the panorama and all candidates, then selecting the best match that exceeds a minimum number of correspondences. All new panoramas shown were constructed using my panorama recognition algorithm as a first step.
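In outline, selecting the next image on one side could look like the following, where count_matches is a hypothetical helper returning the number of feature correspondences between two images and the threshold is illustrative:

    min_matches = 20;                        % minimum correspondences to accept a candidate
    best_k = 0;  best_count = 0;
    for k = 1:numel(candidates)
        c = count_matches(edge_image, candidates{k});
        if c > best_count
            best_k = k;  best_count = c;
        end
    end
    if best_count >= min_matches
        next_image = candidates{best_k};     % extend the panorama on this side
    else
        next_image = [];                     % no acceptable match; stop on this side
    end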

Laplacian Blending

Algorithm

I implemented a multiscale Laplacian blending algorithm with an adaptive blend width. It blends each level of the Laplacian pyramid of the two input images over a range of one pixel at that level's resolution, which corresponds to the width of the features appearing in that level of the pyramid, as long as the blend will not extend outside the overlapping area of the images. Where there is not enough space in the overlapping region, the remaining features are blended linearly. This check is performed per pixel in order to create an adaptive blend that uses the maximum possible number of levels of the Laplacian pyramid at each pixel.

The blend region itself is computed from the alpha masks of the images being blended, allowing the algorithm to support any shape of blend region. For each pixel in the blend region, the algorithm computes a blend gradient, consisting of the negative distance to the blend midpoint on the side corresponding to the first image and the positive distance on the other, and a blend length, which is the shortest distance through that pixel between unblended pixels of the two images.
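For reference, a minimal non-adaptive version of this kind of two-image pyramid blend (constant blend width per level, no per-pixel overlap checks) could be sketched as follows; it is not the adaptive algorithm described above, and the names are illustrative:

    function out = blend_laplacian(A, B, mask, levels)
    % Basic two-image Laplacian pyramid blend. A, B: same-size single-channel
    % double images; mask: per-pixel weight of A in [0, 1].
    LA = cell(levels, 1);  LB = cell(levels, 1);  GM = cell(levels, 1);
    for k = 1:levels-1
        GM{k} = mask;
        A2 = imresize(A, 0.5);  B2 = imresize(B, 0.5);
        LA{k} = A - imresize(A2, [size(A,1) size(A,2)]);   % band-pass detail
        LB{k} = B - imresize(B2, [size(B,1) size(B,2)]);
        A = A2;  B = B2;
        mask = max(min(imresize(mask, 0.5), 1), 0);        % mask at the next scale
    end
    LA{levels} = A;  LB{levels} = B;  GM{levels} = mask;   % low-pass residual
    out = GM{levels} .* LA{levels} + (1 - GM{levels}) .* LB{levels};
    for k = levels-1:-1:1                                  % collapse the pyramid
        out = imresize(out, [size(LA{k},1) size(LA{k},2)]) ...
            + GM{k} .* LA{k} + (1 - GM{k}) .* LB{k};
    end
    end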

I have included several images demonstrating this blend below. On the top right, I excluded contributions from the left-hand image, and on the top left, I replaced the left-hand image with a duplicate of the right-hand image. Below those are the masks defining the region to be blended at each level of the Laplacian pyramid. At small scales, holes appear around the top and bottom of the blend, since attempting to blend these regions at coarser scales would include contributions from outside the bounds of the blend region.

Images

RANSAC

Algorithm

To better support images with varying proportions of correctly matched points, I now have the RANSAC loop decrease the required proportion of matched points as the search progresses, while weighting matches supported by more points more highly. This allows the algorithm both to find reasonable correspondences in images with large amounts of repetition (and thus many false matches) and to avoid settling for correspondences with unnecessarily few points on images with a good initial feature matching.
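One possible reading of this scheme, sketched with illustrative parameters (tol, n_iters, decay, min_required), candidate matches x and xp (n-by-2), and the homography fit sketched earlier:

    n = size(x, 1);
    best_inliers = [];
    required = 0.9;                               % initial required inlier fraction
    for iter = 1:n_iters
        s = randperm(n, 4);                       % minimal sample for a homography
        H = fit_homography(x(s,:), xp(s,:));
        p = [x ones(n,1)] * H';                   % project all candidate points
        p = p(:,1:2) ./ [p(:,3) p(:,3)];
        err = sqrt(sum((p - xp).^2, 2));
        inliers = find(err < tol);
        if numel(inliers) >= required * n && numel(inliers) > numel(best_inliers)
            best_inliers = inliers;               % prefer larger consensus sets
        end
        required = max(required * decay, min_required);   % relax as the search runs
    end
    H = fit_homography(x(best_inliers,:), xp(best_inliers,:));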

Cylindrical Panoramas

Algorithm

I implemented cylindrical panorama generation (including full 360-degree panoramas) with a global least-squares optimization of all parameters, including the focal length. I assume that the camera is mounted at a fixed point but can be rotated in any direction. This allows for correction of images where the horizon isn't exactly straight and images where the horizon is not located exactly at the vertical midpoint. The optimization is performed using MATLAB's lsqcurvefit function.
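The underlying cylindrical projection (before any rotation correction or global optimization) can be sketched roughly as follows; the function name is illustrative:

    function warped = warp_to_cylinder(im, f)
    % Basic inverse-mapping cylindrical warp; im is a single-channel double
    % image and f the focal length in pixels.
    [h, w] = size(im);
    xc = (w + 1) / 2;  yc = (h + 1) / 2;
    [uu, vv] = meshgrid((1:w) - xc, (1:h) - yc);   % cylinder coordinates (pixels)
    theta  = uu / f;                               % angle around the cylinder
    height = vv / f;                               % height on the unit cylinder
    xs = f * tan(theta) + xc;                      % back-project to the image plane
    ys = height .* sqrt((xs - xc).^2 + f^2) + yc;
    warped = interp2(im, xs, ys, 'linear', 0);     % sample the source image
    end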

I precompute an approximate focal length and image offsets, under the assumption that the images are perfectly level, which I use as starting points for the optimization. This allows the optimization to converge significantly faster than it would when starting at an initial guess of 0.

One difficulty I had with the global optimization is that the algorithm was not always able to fit full 360-degree panoramas within 360 degrees. My initial construction of the optimization problem attempted to force that, but either due to difficulties in optimizing the focal length or the vertical horizon angle (which I correctly implemented as a rotation of the image plane about the origin, and not just a translation in cylindrical coordinates), the resulting panorama would often include images that overlapped slightly too far.

To make the optimization easier, I relaxed the requirement that the panorama fit in exactly 360 degrees. I then constructed the panorama in an image representing a similarly larger range of motion. While this modification did allow for the construction of near-seamless panoramas, it slightly reduces the correctness of the resulting mapping.

However, the flexibility did present one additional problem. When I tried stitching a 360 degree panorama consisting of vertical photographs, the optimal configuration was constructed as if the rotational axis of the camera was not aligned with the central axis of the cylindrical space. It seems that the difficulty, in general, with using a global optimization on a panorama is that the optimization problem must be constructed extremely carefully to allow just the right amount of freedom where it's needed, and nothing more.

With this relaxation, the cylindrical stitching algorithm did produce panoramas with fewer artifacts than I was able to achieve using translation and rotation within the cylindrical image space alone. With additional constraints and perhaps more careful camera calibration, this algorithm could be quite effective at constructing cylindrical panoramas.

Images

Additional Panoramas