Image Panoramas

Greg Methvin (gmethvin)

This project attempted to create panoramas by transforming one image onto the plane of another and stitching the images together. A variety of techniques were explored to do this.

Process

Panorama Stitching

First, we have to compute a homography between the two images we wish to stitch together. We can do this using the singular value decomposition; the specific method I used is detailed in the code. With this transformation matrix we can compute the expected dimensions of the "morphed" version of the image, allocate a new image of that size, and then fill in the value of each new pixel. If the transformation shifted the resulting image, I removed any horizontal or vertical black borders and returned the amount by which the image was shifted.
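For illustration, the standard direct linear transform (DLT) sets this up as a homogeneous least-squares problem and reads the homography off the SVD. The sketch below (in NumPy) is a generic version of the technique, not necessarily the exact method in my code:

    import numpy as np

    def compute_homography(src, dst):
        # Estimate a 3x3 homography H mapping src -> dst via the direct
        # linear transform. src, dst: (N, 2) point arrays with N >= 4.
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        # H is the right singular vector for the smallest singular value.
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]  # normalize away the arbitrary scale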

At this point we have a transformed version of an image, which is itself useful if we want to view a scene from a different perspective. More importantly, using the stored shift we can now line the images up at the correspondence points we defined. To combine the images, I used a binary mask for each image, warped and shifted in the same way as the image itself. Where the masks overlapped, I weighted each image's contribution by the pixel's distance from that image's border. This helped prevent jagged edges resulting from slight color and lighting differences between the images.
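As a sketch of the distance weighting (using SciPy's Euclidean distance transform; the helper name and the assumption that invalid regions are zeroed are illustrative, not taken from my code):

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def feather_blend(im1, im2, mask1, mask2):
        # im1, im2: float (H, W, 3) images already warped and shifted into
        # the same frame, zero outside their masks; mask1, mask2: boolean.
        d1 = distance_transform_edt(mask1)  # distance to image 1's border
        d2 = distance_transform_edt(mask2)
        total = d1 + d2
        total[total == 0] = 1.0             # avoid 0/0 outside both images
        w1 = (d1 / total)[..., None]        # per-pixel weight for im1
        return w1 * im1 + (1.0 - w1) * im2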

Automatic Correspondence-Point Finding

The second part of the project was to find interest points automatically, so that a homography can be generated without defining the points ourselves. To do this, we start by finding corners with the Harris corner detector. This returns a large number of points, and we generally want the local maxima of this distribution, so as to produce a set of corner points spread evenly throughout the image. For each corner point we therefore find the largest radius within which it is the "corneriest" corner, and keep the points with the largest radii. I chose to keep the top 250, but in particular cases more or fewer may work better.
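This selection procedure is essentially adaptive non-maximal suppression. A brute-force NumPy sketch of the idea (illustrative, not my exact code):

    import numpy as np

    def anms(points, strengths, n_keep=250):
        # points: (N, 2) corner coordinates; strengths: (N,) Harris
        # responses. Each point's suppression radius is the distance to
        # the nearest strictly stronger corner; keep the largest radii.
        n = len(points)
        radii = np.full(n, np.inf)
        for i in range(n):
            stronger = strengths > strengths[i]
            if stronger.any():
                d = np.linalg.norm(points[stronger] - points[i], axis=1)
                radii[i] = d.min()
        keep = np.argsort(radii)[::-1][:n_keep]
        return points[keep]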

The following images show the points found by this first step of the algorithm:

Next, once we have these points, we must match them up in some way. To do this, we generate a feature descriptor for each point that describes the area around it. I chose to take a 30x30 window and resize it to an 8x8 descriptor. We can then take the SSDs between descriptors from the two images to match up the points. To make sure a match is reliable, we only accept matches where the second-best match is significantly worse than the best one. For my algorithm, I found a ratio of 0.5 between the SSD of the nearest neighbor and that of the second-nearest neighbor to be the best compromise.
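A sketch of the descriptor extraction and ratio-test matching (using OpenCV's resize; the bias/gain normalization step is a common refinement and an assumption here, not something described above):

    import numpy as np
    import cv2

    def make_descriptor(gray, x, y, patch=30, out=8):
        # Take a patch x patch window around (x, y) and shrink it to
        # out x out; assumes the window lies fully inside the image.
        half = patch // 2
        window = gray[y - half:y + half, x - half:x + half]
        desc = cv2.resize(window, (out, out)).astype(np.float64)
        # Bias/gain normalization (an assumption, not stated above).
        return ((desc - desc.mean()) / (desc.std() + 1e-9)).ravel()

    def match_descriptors(desc1, desc2, ratio=0.5):
        # desc1, desc2: (N, 64) arrays. Keep a match only when the best
        # SSD is at most `ratio` times the second-best SSD.
        matches = []
        for i, d in enumerate(desc1):
            ssd = ((desc2 - d) ** 2).sum(axis=1)
            best, second = np.argsort(ssd)[:2]
            if ssd[best] < ratio * ssd[second]:
                matches.append((i, best))
        return matches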

Once we have all these matches, we need to find the best points to use to generate a homography between the images. To do this, we used RANSAC: repeatedly select random samples of four point pairs, generate a homography from each, and keep the one with the most inliers. For our purposes, we considered an "inlier" to be a point whose transformed position is less than 2 pixels from its corresponding point in the second image.
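In outline, the RANSAC loop looks something like this (reusing the hypothetical compute_homography sketch from earlier):

    import numpy as np

    def ransac_homography(src, dst, n_iters=1000, thresh=2.0):
        # src, dst: (N, 2) matched points. Fit a homography to 4 random
        # correspondences per iteration; keep the largest consensus set.
        best_inliers = np.zeros(len(src), dtype=bool)
        for _ in range(n_iters):
            idx = np.random.choice(len(src), 4, replace=False)
            H = compute_homography(src[idx], dst[idx])
            pts = np.hstack([src, np.ones((len(src), 1))]) @ H.T
            proj = pts[:, :2] / pts[:, 2:3]   # dehomogenize
            inliers = np.linalg.norm(proj - dst, axis=1) < thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # A final homography is then refit to all of the inliers.
        return best_inliers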

At this point we get a set of matched points like those shown below:

RANSAC will return some subset of the points, which we can then use to generate a final homography and feed into the panorama-stitching algorithm described above to produce an image like the one below:

Bells and Whistles

I implemented the Laplacian blending method described in class. My original, simpler blending method worked reasonably well, but I wanted to see what improvements the Laplacian pyramid could get me. I experimented with the number of pyramid levels and found that 2-3 is generally enough to get rid of whatever edge artifacts there might be. How much this actually improves the image varies with how well aligned the images were to begin with.
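A compact sketch of the pyramid blend (single-channel float images whose dimensions downsample cleanly, using OpenCV's pyrDown/pyrUp; the exact level handling is illustrative):

    import numpy as np
    import cv2

    def laplacian_blend(im1, im2, mask, levels=3):
        # im1, im2: single-channel float images of the same size;
        # mask: float in [0, 1], 1 where im1 should dominate.
        g1, g2, gm = [im1], [im2], [mask]
        for _ in range(levels):
            g1.append(cv2.pyrDown(g1[-1]))
            g2.append(cv2.pyrDown(g2[-1]))
            gm.append(cv2.pyrDown(gm[-1]))
        up = lambda im, ref: cv2.pyrUp(im, dstsize=ref.shape[1::-1])
        # Band-pass (Laplacian) levels plus the low-pass residual.
        l1 = [g1[i] - up(g1[i + 1], g1[i]) for i in range(levels)] + [g1[-1]]
        l2 = [g2[i] - up(g2[i + 1], g2[i]) for i in range(levels)] + [g2[-1]]
        # Blend each band with the blurred mask, then collapse the pyramid.
        bands = [m * a + (1 - m) * b for a, b, m in zip(l1, l2, gm)]
        out = bands[-1]
        for band in bands[-2::-1]:
            out = up(out, band) + band
        return out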

I also implemented multi-scale patches. That is, I varied the window size used to create the descriptors so that matching could be attempted over both large and small areas. This lets me match features which appear large in one image and small in another, or which are only distinctive at a particular patch size.
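For example (reusing the hypothetical make_descriptor helper from the matching sketch), descriptors can simply be extracted at several window sizes, all reduced to the same 8x8 so they stay comparable:

    def multiscale_descriptors(gray, points, patch_sizes=(15, 30, 60)):
        # One descriptor per (point, scale); since every scale shrinks to
        # the same 8x8, a 60-pixel patch in one image can be compared
        # directly against a 15-pixel patch in the other.
        return [[make_descriptor(gray, x, y, patch=s) for s in patch_sizes]
                for (x, y) in points]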

Problems

The main problems I had were finding the right images to try these techniques on and selecting the correspondence points accurately in the first part of the project. The latter was fixed in the second part by the automatic point finding. There is also significant loss of quality when an image is morphed to a size much larger than its original size. This could potentially be mitigated by scaling the images down to a lower resolution first.

I also believe that some of the alignment problems I had arose because I did not take the pictures from exactly the same point. Since some of the pictures were taken at close range and without a tripod, the resulting parallax may have caused seams in some of my panoramas. This happened less often when stitching images taken from farther away.

Another issue was reflections and lighting from windows and other reflective surfaces, which change when pictures are taken from different angles. For the most part this is not a big deal, but you can still tell that the images were stitched if you know what to look for.

Results

I have provided some samples of my results below.

Image Rectification

The following are some examples of transforming images so that a plane in the scene becomes (roughly) fronto-parallel. This demonstrates the first step of the process required to stitch separate images together.

St. Regis Hotel in D.C.
Empire State Building

Panoramas

Because the angles at which I took the pictures were not perfect, and possibly also because I wasn't great at selecting correspondence points by hand, some of these images show a clear seam and did not align completely. Still, setting the visible seam aside, the alignment comes pretty close most of the time.

Bridge from GHC to NSH
Outside at Night at CMU
More NSH Bridge (3 images)
Pausch Bridge

Autostitched Panoramas

The second part of the project was to do the panorama stitching automatically. Here are a few panoramas I generated from that. As you can see, my algorithm is a lot better at finding correspondence points than I am, so I think overall the results are better.

Pausch Bridge
Pausch Bridge at Night
Doherty Hall from Pausch Bridge
Pictures from the Duquesne Incline
Bridge from GHC to NSH
People at the Rally to Restore Sanity (with some blurring from moving people)