by Sang Tian
The purpose of this project is to combine a collection of photos into a larger photo with as much automation as possible. The project works in several stages. First, the images need to be morphed so that they align properly; we describe the alignment process in more detail later. Afterwards, the images need to be blended seamlessly, which we do with a Laplacian pyramid blend.
To morph an image, we use a homography matrix. By marking points on an image and specifying where those points should move, we can form a corresponding 3x3 matrix that transforms the whole image into a different perspective. For example, below are two images. The first is the original, whereas the second has the "floorplan" rectified into a square.
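As an illustration of how such a matrix can be recovered from point correspondences (this is a sketch of the standard direct linear transform, not necessarily the exact code used in the project; the function name is ours), one stacks two linear constraints per point pair and takes the null vector of the resulting system:

```python
import numpy as np

def compute_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    Each correspondence (x, y) -> (u, v) contributes two rows of the
    constraint matrix A, and the homography is the null vector of A."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1
```

With the matrix in hand, rectification amounts to applying it (via inverse warping, in practice) to every pixel of the image.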
Once images are morphed into the right positions, we need to blend them seamlessly to produce a good effect. For this, we used a Laplacian pyramid to blend two images together. Below, we show the results of blending a day and a night photograph of the Eiffel Tower using the Laplacian pyramid blending technique. Note that the two images are not entirely aligned, so the night image was morphed into the day image (note: this, and the Laplacian pyramid, are both for the Bells and Whistles required for this project).
Note: although Laplacian pyramid blending is theoretically much better than a simple gradient ramp since it preserves high-frequency features, none of the image blends used in my project produced noticeable improvements when the pyramid depth was increased or decreased. The reason is most likely that the images being blended were already very similar, so a regular blend would produce the same results as a Laplacian pyramid blend. All images on this website were produced using a 3-level pyramid, except for the manual stitchings of the panoramas below, which did not use a Laplacian pyramid. The results are evidently very similar.
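The blending idea above can be sketched as follows. This is an illustrative, simplified variant that uses a Laplacian stack (Gaussian-blurred levels without downsampling) on single-channel float images; the function names, the SciPy dependency, and the blur width are our assumptions, not the project's actual implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_stack(img, levels, sigma=2.0):
    # Successively blurred copies of the image (no downsampling).
    stack = [img]
    for _ in range(levels - 1):
        stack.append(gaussian_filter(stack[-1], sigma))
    return stack

def laplacian_blend(a, b, mask, levels=3, sigma=2.0):
    """Blend float images a and b (same shape) under a soft mask.

    Each Laplacian band of the two images is mixed using a blurred copy
    of the mask, so low frequencies transition over a wide region while
    high frequencies transition sharply; summing the bands telescopes
    back to a full image."""
    ga = gaussian_stack(a, levels, sigma)
    gb = gaussian_stack(b, levels, sigma)
    gm = gaussian_stack(mask, levels, sigma)
    # Laplacian bands: differences of successive levels, plus the coarsest level.
    la = [ga[i] - ga[i + 1] for i in range(levels - 1)] + [ga[-1]]
    lb = [gb[i] - gb[i + 1] for i in range(levels - 1)] + [gb[-1]]
    return sum(m * x + (1 - m) * y for m, x, y in zip(gm, la, lb))
```

A mask of all ones returns the first image exactly (the bands telescope back to the original), which is a handy sanity check; a half-ones, half-zeros mask gives the seam blend described above.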
In order to automatically morph two images together, we had to automatically compute corresponding control points between the two images. This is a four-step algorithm: Harris corner detection, adaptive non-maximal suppression (ANMS), feature descriptor matching, and RANSAC.
Below is an example of our automatic feature detection at work. The red points are the Harris corners. The yellow points are those that also survived ANMS. The green points are those that were also kept by the feature matching. And the blue points are those that also passed RANSAC (only two green ones did not pass; they are in the sky).
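To make the final filtering step concrete, here is a sketch of how RANSAC selects a consistent set of matches. This is an illustrative implementation under our own assumptions (function names, iteration count, and pixel tolerance are ours, and the 4-point fit is a minimal direct linear transform), not the project's actual code:

```python
import numpy as np

def fit_homography(src, dst):
    # Minimal DLT fit: null vector of the stacked constraint rows.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    H = np.linalg.svd(np.asarray(A, float))[2][-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_inliers(src, dst, n_iters=500, tol=3.0, seed=0):
    """Return a boolean mask of the matches consistent with the best
    homography found by repeatedly fitting random 4-point samples.

    src, dst: (N, 2) arrays of matched feature locations, possibly
    containing outliers from bad descriptor matches."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        # Project all source points and measure reprojection error.
        p = np.hstack([src, np.ones((len(src), 1))]) @ H.T
        proj = p[:, :2] / p[:, 2:]
        err = np.linalg.norm(proj - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

The key design point is that a single bad match cannot drag the model toward itself: a model fit to outliers explains few other matches, so the model with the largest consensus set wins, which is exactly why the stray green points in the sky are rejected.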
Below are three panoramas made by the photo stitcher.
The edge images could not be auto-stitched since they were too distorted for feature matching to work.