Computational Photography - Assignment 1 - Alex Limpaecher

Background

In 1907 Sergei Mikhailovich Prokudin-Gorskii traveled across Russia taking photographs. While the technology to record color in a single exposure did not yet exist, he captured three exposures of each scene through red, green, and blue filters. For this project I have algorithmically taken those three exposures and assembled them into full-color photographs.

Approach

My approach to aligning these images was to align the edges of the three different exposures. This approach was highly successful, aligning almost all of the images essentially perfectly.

Extracting the Exposures

Sergei's exposures are provided as digitized glass-plate images, with the three exposures stacked vertically on each plate.

Glass Plate

To extract the exposures I simply split the plate vertically into thirds, one third per exposure. These three exposures could then be recombined to form a color image.
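The split can be sketched in a few lines. The author worked in Matlab; this is a Python/NumPy sketch, and the blue–green–red top-to-bottom ordering is the convention of the Prokudin-Gorskii scans:

```python
import numpy as np

def split_plate(plate):
    """Split a stacked glass-plate scan into its three exposures.

    The Prokudin-Gorskii plates stack the exposures vertically in
    blue, green, red order from top to bottom.
    """
    h = plate.shape[0] // 3          # height of one exposure
    blue = plate[:h]
    green = plate[h:2 * h]
    red = plate[2 * h:3 * h]         # any leftover rows are dropped
    return red, green, blue

# Tiny synthetic plate: 9 rows tall, so each exposure is 3 rows.
plate = np.arange(9 * 4).reshape(9, 4)
r, g, b = split_plate(plate)
```

Stacking the three returned channels (e.g. with `np.dstack((r, g, b))`) then gives a color image once they are aligned.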

Edge Alignment

To find the best alignment, I originally scored candidate alignments using the Sum of Squared Differences (SSD) and Normalized Cross-Correlation (NCC) of the three different exposures. However, this often did not return the best results, because the three exposures are supposed to have different values. In cases where one channel genuinely differs from the others (like a blue sky or a red carpet), correct alignments would score very poorly.
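For reference, the two scores and an exhaustive shift search might look like this. This is a Python/NumPy sketch of the general technique, not the author's Matlab code; `np.roll` wraps pixels around the image border, which is a simplification:

```python
import numpy as np

def ssd(a, b):
    # Sum of Squared Differences: lower is better.
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def ncc(a, b):
    # Normalized Cross-Correlation: higher is better (1.0 = identical).
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_shift(ref, mov, radius=15):
    # Try every (dy, dx) shift within `radius` and keep the lowest SSD.
    best, best_score = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            score = ssd(ref, shifted)
            if score < best_score:
                best_score, best = score, (dy, dx)
    return best
```

The same `best_shift` search is reused later with edge maps instead of raw intensities.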

However, one feature that all three exposures should share is their edges. I used Matlab to run Canny edge detection on each exposure. To allow for some slack (not knowing how still the camera was held between shots), I blurred the edge maps with a Gaussian filter. I then used the same Sum of Squared Differences to align these edge maps. One example can be seen below:
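The edge-based preprocessing can be sketched as follows. The author used Matlab's Canny detector; this Python/NumPy stand-in uses a simple gradient-magnitude edge map instead, and assumes the blur kernel is shorter than the image:

```python
import numpy as np

def edge_map(img):
    # Gradient-magnitude edges: a crude stand-in for Canny.
    img = img.astype(float)
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    return gx + gy

def gaussian_blur(img, sigma=1.0):
    # Separable Gaussian blur: spreads each edge out to give the
    # alignment some slack around its true position.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out
```

Running the SSD search on `gaussian_blur(edge_map(channel))` rather than on the raw channel is what makes the score insensitive to the channels' different intensities.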

Red Exposure Edge Detection Green Exposure Edge Detection Blue Exposure Edge Detection Edges Aligned Final Image

Comparing Edge Detection vs. Color Exposures

The edge detection approach was considerably better than comparing color exposures directly. Here is an example:

Alignment using color Exposures Alignment Using edge detection

Edge Detection Failures

While the edge detection method worked very well, there was one scenario where it did not.

Red Exposure Edge Detection Green Exposure Edge Detection Blue Exposure Edge Detection Edges Aligned Final Image

If I had to guess why this went wrong, it would be for two reasons. First, the image is so heavily blue, in both the sky and the ocean, that the three exposures differ enough that the same edges don't appear in each. Second, since it is mostly a horizon shot, shifting the photo horizontally doesn't really affect the alignment score.

Image Pyramids

To speed up the alignment, I used image pyramids. I took the original image and recursively resized it by 50%. I then ran the SSD search on the lowest-resolution image to get an offset, scaled that offset up, and used it as the starting point for a refined search at the next larger resolution. This sped up the alignment considerably.
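The coarse-to-fine recursion can be sketched like this (a Python/NumPy sketch, not the author's Matlab; the 50% resize is approximated by taking every other pixel, and `np.roll`'s wraparound is a simplification):

```python
import numpy as np

def exhaustive_shift(ref, mov, radius):
    # Brute-force SSD search over all shifts within `radius`.
    best, best_score = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            score = np.sum((ref.astype(float) - shifted) ** 2)
            if score < best_score:
                best_score, best = score, (dy, dx)
    return best

def align_pyramid(ref, mov, radius=5):
    """Coarse-to-fine alignment: recursively halve the images,
    brute-force align the coarsest pair, then double the offset and
    refine within +/-1 pixel at each finer level."""
    if min(ref.shape) <= 2 * radius:            # small enough: search directly
        return exhaustive_shift(ref, mov, radius)
    cy, cx = align_pyramid(ref[::2, ::2], mov[::2, ::2], radius)
    dy, dx = 2 * cy, 2 * cx                     # offset doubles with resolution
    best, best_score = (dy, dx), np.inf
    for ddy in (-1, 0, 1):
        for ddx in (-1, 0, 1):
            shifted = np.roll(np.roll(mov, dy + ddy, axis=0), dx + ddx, axis=1)
            score = np.sum((ref.astype(float) - shifted) ** 2)
            if score < best_score:
                best_score, best = score, (dy + ddy, dx + ddx)
    return best
```

Because each level only examines a 3x3 neighborhood around the doubled coarse offset, the total work is dominated by the small search at the coarsest level.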

Results

Image Green Alignment Blue Alignment
(2,-5) (1,-9)
(-1,-3) (-5,-5)
(0,-6) (0,-12)
(-1,-3) (-3,-4)
(1,338) (1,336)
(1,-12) (2,-13)
(0,338) (-1,-4)
(0,-6) (0,-12)
(-2,-8) (-4,-14)
(-19,-53) (-39,-95) tiff
(-14,3148) (-55,3094) tiff
(-13,-17) (-37,-51) tiff
(-42,-27) (-30,-85) tiff
(-8,-34) (-14,-49) tiff
(-8,-67) (-124,-32) tiff
(-1,-27) (476,-16) tiff
(-9,-25) (-14,-11) tiff
(-14,-47) (-33,-72) tiff

Bells and Whistles

In addition to using edge detection, I also wrote a crop algorithm to clean up the borders. It looks at every row and column, and if the mean SSD between the different color channels is high enough, it removes that row or column. Here is an example:
Before After
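A sketch of such a crop, assuming the rows and columns are peeled from the borders inward and that the threshold value is an arbitrary placeholder (the author's actual criterion may differ):

```python
import numpy as np

def autocrop(rgb, thresh=0.1):
    """Trim border rows/columns whose color channels disagree strongly.

    For each row (or column) we take the mean squared difference
    between the three channels; misaligned borders score high.
    Rows/columns are peeled from the outside in while they exceed
    `thresh`.
    """
    img = rgb.astype(float)

    def row_score(i):
        r, g, b = img[i, :, 0], img[i, :, 1], img[i, :, 2]
        return np.mean((r - g) ** 2 + (g - b) ** 2 + (r - b) ** 2)

    def col_score(j):
        r, g, b = img[:, j, 0], img[:, j, 1], img[:, j, 2]
        return np.mean((r - g) ** 2 + (g - b) ** 2 + (r - b) ** 2)

    top, bottom = 0, img.shape[0]
    while top < bottom - 1 and row_score(top) > thresh:
        top += 1
    while bottom - 1 > top and row_score(bottom - 1) > thresh:
        bottom -= 1
    left, right = 0, img.shape[1]
    while left < right - 1 and col_score(left) > thresh:
        left += 1
    while right - 1 > left and col_score(right - 1) > thresh:
        right -= 1
    return rgb[top:bottom, left:right]
```

The idea is that in a well-aligned interior the three channels roughly agree, while the border fringes left by shifting the channels disagree wildly, so a per-row/per-column channel-disagreement score separates the two.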

Other results