Here are three mosaics I created, all from pictures I took on my iPhone 4. I used a simple two-band blending method: I split the images into a high-frequency part and a low-frequency part, then blended the low-frequency part with a wider dissolve and the high-frequency part with a sharp transition. I actually use a mask (a 4th channel), but all I did with it was make a vertical transition, so if I made nicer masks the edges would be even more seamless.
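A minimal NumPy sketch of this two-band idea (not my actual code, and assuming grayscale inputs, a vertical seam, and a crude box blur as the low-pass filter):

```python
import numpy as np

def box_blur(img, radius):
    """Crude separable box blur, standing in for a proper low-pass filter."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

def two_band_blend(a, b, ramp_width):
    """Blend two aligned grayscale images along a vertical seam.

    Low frequencies get a wide linear dissolve; high frequencies
    switch over with a hard step at the seam, which avoids ghosting.
    """
    h, w = a.shape
    low_a, low_b = box_blur(a, 4), box_blur(b, 4)
    high_a, high_b = a - low_a, b - low_b

    x = np.arange(w)
    seam = w // 2
    # Wide ramp for the low band: goes 0 -> 1 over ramp_width pixels.
    alpha_low = np.clip((x - seam + ramp_width / 2) / ramp_width, 0, 1)
    # Hard step for the high band.
    alpha_high = (x >= seam).astype(float)

    low = low_a * (1 - alpha_low) + low_b * alpha_low
    high = high_a * (1 - alpha_high) + high_b * alpha_high
    return low + high
```

Feathering a real mosaic would use the mask channel to shape `alpha_low` instead of a fixed vertical seam.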
Here are two rectifications I did. The second picture I got from Flickr. In each pair, the first image is the original and the distorted-looking one after it is the rectified image. For the cereal box I tried to square out the "Heart Healthy" panel on the side of the box, which worked pretty well. For the second picture I tried to rectify the text, but doing so warped the rest of the image tremendously, and the text is kind of hard to see. If you open the image separately you can see the text up close, and it does become easier to read.
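The core of rectification is just solving for the homography that maps the four clicked corners of the panel to a rectangle. Here's a rough sketch of that step using the standard DLT setup with h33 fixed to 1 (illustrative helper names, not my actual routine):

```python
import numpy as np

def compute_homography(src, dst):
    """Solve for the 3x3 homography H mapping src points to dst points.

    src, dst: (N, 2) arrays with N >= 4 correspondences. Each pair
    contributes two rows to the linear system A h = b, with h33 = 1.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pts):
    """Apply H to (N, 2) points, dividing out the homogeneous coordinate."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

With exactly four correspondences the system is 8x8 and the fit is exact; the actual warp then inverse-maps every output pixel through H.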
After computing the Harris corner intensity value across the images, we use non-maximal suppression to obtain a series of "interest points" to be matched up. 40x40 pixel patches around each interest point are downsampled to 8x8 descriptor vectors and used to correlate interest points across pictures. RANSAC was then used to ensure outliers did not corrupt our homography estimate.
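The RANSAC step can be sketched like this: repeatedly fit a homography to four random correspondences, count how many other correspondences agree with it, and keep the model with the largest inlier set. This is a minimal illustration, not my actual code:

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares DLT fit of a 3x3 homography (h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def project(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    m = pts_h @ H.T
    return m[:, :2] / m[:, 2:3]

def ransac_homography(src, dst, n_iters=500, thresh=3.0, seed=0):
    """Keep the homography whose 4-point sample yields the most inliers,
    then refit on all inliers for the final estimate."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return fit_homography(src[best], dst[best]), best
```

Because any all-inlier 4-point sample reproduces the true mapping, even a handful of bad matches can't drag the final estimate off.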
These are the feature points generated on this picture by adaptive non-maximal suppression on Harris corner intensity. Note the even distribution.
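The even distribution comes from adaptive non-maximal suppression: each corner gets a suppression radius (the distance to the nearest sufficiently stronger corner), and we keep the corners with the largest radii. A small sketch of the idea, with an assumed robustness factor of 0.9:

```python
import numpy as np

def anms(coords, strengths, n_keep, c_robust=0.9):
    """Adaptive non-maximal suppression (a sketch, not my actual code).

    For each corner, find the distance to the nearest corner whose
    strength * c_robust still exceeds this corner's strength, then keep
    the n_keep corners with the largest such radii. This spreads points
    evenly instead of clustering them in high-contrast regions.
    """
    coords = np.asarray(coords, float)
    strengths = np.asarray(strengths, float)
    n = len(coords)
    radii = np.full(n, np.inf)  # the global maximum keeps an infinite radius
    for i in range(n):
        stronger = strengths * c_robust > strengths[i]
        if stronger.any():
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()
    return np.argsort(radii)[::-1][:n_keep]
```

Plain non-maximal suppression with a fixed radius would keep whichever corners happen to be strongest, which tends to pile points into busy areas.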
These are the feature points that pass the best/second-best ratio test for feature descriptor matching. Notice that most of the points outside the overlapping region have been removed by this step.
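The ratio test itself is simple: for each descriptor, compare the distance to its best match against the distance to its second-best match, and keep the pairing only when the best is much closer. A toy sketch with 2-D descriptors for brevity (the real ones are flattened 8x8 patches), and an assumed threshold of 0.6:

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.6):
    """Best/second-best ratio test matching between two descriptor sets.

    For each descriptor in desc_a, find its best and second-best matches
    in desc_b by squared distance; accept the match only when
    best < ratio * second_best. Ambiguous descriptors (two near-equal
    candidates) are rejected, which prunes points outside the overlap.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.sum((desc_b - d) ** 2, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best < ratio * second:
            matches.append((i, order[0]))
    return matches
```

Points with no true counterpart in the other image tend to have two equally mediocre candidates, so the ratio sits near 1 and they get thrown out.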
Here are three mosaics that were generated automatically without me entering correspondence points.
I realize it's a bit lame that I only have two-picture stitch results, but I never got around to writing a routine that handles more than two. It would be straightforward to implement, of course.