For this project, I attempted to reverse some of the work we did earlier in
the course. Earlier, we implemented affine transformations, projective
transformations, and other types of image warping. For this project, my goal
was to detect whether one image is a transformed version of another.
The algorithm performs several comparisons, each designed to detect a particular class of transformation. First, I compare the MD5 hashes of the two image files to determine whether they are byte-for-byte identical; this short-circuits the remaining tests when an image is compared with itself. Next, I perform color-histogram analysis on both images. This can be tricky when the images use different color models (RGB, CMYK, and so on), but it often detects images whose pixels have been rearranged or only slightly modified. The next step computes the sum of squared differences (SSD) between pixel intensities in the two images, which is again useful for detecting small changes; this step requires warping one image into the geometry of the other to align them (done with ImageMagick). Finally, I run the SIFT algorithm to find point correspondences between the two images. From those correspondences I estimate a projective transform mapping one image onto the other, then perform another SSD test to see whether the result is a good match.
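The first three comparisons can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the helper names are mine, grayscale pixel lists stand in for real image data, and the real pipeline used ImageMagick and full-color histograms.

```python
import hashlib

def files_identical(bytes_a: bytes, bytes_b: bytes) -> bool:
    # Step 1: MD5 comparison catches byte-for-byte identical files cheaply.
    return hashlib.md5(bytes_a).hexdigest() == hashlib.md5(bytes_b).hexdigest()

def histogram(pixels, bins=16):
    # Normalized intensity histogram (grayscale 0-255 for simplicity).
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def histogram_distance(pixels_a, pixels_b):
    # Step 2: L1 distance between histograms. Invariant to pixel
    # rearrangement, so it flags images with moved but unchanged pixels.
    return sum(abs(a - b) for a, b in zip(histogram(pixels_a), histogram(pixels_b)))

def ssd(pixels_a, pixels_b):
    # Step 3: sum of squared differences between (already aligned)
    # pixel intensities; sensitive to small local edits.
    return sum((a - b) ** 2 for a, b in zip(pixels_a, pixels_b))
```

Note that the histogram test and the SSD test fail in complementary ways: shuffling pixels leaves the histogram unchanged but produces a large SSD, while a uniform brightness shift changes the histogram but may keep the SSD small, which is why the algorithm runs both.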
The weight that the algorithm places on each of these components when computing the overall probability of a match is learned from data. I downloaded over 2000 images from Flickr for training, and used ImageMagick to produce alternate versions of each with various transformations, blurs, and scalings, so that the algorithm could learn how to weight the individual probabilities (specifically, the algorithm is a Bayesian learner).