The goal is to match the color channels. Because each plate records a different wavelength, I could not assume that the R channel would be high wherever the B or G channels were high. Therefore, instead of matching raw intensities, I matched the locations where intensity changes, i.e. edges, which I found with horizontal and vertical Sobel filters.
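As a sketch of this first step, the two Sobel filters are 3x3 kernels (one the transpose of the other) convolved with a single channel. The function name `sobel_responses` and the use of `scipy.signal.convolve2d` are my own choices here, not necessarily what the original code used:

```python
import numpy as np
from scipy.signal import convolve2d

# Horizontal and vertical 3x3 Sobel kernels; SOBEL_Y is the transpose of SOBEL_X.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_responses(channel):
    """Return the horizontal and vertical Sobel responses of one channel."""
    gx = convolve2d(channel, SOBEL_X, mode="same", boundary="symm")
    gy = convolve2d(channel, SOBEL_Y, mode="same", boundary="symm")
    return gx, gy
```

For a vertical step edge, `gx` is large along the edge while `gy` stays near zero, which is exactly the per-direction signal the alignment works with.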
First, I normalized the Sobel responses: the positive coefficients of each 3x3 Sobel kernel sum to 4, so for pixel values in [0, 1] the raw response lies in [-4, 4], and dividing by 4 brings it into [-1, 1]. Squaring then maps it into [0, 1], with edges producing high values. To further emphasize the edges, I binarized the result with a threshold. The displacement is calculated as the shift that minimizes the sum of squared differences (SSD) between the edge maps of the B channel and the G or R channel.
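The edge map and the SSD search might look like the following sketch. The threshold value `0.1`, the search `radius`, the use of wrap-around shifting via `np.roll`, and the function names are all my assumptions, not details taken from the original code:

```python
import numpy as np
from scipy.ndimage import sobel

def edge_map(channel, threshold=0.1):
    """Normalized, squared Sobel strength, binarized with a threshold.

    `channel` is a float image in [0, 1]. Each Sobel kernel's positive
    coefficients sum to 4, so the raw response lies in [-4, 4]; dividing
    by 4 maps it to [-1, 1] and squaring maps it to [0, 1].
    """
    gx = sobel(channel, axis=1) / 4.0
    gy = sobel(channel, axis=0) / 4.0
    strength = gx ** 2 + gy ** 2          # edges -> high values
    return (strength > threshold).astype(float)

def best_displacement(ref_edges, mov_edges, radius=15):
    """Exhaustive search for the (dy, dx) shift minimizing the SSD
    between two edge maps. np.roll wraps around at the borders, which
    is a simplification; cropping the borders would be more faithful."""
    best, best_score = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(mov_edges, dy, axis=0), dx, axis=1)
            score = np.sum((ref_edges - shifted) ** 2)
            if score < best_score:
                best, best_score = (dy, dx), score
    return best
```

Matching binarized edge maps rather than raw intensities is what makes the SSD comparable across channels with very different brightness.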
Using an image pyramid that scales each axis by 50% per level, I first ran an exhaustive search with a 31x31 window on the coarsest image to find an approximate displacement. Then, at each finer level, I doubled the previous estimate and applied the algorithm again, using it to narrow the search window.
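The coarse-to-fine scheme can be sketched as below. The refinement radius of 2, the 2x2 block-average downsampling, plain-intensity SSD in place of the edge-map SSD, and the function names are all my assumptions made to keep the sketch self-contained:

```python
import numpy as np

def downsample(img):
    """Halve each axis by 2x2 block averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def ssd_search(ref, mov, center, radius):
    """SSD search over shifts within `radius` of `center` = (dy, dx)."""
    best, best_score = center, np.inf
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            score = np.sum((ref - shifted) ** 2)
            if score < best_score:
                best, best_score = (dy, dx), score
    return best

def pyramid_align(ref, mov, levels=4):
    """Coarse-to-fine alignment: 31x31 search at the top, then a narrow
    refinement at each finer level around the doubled estimate."""
    pyramid = [(ref, mov)]
    for _ in range(levels - 1):
        r, m = pyramid[-1]
        pyramid.append((downsample(r), downsample(m)))
    # 31x31 window at the coarsest level = radius 15 around (0, 0).
    dy, dx = ssd_search(*pyramid[-1], center=(0, 0), radius=15)
    for r, m in reversed(pyramid[:-1]):
        # Each finer level doubles the resolution, so double the estimate
        # and only search a small window around it.
        dy, dx = ssd_search(r, m, center=(2 * dy, 2 * dx), radius=2)
    return dy, dx
```

The payoff is cost: the expensive 31x31 search runs only on the smallest image, while each full-resolution level needs just a 5x5 refinement.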
Original | Picture | Offsets (green and red)
---|---|---
Link | | (0, 51), (-11, 123)
Link | | (-5, 37), (-11, 89)
Link | | (26, 46), (31, 109)