15-869 Assignment 2: View Transformation
I took the pixel-projection approach to McMillan's view transformation algorithm. I also implemented his visibility algorithm, which enumerates the image in up to four patches and draws them in an order determined by the projected center of projection.
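As a rough illustration of that visibility ordering, the sketch below (my own reconstruction, not the assignment's actual code) projects the destination center of projection into the source image, splits the image into up to four patches around that point, and scans each patch from its far corner toward it, so that later pixels safely overwrite earlier ones. The function name and the positive-epipole case are assumptions for illustration.

```python
# A minimal sketch of McMillan's occlusion-compatible traversal:
# split the image into up to four sheets around the projected
# center of projection (sx, sy) and scan each sheet toward it,
# yielding pixels in back-to-front order without a z-buffer.

def occlusion_compatible_order(width, height, sx, sy):
    """Yield (x, y) back-to-front for a projected COP near (sx, sy)."""
    # Clamp the split point so all four sheets are well defined.
    sx = min(max(int(sx), 0), width)
    sy = min(max(int(sy), 0), height)
    x_ranges = [range(0, sx), range(width - 1, sx - 1, -1)]   # both scan toward sx
    y_ranges = [range(0, sy), range(height - 1, sy - 1, -1)]  # both scan toward sy
    for xs in x_ranges:
        for ys in y_ranges:
            for y in ys:
                for x in xs:
                    yield (x, y)
```

Within each patch, pixels nearer the split point are emitted later, so splatting them in this order resolves visibility by overwriting alone.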
Let me state that I spent two days trying to get the rotations right: I had overlooked that the given depth values were stored as their inverses. Oh well.
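In hindsight the inverse depths fit the warp naturally, since McMillan's formulation uses generalized disparity (1/z) directly. A hedged sketch of one pixel's reprojection, under assumed 3x3 pinhole mappings P1, P2 and centers c1, c2 (not the assignment's actual matrices):

```python
import numpy as np

def warp_pixel(x1, y1, disparity, P1, P2, c1, c2):
    """Map source pixel (x1, y1) with inverse depth `disparity` into image 2.

    Illustrative only: x2 ~ P2^-1 P1 x1 + disparity * P2^-1 (c1 - c2).
    The stored inverse depth scales the translation term as-is,
    so there is no need to recover z first.
    """
    P2_inv = np.linalg.inv(P2)
    x = np.array([x1, y1, 1.0])
    r = P2_inv @ P1 @ x + disparity * (P2_inv @ (c1 - c2))
    return r[0] / r[2], r[1] / r[2]
```

With identical cameras the warp reduces to the identity, which makes for an easy sanity check while debugging rotations.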
Here is a view of the image with no transformations applied.
My hole-filling algorithm was the "cheesy" method: I interpolated across bounded runs of hole pixels (up to a threshold of 11 pixels) using the colors at the bounds, first in the x direction, then in y. Here is an image before and after hole filling.
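The fill can be sketched as follows. This is a minimal reconstruction, not my original code: it assumes a 2D list of gray values with None marking holes, finds each run of holes bounded on both sides and no longer than the 11-pixel threshold, and linearly interpolates the bounding colors across it, rows first, then columns.

```python
def fill_runs(line, threshold=11):
    """Interpolate across None-runs no longer than `threshold`, in place."""
    i = 0
    while i < len(line):
        if line[i] is None:
            j = i
            while j < len(line) and line[j] is None:
                j += 1
            # Only fill runs bounded on both sides and within the threshold.
            if 0 < i and j < len(line) and (j - i) <= threshold:
                a, b = line[i - 1], line[j]
                for k in range(i, j):
                    t = (k - i + 1) / (j - i + 1)
                    line[k] = a + (b - a) * t
            i = j
        else:
            i += 1

def fill_holes(img, threshold=11):
    """Fill along rows (x direction), then along columns (y direction)."""
    for row in img:
        fill_runs(row, threshold)
    cols = [list(c) for c in zip(*img)]
    for col in cols:
        fill_runs(col, threshold)
    return [list(r) for r in zip(*cols)]
```

Runs that touch the image border or exceed the threshold are left as holes, which matches the large gaps the method leaves in practice.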
When I applied any more hole filling beyond that, there were too many artifacts, even at low thresholds. I found that artifacts beget more artifacts, and my hole-filling algorithm is something of an archaeologist. Here we can see artifacts on Elton's shoulder, and we can see how the algorithm does a poor job of filling blank lines that run through the entire image. I think that incorporating some randomness into the line fill would have helped.
But I thought that this image was a good attempt at filling the hole in Elton's head.
Here's a view from behind.
I preferred to see the data represented without the background. My filling algorithm left large gaps.
For my interface, I used mouse input to manipulate the rotations, translations, and zooming. The scan button filled the holes; that process was too slow for real-time manipulation of the data.
Enhancements: I could try to represent the data as a surface model and render polygons accordingly. I think it would be neat, given this implementation, to flatten the polygons and produce a layered texture map. Also, I would have liked to implement a hole-filling strategy that stays within the region of known depth values. That's it.