15-869 Image Based Modeling and Rendering
Programming Assignment # 2: View Interpolation
       Kiran Bhat, kiranb@cs.cmu.edu

For this assignment, I scanned a model of a bunny rabbit with the Vivid scanner. I implemented the entire algorithm on the Windows NT platform. I converted the Inventor files from the scanner into text files (on an SGI machine) and read those text files from my program. Since I had no prior experience with FLTK, I decided to implement the user interface using GLUT. My program takes input from the keyboard (the location and orientation of the new camera center) and the mouse (to display the rendered image). My warping algorithm maps each pixel in the input (source) image to a pixel in the destination image; because this forward mapping can leave destination pixels unassigned, holes appear in the output. To fill these holes, I average the pixel values over a rectangular block. The size of this window is adaptive, so that larger holes are also covered.
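The two steps described above, forward warping followed by adaptive hole filling, can be sketched roughly as follows. This is a minimal illustration, not the assignment's actual code: it assumes grayscale pixel values, a 3x4 destination projection matrix `P2`, and NaN as the hole marker; the function names are hypothetical.

```python
import numpy as np

def forward_warp(points3d, colors, P2, h, w):
    """Project each scanned 3D point through the destination camera P2
    (3x4) and splat its color into the nearest destination pixel.
    Destination pixels that receive no point remain NaN (holes)."""
    out = np.full((h, w), np.nan)
    homo = np.hstack([points3d, np.ones((len(points3d), 1))])  # N x 4
    proj = homo @ P2.T                                         # N x 3
    xy = proj[:, :2] / proj[:, 2:3]                            # perspective divide
    for (x, y), c in zip(xy, colors):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < w and 0 <= yi < h:
            out[yi, xi] = c
    return out

def fill_holes(img, max_win=7):
    """Fill each NaN hole by averaging over a rectangular block;
    the block grows until it contains at least one valid pixel,
    so larger holes get a larger averaging window (adaptive)."""
    h, w = img.shape
    filled = img.copy()
    for y in range(h):
        for x in range(w):
            if np.isnan(img[y, x]):
                for r in range(1, max_win + 1):
                    block = img[max(0, y - r):y + r + 1,
                                max(0, x - r):x + r + 1]
                    vals = block[~np.isnan(block)]
                    if vals.size:
                        filled[y, x] = vals.mean()
                        break
    return filled
```

A per-pixel growing window like this is simple but slow; a multi-resolution pull-push fill would be a natural faster alternative.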

The image used by my algorithm is shown below.

The scanner's 3D data for this image covers points on the bunny and the log beneath it, but not the background or the table.

Some results of my algorithm are given below. The sequences are rendered with the camera center translated along the x (range -100 to +100), y (range -50 to +50), and z (range -50 to +300) axes. Rotations are specified by directly changing the values of the destination image's projection matrix (P2). Even though this technique is not very user friendly, it gives a good feel for the capabilities and limitations of this algorithm, and of view interpolation in general.
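Specifying the camera motion amounts to rebuilding P2 with a shifted camera center. A minimal sketch of that idea, assuming the standard decomposition P = K [R | -RC] with hypothetical intrinsics K, rotation R, and center C (the assignment itself edits P2's entries directly rather than using a helper like this):

```python
import numpy as np

def projection_matrix(K, R, C):
    """Assemble a 3x4 projection matrix P = K [R | -R C] from
    intrinsics K (3x3), rotation R (3x3), and camera center C (3,)."""
    return K @ np.hstack([R, -R @ C.reshape(3, 1)])

# Placeholder camera: identity intrinsics and rotation at the origin.
K = np.eye(3)
R = np.eye(3)
C = np.array([0.0, 0.0, 0.0])
P2 = projection_matrix(K, R, C)

# Translating the camera center along x (e.g. -100, the left end of the
# range above) yields the destination matrix for a left-shifted view.
P2_left = projection_matrix(K, R, C + np.array([-100.0, 0.0, 0.0]))
```

Rotations would be handled the same way, by replacing R before rebuilding P2, which is more transparent than editing matrix entries by hand.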

Translation to the left and down (without filling holes)

Translation to the left and down (filled)

Zoom Out

Larger translation to the left

Zoom In

Translation to the right and up


Possible improvements:

1) A better scheme to fill the holes in the rendered image.
2) A better user interface using FLTK.