Humans have a rather large field of view: we can see almost 180 degrees of the world in front of us. Most cameras, on the other hand, have a rather limited field of view, so a captured image covers only a fraction of the landscape in front of the photographer. This is where panoramic mosaicing comes into play.
The key insight is that as long as the center of projection of the camera stays the same, any set of pictures can be aligned and placed next to one another to produce a much wider field of view. This allows for wide-angle images in tight places or of vast landscapes. This process is implemented in two ways for this project. First, the concept is tested using user-defined points: the user selects two sets of corresponding points to identify which features in one image match features in the other. Then a perspective warp is applied to 'flatten' one image into the frame of the other, making it match our own perspective. Finally, the two images are blended together.
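The perspective warp above is driven by a 3x3 homography estimated from the point pairs. As a minimal sketch (not the project's actual code), the standard direct linear transform (DLT) can recover the homography from four or more correspondences using only NumPy:

```python
import numpy as np

def compute_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    via the direct linear transform (DLT). src, dst: (N, 2) arrays, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in H's entries.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the null vector of A, i.e. the last right-singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1

def warp_point(H, p):
    """Apply homography H to a 2D point p (divide out the projective scale)."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return np.array([x / w, y / w])
```

With exact correspondences the estimate reproduces the true homography; in practice the user-clicked points are noisy, and using more than four pairs lets the SVD find a least-squares fit.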
It turns out humans are not that great at picking out corresponding points, at least not accurately. So the second part of the project automates this with a feature detection and matching algorithm. The final code is a program that, given two images, will align and warp them and then stitch them together. If a larger composite image is needed, the output of one run of the program is piped back into its input, building up the mosaic.
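One common way to automate the matching step (a hedged sketch, not necessarily the exact algorithm used here) is nearest-neighbor matching of feature descriptors with Lowe's ratio test, which discards a match when the best candidate is not clearly better than the second-best:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.75):
    """Match feature descriptors from image A to image B.

    desc_a: (M, D) array, desc_b: (N, D) array of descriptor vectors.
    Returns a list of (i, j) index pairs. A match is kept only if the
    nearest neighbor's distance is well below the second-nearest's
    (Lowe's ratio test), which rejects ambiguous correspondences.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The surviving pairs then feed the homography estimation from the first part; in practice an outlier-rejection step such as RANSAC is run on top, since even ratio-tested matches contain mistakes.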
A bonus to this project is a 'Tour of a Picture.' If the assumption is made that an image shows a rectangular room, it is possible to pull apart the image and compute the geometry needed to display it in 3D. Words don't describe this very well, so just go to the end of the output section to see more!
For further reading:
The Original Project Description.
The Original Paper On Image Mosaicing.
The Original Paper on the Tour of an Image.