Computational Photography -- Final Project -- Kelleker Guerin

There is often interest in identifying the geometry of a scene. Edge detection is an established way of examining scene geometry, but naive edge detection in a complex scene reveals little about the physical geometry: lighting and shadowing effects, especially in outdoor scenes, can cause edge detectors to mistakenly classify texture and shadow edges as geometry edges. However, given several images of a scene whose illumination changes over time, one can correctly classify the different scene edges by removing lighting effects and textures. We present a method that uses intrinsic images and time-varying intensity gradients to identify geometry edges, texture edges, and shadow edges. We first remove shadows, then examine the per-pixel gradient-intensity variance to segment normals in the scene, and use the boundaries of those segments to find geometry edges. The texture edges are then those edges that have been classified as neither geometry nor shadow.

Geometry Edge Classification
For our implementation, we used a sequence of images taken from a University of Arizona webcam, and a sequence of an idealized scene constructed in the lab. For the Arizona dataset, 50 images were taken at roughly 20-minute intervals spread over an entire cloudless day. The idealized lab scene used 14 images. In both sequences the lighting moves in an arc over the scene as time passes. Our edge segmentation begins with Weiss's method of taking X and Y derivative gradients of a sequence of images. We first convert each image to the LAB colorspace, using only the L channel throughout our implementation. We then stack all the images in the sequence into a single matrix:
M : H x W x N
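As a sketch of this step, assuming the sequence has already been loaded and converted so that each frame is a 2-D array of L-channel values (the LAB conversion itself would typically be done with a library such as OpenCV), the image matrix can be built by stacking along a new last axis:

```python
import numpy as np

def build_image_matrix(l_channels):
    """Stack N single-channel (H x W) L-channel images into an H x W x N matrix M."""
    # np.stack along a new trailing axis gives shape (H, W, N),
    # so M[i, j, :] is the length-N time vector for pixel (i, j).
    return np.stack(l_channels, axis=-1)

# Hypothetical example: 14 synthetic 4x6 "L channel" frames
frames = [np.random.rand(4, 6) for _ in range(14)]
M = build_image_matrix(frames)
print(M.shape)  # (4, 6, 14)
```

With this layout, each pixel's time-varying intensity vector is simply `M[i, j, :]`.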

This image matrix allows us to form a 1 x N time-varying intensity vector for each pixel location in the image:

M(i, j, :) = { p(i,j,1), p(i,j,2), ..., p(i,j,N) }

We then take the X and Y gradients (derivatives) of each image in the spatial domain, and combine these gradients as a sum of squares. This gives us two indications of each pixel's identity: one in the intensity domain, and one in the 2D gradient domain.
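The gradient step above can be sketched as follows; this is a NumPy illustration of the idea, not the exact code used. `np.gradient` supplies the per-image X and Y derivatives, which are combined as a sum of squares:

```python
import numpy as np

def gradient_magnitude_stack(M):
    """Given an H x W x N image matrix, return the combined squared-gradient
    stack G, where G[:, :, k] = Gx[:, :, k]**2 + Gy[:, :, k]**2 for image k."""
    # np.gradient over axes (0, 1) returns the derivatives along rows (Y)
    # and columns (X) for every frame at once.
    gy, gx = np.gradient(M, axis=(0, 1))
    return gx**2 + gy**2

M = np.random.rand(4, 6, 14)   # hypothetical 14-frame sequence
G = gradient_magnitude_stack(M)
print(G.shape)  # (4, 6, 14)
```

Each pixel now has a second length-N vector, `G[i, j, :]`, describing how its gradient magnitude varies over the sequence.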
We can use the variance of these gradient-intensity vectors to classify the edge type. Taking the variance of each pixel vector after discarding the top and bottom 25% of its values yields an outlier-rejecting variance. By thresholding this variance, we select only the pixels that correspond to edges, which are ideally geometry edges.
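A minimal sketch of the outlier-rejecting variance and threshold, assuming G is the H x W x N stack of squared-gradient images; the 25% trim fraction matches the text, while the threshold value here is purely illustrative:

```python
import numpy as np

def trimmed_variance(G, trim=0.25):
    """Per-pixel variance over time, after discarding the top and bottom
    `trim` fraction of each pixel's N gradient values (outlier rejection)."""
    H, W, N = G.shape
    k = int(N * trim)             # number of samples trimmed at each end
    s = np.sort(G, axis=-1)       # sort each pixel's time vector
    middle = s[:, :, k:N - k]     # keep only the central samples
    return np.var(middle, axis=-1)

# Hypothetical usage: threshold the variance map to keep candidate edge pixels
G = np.random.rand(4, 6, 14)
v = trimmed_variance(G)
edge_mask = v > 0.05              # illustrative threshold, tuned per dataset
print(v.shape, edge_mask.dtype)
```

Sorting each pixel's time vector and slicing off both ends implements the "throw away the top and bottom 25%" step before the ordinary variance is taken.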