Final Project: Time Lapse Relighting of a Scene

Danielle Millett

The idea of my final project is to remove the existing sky from an image and relight the image using an HDR image of the sky. Using HDR images of the sky at different times during the day, it is possible to create a series of relit images at different times of day. These images can then be strung together into a time-lapse animation of the sun moving throughout the day.

Because an image is only a 2D representation of a scene, I use a method similar to Tour into the Picture to create a 3D representation of the image. This 3D representation has a floor, left and right walls, and a back wall, but no ceiling (since the scene is assumed to be outdoors). The sky in the original image is removed using an alpha channel, and the original sky is assumed to provide constant illumination. This constant illumination is then factored out of the model, allowing a new sky to be inserted to relight the image.
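
As a rough illustration of this step, here is a minimal sketch with hypothetical helper names (not the code used in the project): it factors an assumed-constant sky term out of the unmasked pixels, by division, leaving a reflectance-like image that a new sky can relight.

import numpy as np

def remove_constant_sky(image, alpha, overcast_irradiance):
    # image: float RGB array; alpha: 1 where the scene is visible,
    # 0 where the sky was masked out; overcast_irradiance: the
    # assumed-constant RGB irradiance of the original overcast sky.
    reflectance = np.zeros_like(image)
    mask = alpha > 0
    # Divide the constant illumination out of scene pixels only.
    reflectance[mask] = image[mask] / overcast_irradiance
    return reflectance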

An ideal input image would be a city scene on an overcast day with single-point perspective. It would also include the tops of the buildings in the frame so that shadows can be determined more accurately.
Here are some examples of this 3D construction from the original image:

[Figures: original image and alpha mask; user-selected back plane and vanishing point; calculated planes for each wall; 3D representation of the city with the sky removed]


To relight the scene I used HDR light probe images from Jean-Francois Lalonde's research. Each light probe was rendered from a web camera image at a different time of day. I used 151 images over the course of the day; a sampling of them can be seen in the tone-mapped versions below. The colors aren't entirely accurate due to distortions introduced when tone mapping from HDR to RGB.

[Figure: sixteen tone-mapped sky probes sampled across the day (skyProbe1.jpg through skyProbe16.jpg)]


Using formulas from Paul Debevec, I converted these light probe representations into spherical world coordinates. An example of this can be seen below:


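As a concrete sketch of this conversion, here is my reading of the angular-map formulas from Debevec's light probe page (treat the exact sign conventions as an assumption): a pixel at normalized coordinates (u, v) in [-1, 1]^2 maps to a unit world direction as follows.

import numpy as np

def probe_pixel_to_direction(u, v):
    # Angular map: theta = atan2(v, u), phi = pi * sqrt(u^2 + v^2),
    # where phi is the angle away from the probe's central direction.
    theta = np.arctan2(v, u)
    phi = np.pi * np.sqrt(u * u + v * v)
    return np.array([
        np.sin(phi) * np.cos(theta),
        np.sin(phi) * np.sin(theta),
        np.cos(phi),
    ])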
Now that I have both a 3D representation of the image and a representation of the sky that can be used to light it, the core part of the project is calculating what part of the sky is visible to each pixel. For each pixel I created a visibility map of the same size as the sky light probe image. To do this I set up a 3D model of the image with a sky dome surrounding it, centering the dome on the x and z coordinates of the 3D image model. To make the sky effectively arbitrarily far away, I set the radius of the dome to 50 times the largest dimension of the 3D image model; at that distance, the angle from any image point to a sky pixel is essentially the same as the angle from the center of the dome to that pixel.
To create the visibility map for a given point in the image I did the following (a simplified code sketch appears after the list):
1. Cast a ray from the image point in 3D space to each pixel on the sky dome.
2. Compare the angle of this ray with the surface normal at the point. This was calculated as rayDir dot normal.
  -If this value is less than zero, the ray comes from behind the surface and therefore does not light the point.
  -Otherwise the ray may light the point, provided it is not blocked by another surface.
3. Check that another wall doesn't block this ray.
  -Loop through each of the 3 other walls and find where the ray intersects the plane each wall lies in.
  -If the ray intersects a plane before reaching the sky, check whether the intersection lands where there are buildings.
  -To check this I passed in a coded image that has each wall represented by a number and any part with alpha=0 represented by a 0 (an example can be seen below).
4. If the ray isn't obstructed, then put the value of rayDir dot normal in the visibility map.
  Otherwise, put a value of 0 in the visibility map.
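
Here is a simplified sketch of that loop, with hypothetical data structures: point and normal describe one image pixel in the 3D model; sky_dirs holds a unit direction toward each sky-probe pixel (with the dome radius 50 times the model size, these directions are effectively the same from every image point); walls is a list of (plane_point, plane_normal, is_blocked) tuples, where is_blocked consults the coded image described in step 3.

import numpy as np

def visibility_map(point, normal, sky_dirs, walls):
    # Hypothetical sketch of the per-pixel visibility loop, not the
    # exact code used in the project.
    H, W, _ = sky_dirs.shape
    vis = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            ray = sky_dirs[i, j]
            cos_term = ray @ normal
            if cos_term <= 0.0:
                continue  # step 2: ray comes from behind the surface
            blocked = False
            for plane_point, plane_normal, is_blocked in walls:
                denom = ray @ plane_normal
                if abs(denom) < 1e-9:
                    continue  # ray is parallel to this wall's plane
                t = ((plane_point - point) @ plane_normal) / denom
                # step 3: an intersection in front of the point that
                # lands on a building blocks the ray
                if t > 1e-6 and is_blocked(point + t * ray):
                    blocked = True
                    break
            if not blocked:
                vis[i, j] = cos_term  # step 4: store rayDir dot normal
    return vis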

[Figure: visibility map]


Results from this can be seen below:
[Figures: coded image for each wall and alpha=0; coded image overlaid on top of the original image]

Here is the visibility map for a given point, (400, 425), marked on the left image below. The visibility map is shown at several different rotations, with the color coding running from blue=0 to red=1.0. You can see that the high points in the sky have the most direct visibility to this floor point, and that the closer you get to the horizon, the shallower the angle and so the lower the visibility. You can also see the outline of where the buildings block the sky from this point.

Relighting the Image

To relight the image, I used the visibility map and the HDR sky to determine how much each part of the sky lights each point of the image. Since light is additive, I summed each sky pixel's contribution to get the total light reaching a pixel in the image. To determine the final pixel color, I multiplied the amount of light reaching the pixel by the pixel's original color. The new sky is then composited into the image in place of the original sky, resulting in a newly lit image. The skies I inserted into the images were Jean-Francois Lalonde's original skies taken from web camera images. Using multiple HDR skies over time, I created an animation of the sun moving throughout the day.
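
A minimal sketch of this per-pixel sum, again with hypothetical names: visibility is the map computed above, sky_hdr is the HDR probe at the same resolution, and any per-pixel solid-angle weighting is folded into a single constant.

import numpy as np

def relight_pixel(original_color, visibility, sky_hdr, solid_angle=1.0):
    # Light is additive: sum each sky pixel's HDR radiance, weighted
    # by its visibility from this image point (0 if occluded,
    # rayDir dot normal otherwise).
    incoming = (visibility[..., None] * sky_hdr).sum(axis=(0, 1)) * solid_angle
    # New color = incoming light scaled by the pixel's original color.
    return original_color * incoming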

Image Results

Here is a movie of the final results over the course of a day: