Image-Based Lighting

The goal of this project is to seamlessly render 3D models into photographs using image-based lighting techniques. Image-based lighting is a 3D rendering technique that uses measured scene radiance to composite objects with correct lighting. The mirrored-sphere approach described in Debevec's SIGGRAPH 98 paper constructs the image-based model from an omnidirectional, high-dynamic-range image (a light probe) obtained by photographing a spherical mirror.

Recovering the HDR Radiance Map from LDR Images

Source images

Exif metadata and original scenes are available here.

Log radiance

We can estimate the radiance map (rather than the intensity) of the original photos. Debevec's paper solves for the irradiance E_i in the response-function equation g(Z_ij) = ln E_i + ln Δt_j, where Δt_j is the exposure time. The following is the recovered (rescaled for visibility) map for each exposure. Ideally, they should all look the same because the only varying factor is the shutter speed Δt_j, and we can see that this is indeed the case, except for some overexposed pixels in the first (1/20s) photo.

In the naive and weighted merging stages, since we assume the response function of the camera is simply g(Z) = ln Z (i.e. pixel values are linear in exposure), the log radiance map ln E_i = ln Z_ij − ln Δt_j should look the same as the raw pixel intensity, up to the constant per-exposure offset ln Δt_j.
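As a concrete reference, here is a minimal NumPy sketch of both per-exposure estimates; the names are illustrative rather than the project's actual code. `Z` is an 8-bit LDR image, `dt` its exposure time in seconds, and `g` the 256-entry response curve recovered by the least-squares solve in Debevec's paper.

```python
import numpy as np

def log_radiance_naive(Z, dt):
    """Naive estimate: assume g(Z) = ln(Z), i.e. pixel values are linear in exposure."""
    Z = np.clip(Z.astype(np.float64), 1.0, 255.0)  # avoid log(0)
    return np.log(Z) - np.log(dt)

def log_radiance_calibrated(Z, dt, g):
    """Estimate ln(E) = g(Z) - ln(dt) using the recovered response curve g (length 256).

    Z must have an integer dtype so it can index into g directly.
    """
    return g[Z] - np.log(dt)
```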

HDR log radiance

By averaging the radiance maps from the different low-dynamic-range exposures, we can obtain a single HDR image that represents a greater range of luminance levels than any individual exposure. We can use this HDR radiance map for our image-based-lighting model.

Recovered HDR log radiance maps. Left: naive average; middle: weighted average; right: using the estimated response function.
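The merging step itself can be sketched as follows (a NumPy sketch, not the exact code used here), assuming the per-exposure log-radiance estimates from the previous section and the tent-shaped weighting w(z) = min(z, 255 − z) from Debevec's paper:

```python
import numpy as np

def merge_hdr(Z_list, dt_list, g):
    """Weighted average of per-exposure log-radiance estimates.

    Z_list  : list of 8-bit LDR images (same size, integer dtype)
    dt_list : exposure times in seconds
    g       : recovered response curve, length 256 (g[z] = ln exposure)
    """
    z = np.arange(256, dtype=np.float64)
    w = np.minimum(z, 255.0 - z)           # tent weights: trust mid-range pixels most

    num = np.zeros(Z_list[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for Z, dt in zip(Z_list, dt_list):
        num += w[Z] * (g[Z] - np.log(dt))  # weighted ln(E) contribution of this exposure
        den += w[Z]

    return num / np.maximum(den, 1e-8)     # HDR log-radiance map
```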

Panoramic transformation

The image-based-lighting format that Blender (the rendering software used here) accepts is the equirectangular projection (commonly used for projecting the world map). My approach was to compute a mapping from spherical coordinates (θ, φ) to a pixel value (basically convert to xyz Cartesian coordinates and ignore one of the dimensions), sample the pixel values over those coordinate intervals, then interpolate on the 2D plane so that θ and φ respectively correspond to the vertical and horizontal axes of the output image.
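For illustration, here is a NumPy sketch of the equivalent inverse mapping: for every equirectangular output pixel, look up the mirror-ball pixel whose surface normal is the half-vector between the view direction and the desired world direction. The forward-mapping-plus-interpolation approach described above should produce essentially the same image up to resampling. The axis conventions (camera looking down −z, +y up) are assumptions.

```python
import numpy as np

def mirrorball_to_equirect(ball, height=512, width=1024):
    """Resample a mirror-ball (light probe) image into an equirectangular map.

    Assumes the ball fills the square image `ball`, photographed by a roughly
    orthographic camera looking down the -z axis, with +y up.
    """
    H, W = ball.shape[:2]

    # Latitude theta in [0, pi] (0 = straight up) and longitude phi in [-pi, pi)
    # for every pixel of the equirectangular output.
    theta = (np.arange(height) + 0.5) / height * np.pi
    phi = (np.arange(width) + 0.5) / width * 2.0 * np.pi - np.pi
    phi, theta = np.meshgrid(phi, theta)

    # Unit world direction for each output pixel (phi = 0 points toward the camera, +z).
    dx = np.sin(theta) * np.sin(phi)
    dy = np.cos(theta)
    dz = np.sin(theta) * np.cos(phi)

    # A mirror ball reflects world direction d toward the camera where its surface
    # normal is the half-vector between d and the view vector (0, 0, 1).
    nx, ny, nz = dx, dy, dz + 1.0
    norm = np.sqrt(nx**2 + ny**2 + nz**2) + 1e-8
    nx, ny = nx / norm, ny / norm

    # The normal's (x, y) components are the ball-image coordinates in [-1, 1];
    # nearest-neighbour lookup keeps the sketch short (bilinear would be smoother).
    col = np.clip(np.round((nx + 1.0) / 2.0 * (W - 1)).astype(int), 0, W - 1)
    row = np.clip(np.round((1.0 - ny) / 2.0 * (H - 1)).astype(int), 0, H - 1)
    return ball[row, col]
```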

The following is the radiance map transformed to the equirectangular domain.

Figure: an illustration of how the sphere model captures light from all directions. Source: CS498DH lecture slide, probably originally from a SIGGRAPH 2004 presentation.

The equirectangular projection of the radiance map imported into Blender.

Rendering

Following Debevec's method, we first need to render the scene twice, with and without the inserted objects. Then, after copying the objects into the target image, we can add the additional lighting effects (shadows, for example) obtained by subtracting the two renders.
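One common formulation of this differential-rendering composite is sketched below; the names are illustrative: `render_obj` is the render with the inserted objects, `render_empty` the render of the local scene without them, `mask` the object matte, and `c` a scalar that modulates the strength of the added shadows and reflections.

```python
import numpy as np

def composite(background, render_obj, render_empty, mask, c=1.0):
    """Differential-rendering insert: keep object pixels from the object render,
    and add the lighting difference (shadows, reflections) to the background."""
    mask = mask.astype(np.float64)
    if mask.ndim == 2 and background.ndim == 3:
        mask = mask[..., None]             # broadcast the matte over color channels
    diff = render_obj - render_empty       # light the objects add to / remove from the scene
    return mask * render_obj + (1.0 - mask) * (background + c * diff)
```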

Objects downloaded from the Google 3D Warehouse:

Bells & Whistles

Photographer Removal

In order to remove the photographer's reflection on the sphere, the patch-priority-based inpainting method from the Criminisi et al. paper was implemented in MATLAB. We run the algorithm on all 10 LDR images and then re-estimate the HDR radiance map.

Animated illustration of the object removal (if it doesn't play, see the gfycat.com-hosted link here):

The algorithm decides which patch to fill next based on the gradients of the surrounding pixels, so that any incoming edges (the door frame in the example above) are filled and propagated into the hole first.
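A simplified NumPy sketch of that priority rule is given below (Criminisi's P(p) = C(p)·D(p)); it approximates the patch confidence by the per-pixel confidence and handles the boundary more crudely than the MATLAB implementation used in the project.

```python
import numpy as np

def priorities(image_gray, filled, confidence, alpha=255.0):
    """Criminisi-style priorities P(p) = C(p) * D(p) on the fill front.

    image_gray : grayscale image (float)
    filled     : boolean mask, True where pixels are known
    confidence : per-pixel confidence (1 for source pixels, 0 inside the hole initially)
    Returns an array that is zero everywhere except on fill-front pixels.
    """
    f = filled.astype(np.float64)

    # Fill front: known pixels with at least one unknown 4-neighbour.
    # (np.roll wraps at the image border; acceptable for a sketch.)
    near_hole = np.zeros_like(filled)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        near_hole |= np.roll(~filled, (dr, dc), axis=(0, 1))
    front = filled & near_hole

    # Boundary normal n_p from the gradient of the mask.
    fy, fx = np.gradient(f)
    n_norm = np.sqrt(fx**2 + fy**2) + 1e-8
    nx, ny = fx / n_norm, fy / n_norm

    # Isophote direction: image gradient rotated by 90 degrees.
    gy, gx = np.gradient(image_gray)
    iso_x, iso_y = -gy, gx

    # Data term favors strong edges flowing into the hole; the confidence term
    # favors pixels surrounded by already-known data.
    D = np.abs(iso_x * nx + iso_y * ny) / alpha
    return np.where(front, confidence * D, 0.0)
```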

Photographer removal. Before and after:

Other panoramic transformations

Additional panoramic transformations (explained here) have been implemented.

Above: angular projection
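As a point of reference, here is a small sketch that maps a unit world direction to angular-map coordinates (the radial distance in the disc is proportional to the angle from the central axis). The choice of +z as the central axis is an assumption; other light-probe conventions differ by an axis flip.

```python
import numpy as np

def direction_to_angular(d):
    """Map a unit direction d = (x, y, z) to angular-map coordinates (u, v) in [-1, 1].

    Convention assumed here: +z maps to the image center, and the angle from +z
    grows linearly with radius, reaching pi (the -z axis) at the rim.
    """
    x, y, z = d
    alpha = np.arccos(np.clip(z, -1.0, 1.0))   # angle from the +z axis, in [0, pi]
    r = np.hypot(x, y)
    if r < 1e-12:                              # straight ahead (center) or straight behind (rim)
        return (0.0, 0.0) if z > 0 else (1.0, 0.0)
    scale = (alpha / np.pi) / r                # place the point at radius alpha / pi
    return (x * scale, y * scale)
```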

The vertical cross transformation is less intuitive to implement. My approach was to convert the spherical coordinates to three azimuthal angles on the XY, YZ, and XZ planes. Two of those angles can then be used as the x and y 2D coordinates on one of the cube faces.
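For comparison, here is a sketch of the standard dominant-axis cube-map projection, which is closely related to the construction above (each cube-face coordinate is the tangent of the corresponding in-plane azimuthal angle). The per-face orientation conventions below are assumptions; assembling the vertical cross is then just a matter of per-face offsets and flips.

```python
def direction_to_cube_face(d):
    """Map a unit direction (x, y, z) to (face, u, v) with u, v in [-1, 1]."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:            # left/right faces: x dominates
        face = '+x' if x > 0 else '-x'
        u, v = z / x, y / ax             # note: z/x flips sign with the face
    elif ay >= ax and ay >= az:          # top/bottom faces (the arms of the cross)
        face = '+y' if y > 0 else '-y'
        u, v = x / ay, z / y
    else:                                # front/back faces
        face = '+z' if z > 0 else '-z'
        u, v = x / az, y / az            # the back face appears mirrored; flip as needed
    return face, u, v
```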

The following is a visualization of points on the quad sphere and the corresponding 2D points after the transformation.