

Spatially Hashed Photon Mapper

My original post on the photon mapper, along with the original code, can be found [here].

Introduction

One of the problems faced in traditional ray tracing is global illumination. Though a certain degree of global illumination can be achieved with the traditional method (for instance, reflection, refraction, and shadows), other effects, such as caustics and diffuse color bleeding, are not captured. Photon mapping, a technique developed by Henrik Wann Jensen, resolves these issues. Whereas traditional ray tracing only accounts for the observer's perspective in the shading model, photon mapping incorporates both the observer's and the lights' perspectives. As such, effects that are easy to compute from the perspective of the light, such as caustics, and effects that are easy to compute from the observer's perspective, such as specular reflections and refractions, are all incorporated into the photon mapping shading model.

In this blog post, we give a basic overview of photon mapping and present our implementation of it, along with a discussion of how our implementation differs from that in most of the literature. We also include some of the output images from our algorithm, separated into three buffers: the first depicting the original (ray-traced) render, the second the photon map, and the third a screen-overlay blend of the first two buffers.


Photon Mapping Basics

Photon mapping is achieved via a two-pass algorithm. The first pass computes the photon map by firing photons from each light source into the scene, in much the same way that traditional ray tracing fires rays from the camera into the scene. The photons are stored in some data structure for later aggregation. The second pass is effectively a traditional ray tracer whose shading model incorporates the cumulative photon intensity in some local neighborhood.
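To make the first pass concrete, here is a minimal sketch of photon emission. The scene, light model, and names (`emit_photons`, `light_pos`, and so on) are illustrative assumptions, not our actual interfaces; the "scene" is just the ground plane y = 0 so that intersection stays trivial.

```python
import random

def emit_photons(light_pos, light_power, n_photons):
    """Pass 1 (sketch): fire photons from a point light and record
    where they land on the plane y = 0."""
    photons = []
    share = light_power / n_photons  # each photon carries an equal share of power
    for _ in range(n_photons):
        # Uniform random direction on the unit sphere (rejection sampling).
        while True:
            d = [random.uniform(-1.0, 1.0) for _ in range(3)]
            norm2 = sum(c * c for c in d)
            if 0.0 < norm2 <= 1.0:
                break
        norm = norm2 ** 0.5
        d = [c / norm for c in d]
        if d[1] >= 0.0:
            continue  # only downward photons can hit the plane y = 0
        t = -light_pos[1] / d[1]  # ray-plane intersection parameter
        hit = [light_pos[i] + t * d[i] for i in range(3)]
        photons.append({"pos": hit, "dir": d, "power": share})
    return photons

photon_map = emit_photons(light_pos=(0.0, 5.0, 0.0), light_power=100.0, n_photons=10_000)
```

A real implementation would trace each photon through bounces off scene geometry, attenuating its power at every interaction; the sketch stops at the first hit for brevity.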

In Jensen's original photon mapping method, the photons are stored in a balanced, axis-aligned kd-tree, and each photon records its intensity, intersection position, angle of incidence, and other metadata for future aggregation. During the second pass, when a ray intersects an object, all photons within a local sphere are aggregated, and the radius of the sphere is increased until a predefined energy condition is met. Consequently, the total number of photons launched into the scene, the photon intensities, the minimum and maximum sphere radii, and the minimum total energy are all hyperparameters to tune.
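A sketch of that expanding-sphere gather, under the same illustrative photon records as above: sum photon power outward from the shading point until the energy threshold is met (or the maximum radius is hit), then divide by the disc area πr². A real implementation queries a balanced kd-tree; brute-force sorting is shown here purely for clarity, and the parameter names are assumptions.

```python
import math

def estimate_irradiance(photons, point, max_radius, min_energy):
    """Jensen-style density estimate (sketch): grow the gather sphere
    until `min_energy` is collected, then return energy / (pi * r^2)."""
    def dist2(ph):
        return sum((ph["pos"][i] - point[i]) ** 2 for i in range(3))

    # Visit photons nearest-first (a kd-tree search in practice).
    energy, r2 = 0.0, 0.0
    for ph in sorted(photons, key=dist2):
        d2 = dist2(ph)
        if d2 > max_radius * max_radius:
            break  # ran out of radius before meeting the energy condition
        energy += ph["power"]
        r2 = d2
        if energy >= min_energy:
            break
    return energy / (math.pi * r2) if r2 > 0.0 else 0.0
```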


Our Architecture

Rather than aggregating the photons in the second pass, we opt to aggregate them in the first pass using a spatial hash. By predefining a bucket size (or sphere radius), we trade spatial precision and accuracy for speed; in fact, this method achieves O(1) photon map access time. Furthermore, we no longer have to perform a nearest-neighbor search, since that information is encoded directly in the data structure. In the second pass, we compute the ray-traced image and the photon map image simultaneously, but save their values into separate buffers. Because of the spatial hash, our photon map image is rather speckled, so we apply a Gaussian blur before screen blending it with the ray-traced image.
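A minimal sketch of this aggregation, with an assumed bucket size and illustrative names: photon power is accumulated into fixed-size cells as the photons are stored, so the second pass reduces to a single dictionary lookup per shading point.

```python
from collections import defaultdict

BUCKET_SIZE = 0.1  # cell edge length; larger cells trade precision for speed

def cell(point):
    """Quantize a 3D position to its bucket's integer coordinates."""
    return tuple(int(point[i] // BUCKET_SIZE) for i in range(3))

def build_photon_hash(photons):
    """Pass 1: aggregate photon power per cell as photons are stored."""
    grid = defaultdict(float)
    for ph in photons:
        grid[cell(ph["pos"])] += ph["power"]
    return grid

def photon_intensity(grid, point):
    """Pass 2: O(1) lookup; no neighbor search is needed."""
    return grid[cell(point)]

grid = build_photon_hash(photon_map)             # photons from the pass-1 sketch
print(photon_intensity(grid, (0.3, 0.0, -0.2)))  # constant-time query while shading
```

For reference, the screen blend applied at the end combines two channels a and b as 1 - (1 - a)(1 - b), which only ever brightens the ray-traced image wherever the blurred photon map carries energy.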