New Ideas in Raytracing

September 29, 2012 · Theory · 5 comments

I’m no expert on raytracing, but I do try to keep up somewhat with research in that field. In the past few years there have been a couple of big new ideas floated that I’m quite intrigued by: stochastic progressive photon mapping (SPPM), and divide-and-conquer raytracing (DACRT).

These two ideas are quite different, and are orthogonal to each other, being concerned with different aspects of the raytracing problem. However, they have a goal in common: they both take aim at the memory consumption problem of traditional raytracing. SPPM—and other progressive photon mapping methods more generally—cut down the memory used to store a photon map for lighting. DACRT cuts down (or indeed eliminates altogether) the memory used to store acceleration structures for scene geometry. And both methods do this by exploiting the same insight: they’re explicitly designed to process large batches of rays as a group, rather than considering rays to exist in isolation. The traditional approach to raytracing tries to optimize for any possible set of rays, but SPPM and DACRT are both designed to respond to the set of rays you actually have.

Taken together, SPPM and DACRT raise the tantalizing possibility of a raytracing architecture that can operate with much less memory than previous approaches—not very much more than the memory needed just to store the scene description and the output image. Moreover, the accuracy of the resulting image need not be limited by memory, only by computation time and numerical precision.

Disclaimer: if you’re an expert in raytracing, none of this will be news to you; I don’t claim to have made anything like a complete survey of the work that’s been done, and I’ve glossed over plenty of technical details. That said, if I’m egregiously wrong about something here, please do let me know in the comments. :)

Why Photon Mapping?

I’ll discuss SPPM first—but before I can do that, I need to say a bit about regular photon mapping and progressive photon mapping (PPM), and to get there, I need to first mention path tracing.

Path tracing, invented by Jim Kajiya in 1986, is a fully general rendering method that is in principle capable of simulating all illumination and imaging phenomena that exist in the geometric optics regime—reflection and refraction (sharp or blurry), caustics, soft shadows, indirect lighting, defocus and motion blur, internal lens reflection, you name it—in any scene that obeys the laws of physics. It works by generating and averaging random samples of the set of all paths light can take through the scene, from a light source to a pixel on the image, with any number of scattering events between. The only source of error (aside from bugs in the implementation) is the statistical variance introduced by the random sampling procedure, which decreases in a predictable way as you accumulate more and more samples. If you let the algorithm run long enough (and you calculate with sufficient numerical precision), you’ll eventually get a result that’s noise-free to the precision you need.
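The statistical behavior described above is easy to see in miniature. The toy estimator below stands in for a path tracer: each call to the (hypothetical) sample function plays the role of tracing one random light path, and the pixel value is just the running average. With more samples, the noise shrinks in the predictable 1/√n fashion:

```python
import random

def mc_estimate(sample_fn, n):
    """Average n independent random samples of sample_fn()."""
    return sum(sample_fn() for _ in range(n)) / n

# Toy integrand standing in for "trace one random path to this pixel".
# Its true expected value is 0.5; individual samples are noisy.
random.seed(42)
def one_path_sample():
    return random.random()

coarse = mc_estimate(one_path_sample, 100)      # noisy estimate
fine = mc_estimate(one_path_sample, 100_000)    # much closer to 0.5
```

The error of `fine` is roughly 30× smaller than that of `coarse`, since standard error falls as 1/√n — which is also why the difficult cases discussed next, where variance per sample is huge, are so costly.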

Although path tracing can theoretically sample all possible light transport, it doesn’t necessarily sample all paths efficiently. Some scenes with complex light transport will generate a large amount of statistical variance, as most paths will carry only a small amount of light, but a small fraction of paths will carry large amounts of light. This greatly increases the number of samples, and therefore the computation time, needed to get a noise-free result. An example of the sort of thing that can cause problems is an SDS path. Here, S stands for specular and D for diffuse. An SDS path is one in which the light travels to the image via (at least) three bounces: a specular one, a diffuse one, and another specular one. For example, consider a glass lens focusing light into a caustic on a diffuse wall, which the camera then sees via a mirror reflection.

This is a very difficult case for path tracing because it depends on two specular paths, from the light source and the camera respectively, hitting the same point on the diffuse surface. If the path is built starting from the camera, it is very unlikely that a random direction sampled from the diffuse BRDF will hit the lens at just the right spot to make it to the light source. Even bidirectional path tracing doesn’t help here, since it is likewise unlikely for a path from the light and a path from the camera to join up with each other. Each half of the path can be built easily, but building just the right two halves at the same time is very unlikely.

Photon mapping, developed by Henrik Wann Jensen in 1995–96, solves this problem by building many light paths ahead of time and caching them in a large data structure. Then, rendering proceeds by normal path tracing, except that when lighting a diffuse surface, indirect illumination is estimated by the photon map instead of by tracing more of the path. Since we cannot expect photons (light paths) to land precisely at the points we wish to query, we estimate the nearby photon density by summing the 50–100 or so photons nearest to the query point. This means that indirect illumination is effectively blurred, with a radius that depends on the local photon density. Where photons are thick, the radius will be small; where they are thinly spread, the radius will be large.
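To make the density estimate concrete, here is a minimal sketch (my own illustration, not Jensen’s actual implementation) of a k-nearest gather on a 2D surface patch. The blur radius falls directly out of the distance to the k-th nearest photon, which is what makes it adapt to the local photon density:

```python
import math

def photon_density_estimate(photons, query, k):
    """Estimate flux-per-area at `query` from the k nearest photons.

    photons: list of ((x, y), flux) pairs on a surface (2D for brevity).
    Uses a disc whose radius is the distance to the k-th nearest photon,
    so the blur radius shrinks where photons are dense and grows where
    they are sparse. Returns (estimate, radius).
    """
    nearest = sorted(
        (math.dist(p, query), flux) for p, flux in photons
    )[:k]
    r = nearest[-1][0]                         # radius enclosing k photons
    total_flux = sum(flux for _, flux in nearest)
    return total_flux / (math.pi * r * r), r
```

A real photon map would use a kd-tree rather than a sort over all photons, but the estimate itself is the same.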

Crossing The Streams

Decoupling the light paths from the camera paths solves the SDS problem, as well as generally speeding up renders with complex lighting, since each photon can be reused for many camera paths. However, it introduces another problem: a quite large amount of extra memory—often hundreds of megabytes—is required to store the photon map. Moreover, since only a finite number of photons can be kept in memory at a time, the error in the output image no longer goes to zero as rendering time increases. For the sampling process to converge to the mathematically correct result, we would have to increase the number of photons without bound as rendering goes on.

What about batching the photons—that is, shoot some photons, render for a while, then shoot a fresh set of photons and render some more, and so on? This doesn’t work either, because of the blur radius based on the local photon density. This procedure will converge to an image depicting a blurred version of the indirect lighting, where the blur radius is determined by the number of photons per batch, which is limited by the memory available to store them. In practice, of course, indirect lighting is often soft anyway, so a somewhat blurred result may be fine for many applications; nevertheless, we’d still like to do better.

A new idea in this direction was introduced by Hachisuka, Ogaki, and Jensen in 2008, as part of their progressive photon mapping (PPM) algorithm. PPM turns the classic method of photon mapping on its head. Where the classic method caches finitely many light paths, then “streams” (builds, evaluates, and discards) an unlimited number of camera paths, PPM operates by caching finitely many camera paths, then streaming an unlimited number of photons. The interesting thing about this is that it allows us to maintain a current blur radius for each camera path, and as more and more photons are accumulated, that radius can be decreased. With some care for the details, we can ensure that the number of photons within the radius grows without bound even as the radius goes to zero, so that the result converges to the true indirect lighting, without blur or variance. The current blur radius and other state maintained per camera path are referred to as “local photon statistics” in the literature.
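Concretely, the per-hit-point update in the PPM paper looks something like the sketch below (parameter names are mine; `alpha` is the paper’s fraction of new photons to retain, commonly set around 0.7, and I assume at least one photon arrives each pass):

```python
import math

def ppm_update(N, R, tau, M, tau_M, alpha=0.7):
    """One progressive-photon-mapping statistics update for a hit point.

    N, R, tau: accumulated photon count, current search radius, and
    accumulated (unnormalized) flux. M, tau_M: count and total flux of
    the photons that landed inside R during this pass. alpha: fraction
    of new photons to keep, which controls how fast the radius shrinks.
    """
    N_new = N + alpha * M
    # Shrink the radius so the retained photon density stays consistent.
    R_new = R * math.sqrt(N_new / (N + M))
    # Rescale the accumulated flux to the smaller disc.
    tau_new = (tau + tau_M) * (R_new * R_new) / (R * R)
    return N_new, R_new, tau_new
```

With `alpha < 1`, the radius goes to zero while the photon count grows without bound — which is exactly the property the convergence argument rests on.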

However, PPM only calculates the correct result for each of the finitely many camera paths used to initialize the algorithm. The camera paths all have to be stored in memory at once, so we have a turned-around version of the same old problem. This is an issue if you want to use effects like defocus blur or glossy reflection, which may require very large numbers of camera paths.

The next logical thing to do would be to see if we could somehow stream both the camera paths and the photons. And indeed, Hachisuka and Jensen figured out how to do this with stochastic progressive photon mapping (SPPM), introduced in 2009. SPPM alternates between camera batches and photon batches: first we build and cache a bunch of camera paths, then we accumulate a bunch of photons onto those paths; then we build a fresh set of camera paths and accumulate a fresh set of photons, and so on. The only data preserved from one cycle to the next is the local photon statistics for each pixel in the rendered image. In their paper, the authors show that the per-pixel blur radius can go to zero, even as both the photons and the camera paths are periodically being replaced. It is an elegant and satisfying result. Moreover, we need only enough extra memory for one batch of camera paths. The algorithm will converge faster with large batches, but still functions correctly with small ones.
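Structurally, the loop looks something like this sketch (everything scene-related is stubbed out; the point is only that the per-pixel statistics are the sole state carried between batches, and that the radius keeps shrinking even as both batches are repeatedly replaced):

```python
import math

def sppm_render(num_pixels, passes, photons_per_pass, alpha=0.7):
    """Control-flow sketch of SPPM. Tracing of camera paths and photons
    is stubbed out; only the batch alternation and bookkeeping remain."""
    # The ONLY state that survives from one pass to the next:
    stats = [{"N": 0.0, "R": 1.0, "tau": 0.0} for _ in range(num_pixels)]
    for _ in range(passes):
        # 1. Build a fresh batch of camera paths (stub: one hit point
        #    per pixel would be traced here).
        # 2. Stream a fresh batch of photons onto those hit points.
        for s in stats:
            M = photons_per_pass      # stub: photons found inside R
            tau_M = float(M)          # stub: their total flux
            N, R, tau = s["N"], s["R"], s["tau"]
            s["N"] = N + alpha * M
            s["R"] = R * math.sqrt(s["N"] / (N + M))   # radius shrinks
            s["tau"] = (tau + tau_M) * s["R"] ** 2 / R ** 2
        # 3. Discard both batches; only `stats` persists to the next pass.
    return stats
```

Memory use is bounded by one camera batch plus the fixed per-pixel statistics, no matter how long rendering runs.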

To recap: path tracing requires essentially no extra memory beyond the scene (including any acceleration structures) and the output image, but can be very inefficient in some cases of complex light transport, such as SDS paths. Classic photon mapping and PPM can handle those cases much more efficiently, but at the cost of requiring large amounts of extra memory—and the better the image you want, the more memory you might have to spend. Finally, SPPM achieves the best of both worlds—bounded memory usage plus the power of photon mapping for handling complex lighting.

Quicksort for Raytracing

That’s enough about photon mapping for now. Besides SPPM, the other big new idea I’m excited about is divide-and-conquer raytracing (DACRT). Far from such elevated affairs as illumination models and efficient sampling, DACRT is concerned with the bedrock of raytracing: efficiently locating intersections between rays and the scene geometry.

For most of the history of raytracing, the only viable known approach to intersection testing in a large scene was to build an acceleration structure of some sort. There are various kinds of acceleration structures people use, but they all involve subdividing the space occupied by the scene, often organizing it hierarchically, and grouping the triangles or other primitives into the subdivision cells. We can then cheaply test a ray against the bounding volume of a cell, and only if it passes do we proceed to the detailed intersection test with the individual primitives. Therefore, in many cases, large volumes of the scene can be skipped for a given ray, and it need only be tested against a fraction of the primitives.
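The cheap test against a bounding volume is typically a “slab” test against an axis-aligned bounding box. A minimal version (a standard construction, not tied to any particular raytracer) might look like:

```python
import math

def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test: does origin + t * direction (t >= 0) enter the AABB?

    Clips the ray's t-interval against the slab between box_min and
    box_max on each axis; the ray hits iff the interval stays non-empty.
    """
    t_near, t_far = 0.0, math.inf
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            # Ray is parallel to this slab: must already lie inside it.
            if not lo <= o <= hi:
                return False
        else:
            t0, t1 = (lo - o) / d, (hi - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_near, t_far = max(t_near, t0), min(t_far, t1)
            if t_near > t_far:
                return False
    return True
```

A hierarchical acceleration structure applies this test at each node, and a miss prunes everything beneath that node at once.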

In DACRT, no acceleration structure is built in advance. Instead, the algorithm takes a flat list of geometric primitives and a flat list of rays to intersect with them, and recursively subdivides both, grouping the rays into those that intersect the bounding volume of each portion of the subdivided scene. The recursion continues until either the set of rays or the set of primitives is small enough, at which point it falls back to testing rays directly against individual primitives. The overall structure of the algorithm closely resembles that of Quicksort—the paradigmatic divide-and-conquer algorithm.
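Here is a deliberately tiny sketch of that structure — my own toy, not the published algorithm. Primitives are spheres, and every ray travels along +z from the z = 0 plane, so “can this ray reach this group?” reduces to a 2D bounds check; a real implementation would use a full ray-vs-box test and smarter splitting heuristics:

```python
import math

def hit_sphere(o, c, r):
    """Nearest t >= 0 where the ray from o = (x, y, 0) travelling along
    +z hits sphere (center c, radius r), or None on a miss."""
    dx, dy = o[0] - c[0], o[1] - c[1]
    disc = r * r - dx * dx - dy * dy
    if disc < 0:
        return None
    t = c[2] - math.sqrt(disc)
    return t if t >= 0 else None

def xy_bounds(spheres):
    """2D bounding rectangle of a group of spheres, in the xy plane."""
    lo = tuple(min(c[i] - r for c, r in spheres) for i in (0, 1))
    hi = tuple(max(c[i] + r for c, r in spheres) for i in (0, 1))
    return lo, hi

def dacrt(spheres, rays, hits, leaf=4):
    """rays: list of (ray_id, origin); hits: dict ray_id -> nearest t.
    Co-partitions primitives and rays, quicksort-style; nothing but the
    recursion stack is ever stored."""
    if len(spheres) <= leaf or len(rays) <= leaf:
        # Conquer: brute-force every remaining ray against every sphere.
        for i, o in rays:
            for c, r in spheres:
                t = hit_sphere(o, c, r)
                if t is not None and t < hits.get(i, math.inf):
                    hits[i] = t
        return
    # Divide: split the spheres at the median center along the wider axis.
    lo, hi = xy_bounds(spheres)
    axis = 0 if hi[0] - lo[0] >= hi[1] - lo[1] else 1
    spheres = sorted(spheres, key=lambda s: s[0][axis])
    mid = len(spheres) // 2
    for half in (spheres[:mid], spheres[mid:]):
        blo, bhi = xy_bounds(half)
        # Keep only the rays whose (x, y) lies inside this half's bounds
        # (sufficient here because every ray travels straight along +z).
        sub = [(i, o) for i, o in rays
               if blo[0] <= o[0] <= bhi[0] and blo[1] <= o[1] <= bhi[1]]
        if sub:
            dacrt(half, sub, hits)
```

Note that rays missing every group simply fall out of the recursion, and heavily-traversed regions get subdivided more finely — the implicit hierarchy is shaped by the rays actually present.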

DACRT is exciting because it eliminates the large amount of memory—again, often hundreds of megabytes for a large scene—that a traditional raytracer would have to devote to storing the acceleration structure. DACRT does build an acceleration structure implicitly in the course of its execution, but this is never stored in its entirety—only a small stack of nodes awaiting refinement need be stored, and the actual ray and primitive data can be sorted in place. Moreover, the entire scene does not need to be subdivided equally, as in a precomputed acceleration structure; DACRT will subdivide less in parts of the scene where there are few rays. Therefore it can spend its time on the parts of the scene that matter most.

Moreover, DACRT is already competitive in performance versus a traditional raytracer, if the time to build an acceleration structure and render one frame are both counted. (It will not be a win if the acceleration structure can be built once and re-used for many images—but this scenario is becoming less relevant, as highly dynamic scenes require rebuilding the acceleration structure each frame.)

There has been an enormous amount of research into acceleration structures over the years, so it is a very well-understood problem. In light of this, it’s slightly shocking that such a simple idea as DACRT apparently wasn’t thought of until 2011, when it was independently introduced by Benjamin Mora (thanks to Stephen Hill for the link) and Keller and Wächter. Some more work was done recently on the subject by Attila Áfra. Since DACRT is so new, only a little work has been done on it so far, but it strikes me as a fertile idea and I hope more researchers will dig into it over the coming years.

Conclusion

Raytracing is very far from a solved problem; as we’ve seen, new and interesting ideas about it continue to be developed. Serious raytracing has long been a memory-intensive affair, with acceleration structures and photon maps consuming large amounts of memory even for relatively simple scenes—which of course reduces how much scene data can fit into memory, and increases the need for out-of-core rendering methods. DACRT and SPPM both appear to allow large reductions in memory consumption for not very much performance cost, which is a welcome development for many reasons. It will be exciting to see how these techniques get put to use and what other advancements may be just around the corner.


5 comments on “New Ideas in Raytracing”

Anton wrote:
September 30, 2012

Hey thanks for the nice article, coincidentally we were getting excited about these recently at MM towers too. :)

Rich wrote:
October 1, 2012

A good write up of both techniques – I’ll take the time to do some further reading! Reminds me of Geometry Presorting for Implicit Object Space Partitioning (http://graphics.tu-bs.de/publications/Eisemann12IOSP/) where the whole BVH was implicit in the geometry ordering – although it was a bit slower, you could spend just a little memory on a much reduced BVH to optimize the first few levels. It was presented at EGSR this year – I’ve got some notes here: http://dickyjim.wordpress.com/2012/07/05/egsr-2012/

Simon wrote:
October 1, 2012

Knaus and Zwicker showed that the tracking of local statistics in SPPM is unnecessary, making the implementation much simpler. See their paper for more details: http://www.cgg.unibe.ch/publications/2011/progressive-photon-mapping-a-probabilistic-approach

Stephen Hill wrote:
February 16, 2013

Here’s a public link to Benjamin Mora’s DACRT paper:
http://cs.swansea.ac.uk/visualcomputing/?p=130

Nathan Reed wrote:
February 18, 2013

Thanks, Stephen! I’ve updated the article with that link.

