# Radiometry Versus Photometry

August 17, 2014 · Theory

Note: this post is adapted from an answer I wrote for the Computer Graphics StackExchange beta, which was shut down a couple of weeks ago. A dump of all the CGSE site data can be found on Area 51.

In computer graphics, we deal a lot with various radiometric units, such as flux, irradiance, and radiance, which quantify light in various ways. But there’s a whole other set of units for light, called photometric units, that also show up sometimes. It’s important to understand the relationship between radiometry and photometry, and when it’s appropriate to use one or the other.

# Electronic Music Box

July 11, 2014 · Eye Candy, Practical

When I was in college, I took a one-semester electronics course, and for a final project, we had a few weeks to build pretty much whatever we wanted. As a computer-science major (the only one in the class), obviously I jumped at the chance to do something with microprocessors and digital logic. I ended up building an electronic music box—a device that plays a preprogrammed song over headphones, using a PIC microcontroller.

It was an interesting journey into the (to me) strange and mysterious world of hardware and embedded software, and I got to do some fun things along the way, like etching and soldering my own circuit board. I came across the files again the other day and thought I’d share it.

# Artist-Friendly HDR With Exposure Values

June 4, 2014 · GPU, Practical

Unless you’ve been living under a rock, you know that HDR and physically-based shading are practically the defining features of “next-gen” rendering. A new set of industry-standard practices has emerged: RGBA16f framebuffer, GGX BRDF, filmic tonemapping, and so on. The runtime rendering/shading end of these techniques has been discussed extensively, for example in John Hable’s talk Uncharted 2: HDR Lighting and the SIGGRAPH Physically-Based Shading courses. But what I don’t see talked about so often is the other end of things: how do you acquire and author the HDR assets to render in your spiffy new physically-based engine?

# VR And Multi-GPU

April 3, 2014 · GPU, Practical

The second coming of VR will be a great thing for the GPU hardware industry. With the requirements for high resolution, good AA, high framerates and low latency, successful VR applications will need substantial GPU horsepower behind them.

The PS4 GPU has 1.84 Tflop/s available[1], so—making a very rough back-of-the-envelope analysis—for a game running at 1080p / 30 Hz, it has about 30 Kflop per pixel per frame available. If we take this as the benchmark of the compute per pixel needed for “next-gen”-quality visuals, we can apply it to the Oculus DK2. There we want 75 Hz, and we probably want a resolution of at least 2460×1230 (extrapolated from the recommendation of 1820×910 for the DK1[2]—this is the rendering resolution, before resampling to the warped output). That comes to 6.8 Tflop/s. A future VR headset with a 4K display would need 27 Tflop/s—and if you go up to 90 Hz, make it 32 Tflop/s!

The NVIDIA Titan Black offers 5.1 Tflop/s[3] and the AMD R9 290X offers 5.6 Tflop/s[4]. These are the biggest single GPUs on the market at present, but they appear to fall short of the compute power needed for “next-gen”-quality graphics on the DK2. This is a very rough analysis, but there’s clearly reason to be interested in multi-GPU rendering for VR.

# Deferred Texturing

March 25, 2014 · GPU, Practical

There’s an idea that’s been bouncing around in my head for a while about a kind of deferred renderer that (as far as I know) hasn’t been done before. I’m sure I’m not the first to have this idea, but recently it crystallized in my head a bit more and I wanted to write a bit about it.

For the last several years, graphics folks have been experimenting with a variety of deferred approaches to rendering—deferred shading, light-prepass, tiled deferred, tiled forward, and so on. These techniques improve both the cleanliness and performance of a renderer by doing some or all of the following:

• Elegantly separate lighting code from “material” code (that is, code that controls the properties of the BRDF at each point on a surface), avoiding shader combination explosion
• Only process the scene geometry once per frame (excluding shadow maps and such)
• Reduce unnecessary shading work for pixels that will end up occluded later
• Reduce memory bandwidth costs incurred in reading, modifying and writing screen-sized buffers

Different rendering methods offer different tradeoffs among these goals. For example, light-prepass renderers reduce the memory bandwidth relative to deferred shading, at the cost of requiring an extra geometry pass.

Deferred texturing is another possible point in the deferred constellation—which as far as I know has not been implemented, though it’s been discussed. The idea here, as the name suggests, is to defer sampling textures and doing material calculations until you know what’s on screen.

# Why Do Quaternions Double-Cover?

January 24, 2014 · Theory

Quaternions are pretty well-known to graphics programmers at this point, but even though we can use them for rotating our objects and cameras, they’re still pretty mysterious. Why are they 4D instead of 3D? Why do the formulas for constructing them have factors like $$\cos(\theta/2)$$ instead of just $$\cos\theta$$? And why do they have the “double-cover” property, where there are two different quaternions (negatives of each other) that represent the same 3D rotation?

These properties need not be inexplicable teachings from on high—and you don’t have to go get a degree in group theory to comprehend them, either. There are actually quite understandable reasons for quaternions to be so weird, and I’m going to have a go at explaining them.

# On Vector Math Libraries

December 28, 2013 · Practical


For graphics coders, vector and matrix math libraries are something we use nearly every day, and in just about every function we write. If you’ve been working in this field for long, you’ve probably used a dozen different vector libs in C++, not to mention the first-class vectors and matrices in HLSL and GLSL—and you’ve probably written your own vector lib, at least twice! It seems like every project and every coder has their own special flavor of this basic utility—nearly identical with, yet always slightly different from, all the other versions out there.

Unfortunately, despite the ubiquity of these libs and the fact that every seasoned graphics coder probably knows their vector lib better than their own spouse, most of the vector math libs I’ve seen are pretty deficient. Sometimes these deficiencies are obvious, sometimes rather subtle, but vectors and matrices are used so often and so deeply that it’s clearly worth investing time to get them right.

# How Is the NDF Really Defined?

July 31, 2013 · Theory

One of the key ingredients of a microfacet BRDF is its normal distribution function (NDF), which statistically describes the microscopic shape of the surface as a distribution of microfacet orientations. For an NDF to be physically sensible it should satisfy certain conditions, such as normalization. But the correct way to normalize an NDF isn’t what you might intuitively think, and I’d like to explain why.

# C++ Compile-Time Array Size

July 10, 2013 · Practical

A handy utility to add to your C++ codebase is a way to get the count of elements in an array whose size is known at compile time. This is useful for things like iterating over a static array, or ensuring that the size of an array matches another array or an enum. Most C++ coders have probably seen something like this before, but as it’s not entirely trivial to get right, I thought it was worth a quick blog post.
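The post’s own implementation isn’t reproduced here, but one common C++11 shape (the name `dimof` is just a placeholder) looks like this:

```cpp
#include <cstddef>

// Count of elements in a built-in array, evaluated at compile time.
// Unlike the classic sizeof(a)/sizeof(a[0]) macro, this refuses to
// compile when handed a pointer, because a pointer won't bind to the
// array reference parameter T (&)[N].
template <typename T, size_t N>
constexpr size_t dimof(const T (&)[N]) noexcept
{
    return N;
}

int table[] = { 2, 3, 5, 7, 11 };

// Being constexpr, the result is usable in static_assert and as an
// array bound — exactly the enum/array-matching use case above.
static_assert(dimof(table) == 5, "table should have five entries");
```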

# Visual Studio Debug Visualizer For Half Floats

February 6, 2013 · Practical

Here’s something handy I managed to put together the other night: a Visual Studio autoexp.dat debugger visualizer for half-precision floats. It’s written against the half type from the OpenEXR library, but if you have a custom half type it should be easy to adapt to that. It handles infinities, NaNs, and denormals.
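The visualizer itself lives in autoexp.dat syntax and isn’t shown here, but the bit-level decoding it has to perform — assuming the IEEE 754 binary16 layout (1 sign, 5 exponent, 10 mantissa bits) that OpenEXR’s `half` uses — can be sketched as a standalone function:

```cpp
#include <cstdint>
#include <cmath>

// Decode a 16-bit half-precision float to a 32-bit float, handling the
// three special exponent cases: denormals, infinities, and NaNs.
float halfToFloat(uint16_t h)
{
    int sign = (h >> 15) & 0x1;
    int exp  = (h >> 10) & 0x1f;
    int mant = h & 0x3ff;
    float s = sign ? -1.0f : 1.0f;

    if (exp == 0)       // zero or denormal: value = s * mant * 2^-24
        return s * ldexpf(float(mant), -24);
    if (exp == 31)      // infinity (mant == 0) or NaN (mant != 0)
        return mant ? NAN : s * INFINITY;

    // Normal: s * (1 + mant/1024) * 2^(exp-15), with the implicit
    // leading bit folded in as mant + 0x400.
    return s * ldexpf(float(mant + 0x400), exp - 25);
}
```

For example, the bit pattern 0x3C00 decodes to 1.0f and 0x7C00 to +infinity.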