Depth Precision Visualized

July 3, 2015 · Graphics, Math · 10 comments

Depth precision is a pain in the ass that every graphics programmer has to struggle with sooner or later. Many articles and papers have been written on the topic, and a variety of different depth buffer formats and setups are found across different games, engines, and devices.

Because of the way it interacts with perspective projection, GPU hardware depth mapping is a little recondite and studying the equations may not make things immediately obvious. To get an intuition for how it works, it’s helpful to draw some pictures.
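
For reference (using one common convention, not necessarily the one the post adopts), a D3D-style projection maps view-space depth $z \in [n, f]$ to the stored $[0, 1]$ depth value as a reciprocal function of $z$:

$$
d(z) \;=\; \frac{f}{f-n}\left(1 - \frac{n}{z}\right) \;=\; \frac{f\,(z-n)}{z\,(f-n)},
$$

which is why most of the representable depth values bunch up near the near plane.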

Read more…


Outdoor Computing

June 18, 2015 · Misc · 1 comment

As I write this, I’m sitting on my patio with my laptop. It’s a lovely California summer afternoon, sunny with a cool breeze; I’ve got a glass of iced tea at my side, and nature all around. What could be a better environment for getting some coding done or catching up on research papers? Working outdoors is soothing and relaxing, conducive to concentration and creativity. There’s just one problem: I can hardly see the words I’m typing!

Read more…


Data-Oriented Hash Table

January 12, 2015 · Coding · 10 comments

In recent years, there’s been a lot of discussion and interest in “data-oriented design”—a programming style that emphasizes thinking about how your data is laid out in memory, how you access it and how many cache misses it’s going to incur. With memory reads taking orders of magnitude longer for cache misses than hits, the number of misses is often the key metric to optimize. It’s not just about performance-sensitive code—data structures designed without sufficient attention to memory effects may be a big contributor to the general slowness and bloatiness of software.
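
As a rough illustration of what "data-oriented" can mean for a hash table (my own sketch, not the code benchmarked in the post), here's an open-addressing table that keeps every slot in one contiguous array, so a probe walks adjacent cache lines instead of chasing per-node pointers:

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Minimal open-addressing hash table with all slots stored contiguously.
// Assumes the table is never allowed to fill up completely.
struct FlatHashTable
{
    struct Slot { uint32_t key = 0; uint32_t value = 0; bool used = false; };
    std::vector<Slot> slots;

    explicit FlatHashTable(size_t capacity) : slots(capacity) {}

    // Simple multiplicative hash; any decent integer hash would do.
    size_t hash(uint32_t key) const { return (key * 2654435761u) % slots.size(); }

    void insert(uint32_t key, uint32_t value)
    {
        size_t i = hash(key);
        while (slots[i].used && slots[i].key != key)    // linear probing
            i = (i + 1) % slots.size();
        slots[i] = { key, value, true };
    }

    std::optional<uint32_t> find(uint32_t key) const
    {
        size_t i = hash(key);
        while (slots[i].used)                           // probe until an empty slot
        {
            if (slots[i].key == key)
                return slots[i].value;
            i = (i + 1) % slots.size();
        }
        return std::nullopt;
    }
};
```

The point isn't this exact layout, but that a lookup touches a handful of contiguous slots rather than a linked chain of separately allocated nodes—exactly the kind of memory-access difference the post is concerned with.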

Read more…


Antialiasing: To Splat Or Not

November 15, 2014 · Coding, Eye Candy, Graphics · 7 comments

Note: this post is adapted from an answer I wrote for the Computer Graphics StackExchange beta, which was shut down a few months ago. A dump of all the CGSE site data can be found on Area 51.

To perform antialiasing in synthetic images (whether real-time or offline), we distribute samples over the image plane, and ensure that each pixel gets contributions from many samples with different subpixel locations. This approximates the result of applying a low-pass kernel to the underlying infinite-resolution image—ideally resulting in a finite-resolution image without objectionable artifacts like jaggies, Moiré patterns, ringing, or excessive blurring.
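
As a minimal sketch of that idea (mine, not code from the post), here's box-filtered supersampling: each pixel averages several jittered subpixel evaluations of an assumed `shade()` function representing the underlying image:

```cpp
#include <cmath>
#include <cstdlib>

// Example "infinite-resolution" image: a fine checkerboard, a classic aliasing test.
float shade(float x, float y)
{
    int cx = (int)std::floor(x * 0.37f);
    int cy = (int)std::floor(y * 0.37f);
    return ((cx + cy) & 1) ? 1.0f : 0.0f;
}

// One antialiased pixel: average several jittered subpixel samples (a box filter).
float antialiasedPixel(int px, int py, int samplesPerPixel)
{
    float sum = 0.0f;
    for (int s = 0; s < samplesPerPixel; ++s)
    {
        float jx = std::rand() / (float)RAND_MAX;   // random offset within the pixel
        float jy = std::rand() / (float)RAND_MAX;
        sum += shade(px + jx, py + jy);
    }
    return sum / samplesPerPixel;
}
```

A box filter is only the simplest choice of low-pass kernel; the question of how (or whether) to weight and "splat" samples into neighboring pixels is what the post goes on to discuss.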

Read more…


The Buttered-Toast Model Of Radiometry

November 2, 2014 · Eye Candy, Graphics, Math · 1 comment

It’s difficult to develop intuition for radiometric units. Radiant power, radiant intensity, irradiance, radiance—on first encountering these terms and their associated mathematical definitions, it’s enough to make anyone’s head spin! Building technical fluency with these concepts requires sitting down and practicing the math directly, and nothing can substitute for that—but reasoning can be greatly accelerated by having some good mental images that capture the essence of things.
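
For reference, these are the standard textbook definitions in question (general background, not the post's buttered-toast analogy itself):

$$
\begin{aligned}
\Phi &: \text{radiant power (flux)}, && [\mathrm{W}] \\
I &= \frac{d\Phi}{d\omega}, && [\mathrm{W/sr}] \\
E &= \frac{d\Phi}{dA}, && [\mathrm{W/m^2}] \\
L &= \frac{d^2\Phi}{dA\,\cos\theta\,d\omega}, && [\mathrm{W/(m^2\,sr)}]
\end{aligned}
$$

Read more…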


Radiometry Versus Photometry

August 17, 2014 · Graphics, Math · No comments

Note: this post is adapted from an answer I wrote for the Computer Graphics StackExchange beta, which was shut down a couple of weeks ago. A dump of all the CGSE site data can be found on Area 51.

In computer graphics, we deal a lot with various radiometric units, such as flux, irradiance, and radiance, which quantify light in various ways. But there’s a whole other set of units for light, called photometric units, that also show up sometimes. It’s important to understand the relationship between radiometry and photometry, and when it’s appropriate to use one or the other.
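
The core relationship, for reference: each photometric quantity is its radiometric counterpart weighted by the CIE luminosity function $V(\lambda)$ and scaled by 683 lm/W. For example, luminous flux from spectral radiant flux:

$$
\Phi_v \;=\; 683\ \tfrac{\mathrm{lm}}{\mathrm{W}} \int \Phi_{e,\lambda}(\lambda)\, V(\lambda)\, d\lambda
$$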

Read more…


Electronic Music Box

July 11, 2014 · Coding, Eye Candy · No comments

When I was in college, I took a one-semester electronics course, and for a final project, we had a few weeks to build pretty much whatever we wanted. As a computer-science major (the only one in the class), obviously I jumped at the chance to do something with microprocessors and digital logic. I ended up building an electronic music box—a device that plays a preprogrammed song over headphones, using a PIC microcontroller.

It was an interesting journey into the (to me) strange and mysterious world of hardware and embedded software, and I got to do some fun things along the way, like etching and soldering my own circuit board. I came across the files again the other day and thought I’d share the project.

Read more…


Artist-Friendly HDR With Exposure Values

June 4, 2014 · Coding, Graphics · 13 comments

Unless you’ve been living under a rock, you know that HDR and physically-based shading are practically the defining features of “next-gen” rendering. A new set of industry standard practices has emerged: RGBA16f framebuffer, GGX BRDF, filmic tonemapping, and so on. The runtime rendering/shading end of these techniques has been discussed extensively, for example in John Hable’s talk Uncharted 2: HDR Lighting and the SIGGRAPH Physically-Based Shading courses. But what I don’t see talked about so often is the other end of things: how do you acquire and author the HDR assets to render in your spiffy new physically-based engine?
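
For orientation, the standard photographic relations behind exposure values (a common convention, not necessarily the exact one the post uses): at ISO 100, for f-number $N$ and shutter time $t$ seconds,

$$
EV_{100} = \log_2\!\frac{N^2}{t}, \qquad L \approx \frac{K}{100}\,2^{\,EV_{100}}\ \mathrm{cd/m^2},
$$

where $K \approx 12.5$ is a typical reflected-light meter calibration constant, giving roughly $L \approx 0.125 \cdot 2^{\,EV_{100}}$.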

Read more…


VR And Multi-GPU

April 3, 2014 · Coding, Graphics · 5 comments

The second coming of VR will be a great thing for the GPU hardware industry. With the requirements for high resolution, good AA, high framerates and low latency, successful VR applications will need substantial GPU horsepower behind them.

The PS4 GPU has 1.84 Tflop/s available[1], so—making a very rough back-of-the-envelope analysis—for a game running at 1080p / 30 Hz, it has about 30 Kflop per pixel per frame available. If we take this as the benchmark of the compute per pixel needed for “next-gen”-quality visuals, we can apply it to the Oculus DK2. There we want 75 Hz, and we probably want a resolution of at least 2460×1230 (extrapolated from the recommendation of 1820×910 for the DK1[2]—this is the rendering resolution, before resampling to the warped output). That comes to 6.8 Tflop/s. A future VR headset with a 4K display would need 27 Tflop/s—and if you go up to 90 Hz, make it 32 Tflop/s!
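
Spelling out that back-of-the-envelope math:

$$
\frac{1.84 \times 10^{12}\ \mathrm{flop/s}}{1920 \times 1080\ \mathrm{px} \times 30\ \mathrm{Hz}} \approx 3.0 \times 10^{4}\ \mathrm{flop\ per\ pixel\ per\ frame}
$$

$$
2460 \times 1230\ \mathrm{px} \times 75\ \mathrm{Hz} \times 3.0 \times 10^{4}\ \mathrm{flop} \approx 6.8\ \mathrm{Tflop/s}
$$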

The NVIDIA Titan Black offers 5.1 Tflop/s[3] and the AMD R9 290X offers 5.6 Tflop/s[4]. These are the biggest single GPUs on the market at present, but they appear to fall short of the compute power needed for “next-gen”-quality graphics on the DK2. This is very rough analysis, but there’s clearly reason to be interested in multi-GPU rendering for VR.

Read more…


Deferred Texturing

March 25, 2014 · Coding, Graphics · 6 comments

There’s an idea that’s been bouncing around in my head for a while about a kind of deferred renderer that (as far as I know) hasn’t been done before. I’m sure I’m not the first to have this idea, but recently it’s crystallized a bit more and I wanted to write about it.

For the last several years, graphics folks have been experimenting with a variety of deferred approaches to rendering—deferred shading, light-prepass, tiled deferred, tiled forward, and so on. These techniques improve both the cleanliness and performance of a renderer by doing some or all of the following:

  • Elegantly separate lighting code from “material” code (that is, code that controls the properties of the BRDF at each point on a surface), avoiding shader combination explosion
  • Only process the scene geometry once per frame (excluding shadow maps and such)
  • Reduce unnecessary shading work for pixels that will end up occluded later
  • Reduce memory bandwidth costs incurred in reading, modifying and writing screen-sized buffers

Different rendering methods offer different tradeoffs among these goals. For example, light-prepass renderers reduce the memory bandwidth relative to deferred shading, at the cost of requiring an extra geometry pass.

Deferred texturing is another possible point in the deferred constellation—which as far as I know has not been implemented, though it’s been discussed. The idea here, as the name suggests, is to defer sampling textures and doing material calculations until you know what’s on screen.
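
Here's a minimal CPU-side sketch of the idea (my own illustration, not a published implementation): the geometry pass writes only UVs and a material ID per pixel, and texture sampling plus material setup happen later, once per visible pixel, in a resolve pass:

```cpp
#include <cstdint>
#include <vector>

struct GBufferTexel          // written by the geometry pass
{
    float    u, v;           // interpolated texture coordinates
    uint32_t materialId;     // which material's textures to sample later
    // (a real implementation would also need UV derivatives/LOD and a normal)
};

struct Material
{
    // stand-in for a texture object; a real version would do a filtered fetch
    float sampleAlbedo(float u, float v) const { return u * v; }
};

// Resolve pass: only now do we touch textures, and only for surviving pixels.
void resolve(const std::vector<GBufferTexel>& gbuffer,
             const std::vector<Material>& materials,
             std::vector<float>& outColor)
{
    outColor.resize(gbuffer.size());
    for (size_t i = 0; i < gbuffer.size(); ++i)
    {
        const GBufferTexel& t = gbuffer[i];
        const Material& m = materials[t.materialId];
        float albedo = m.sampleAlbedo(t.u, t.v);   // the deferred texture sample
        outColor[i] = albedo;                      // lighting/BRDF evaluation would go here
    }
}
```

The appeal is that pixels which end up occluded never pay for texture fetches or material math; the cost is carrying UVs, derivatives, and material IDs through the G-buffer instead of already-resolved surface attributes.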

Read more…