One difficulty with GPU hardware tessellation is the complexity of programming it. Tessellation offers a number of modes and options; it’s hard to remember which option does what and how all the pieces fit together. I use tessellation just infrequently enough that I’ve completely forgotten it all by the next time I need it, and I’m getting tired of looking it up or figuring it out by trial and error every time. So here’s a quick-reference post for how it all works!
Welcome back, readers! You may have noticed that the site looks a bit different now. Over the last few weeks I’ve redesigned the theme, making it more modern and mobile-friendly, and also converted it from WordPress to a static site generator, which should make it faster in general as well as hopefully more resilient to the occasional slashdotting. 😅
I ended up building my own little static site generator in Python, and I’ve put it up on GitHub in case it’s helpful as a starting point for anyone else’s efforts.
In June 2016, game developer Sophie Houlden held a month-long game jam inspired by Star Trek. Although my initial plan was to actually make a game, after one thing and another I ended up radically de-scoping, and decided instead to re-arrange the Next Generation theme music as an exercise in orchestral writing. Working from a piano score and my nostalgia for the original, I turned out this take on the classic.
The show’s original version (one of them—there are a few slightly different variants used in different seasons) can be found on YouTube here.
Like many people, my first foray into game development was modding. In the early 2000s I spent a lot of time making maps for Doom, and later Half-Life. But I hadn’t touched it for about ten years, until this winter, when Eevee posted a series of blog articles on Doom mapping, and I was inspired to take up the editor again. This map was the result.
I spent about a month on this (my initial plan turned out to take a lot longer to execute than I thought; big surprise), and I’m pretty happy with the result. It was neat to come back to Doom after all this time and see how my perspective had changed. The tools available today are a lot better than what I remember, and I’m way smarter about level design than I was ten years ago. Still, by the end of making this, I was starting to get frustrated with Doom’s limitations, and I’m definitely all mapped out for a while.
I’ve packaged up the map with a copy of the ZDoom engine and the Freedoom asset pack (since the original Doom textures, sprites, sounds, etc. are all under copyright and can’t be redistributed). If you have a copy of Doom 2, drop your doom2.wad file in the directory and use that; otherwise, you can play it with the Freedoom assets.
GameWorks VR is a suite of technologies I helped to build at NVIDIA in 2015–2016. It’s an SDK for VR game, engine, and headset developers, aimed at cutting down graphics latency and accelerating stereo rendering on NVIDIA GPUs. In this talk, I explain the features of this SDK, including VR SLI, multi-resolution rendering, context priorities, and direct mode.
Depth precision is a pain in the ass that every graphics programmer has to struggle with sooner or later. Many articles and papers have been written on the topic, and a variety of depth buffer formats and setups are found across different games, engines, and devices.
Because of the way it interacts with perspective projection, GPU hardware depth mapping is a little recondite, and studying the equations may not make things immediately obvious. To get an intuition for how it works, it’s helpful to draw some pictures.
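For concreteness, here is the mapping under the common convention where a standard perspective projection takes view-space depth $z$, between near and far planes $n$ and $f$, to a depth value in $[0,1]$ (other APIs and reversed-Z setups rearrange this, so treat it as one representative case):

$$
d(z) \;=\; \frac{f}{f-n}\left(1 - \frac{n}{z}\right) \;=\; \frac{f\,(z-n)}{z\,(f-n)},
\qquad d(n) = 0, \quad d(f) = 1.
$$

The key feature to picture is that $d$ varies with $1/z$: the curve rises steeply near the near plane and flattens out toward the far plane, so most of the representable depth values are spent on geometry close to the camera.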
As I write this, I’m sitting on my patio with my laptop. It’s a lovely California summer afternoon, sunny with a cool breeze; I’ve got a glass of iced tea at my side, and nature all around. What could be a better environment for getting some coding done or catching up on research papers? Working outdoors is soothing and relaxing, conducive to concentration and creativity. There’s just one problem: I can hardly see the words I’m typing!
In recent years, there’s been a lot of discussion and interest in “data-oriented design”: a programming style that emphasizes thinking about how your data is laid out in memory, how you access it, and how many cache misses it’s going to incur. With memory reads taking orders of magnitude longer for cache misses than for hits, the number of misses is often the key metric to optimize. It’s not just about performance-sensitive code, either: data structures designed without sufficient attention to memory effects may be a big contributor to the general slowness and bloat of software.
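To make that concrete, here’s a minimal sketch (a hypothetical particle system, not an example from any particular codebase) contrasting the two classic layouts. When an update loop touches only a couple of fields, the struct-of-arrays version wastes far less cache bandwidth:

```cpp
#include <cstddef>
#include <vector>

// Array-of-structs: all of one particle's fields sit together in memory,
// so a position/velocity-only pass drags mass and age through the cache too.
struct ParticleAoS {
    float px, py, pz;  // position
    float vx, vy, vz;  // velocity
    float mass, age;   // other state
};

// Struct-of-arrays: each field is its own contiguous array, so a pass
// over positions and velocities touches only the bytes it actually uses.
struct ParticlesSoA {
    std::vector<float> px, py, pz;
    std::vector<float> vx, vy, vz;
    std::vector<float> mass, age;
};

void advect(std::vector<ParticleAoS>& ps, float dt) {
    for (auto& p : ps) {        // each particle spans 32 bytes; mass and
        p.px += p.vx * dt;      // age come along for the ride unused
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
    }
}

void advect(ParticlesSoA& ps, float dt) {
    for (std::size_t i = 0; i < ps.px.size(); ++i) {
        ps.px[i] += ps.vx[i] * dt;  // every cache line loaded here is
        ps.py[i] += ps.vy[i] * dt;  // 100% position/velocity data
        ps.pz[i] += ps.vz[i] * dt;
    }
}
```

The flip side: if your access pattern uses all the fields of one element at once, AoS can be the friendlier layout. The point is to design around the actual access pattern rather than to apply either layout dogmatically.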
Note: this post is adapted from an answer I wrote for the Computer Graphics StackExchange beta, which was shut down a few months ago. A dump of all the CGSE site data can be found on Area 51.
To perform antialiasing in synthetic images (whether real-time or offline), we distribute samples over the image plane, and ensure that each pixel gets contributions from many samples with different subpixel locations. This approximates the result of applying a low-pass kernel to the underlying infinite-resolution image—ideally resulting in a finite-resolution image without objectionable artifacts like jaggies, Moiré patterns, ringing, or excessive blurring.
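As a minimal sketch of the idea (the checkerboard scene, sample count, and RNG choice are illustrative placeholders, not anything from the original post), here is jittered supersampling with a one-pixel box filter, the simplest member of this family:

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Stand-in for the "infinite-resolution image": a fine checkerboard,
// which aliases badly if sampled only at pixel centers.
float shade(float x, float y) {
    int cx = static_cast<int>(std::floor(x * 0.9f));
    int cy = static_cast<int>(std::floor(y * 0.9f));
    return ((cx + cy) & 1) ? 1.0f : 0.0f;
}

// Box-filter antialiasing: average several jittered samples per pixel.
// Each sample gets a random subpixel offset in [0, 1)^2.
std::vector<float> renderAA(int width, int height, int samplesPerPixel) {
    std::vector<float> image(static_cast<std::size_t>(width) * height);
    std::mt19937 rng(12345);
    std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f;
            for (int s = 0; s < samplesPerPixel; ++s)
                sum += shade(x + jitter(rng), y + jitter(rng));
            image[static_cast<std::size_t>(y) * width + x] =
                sum / static_cast<float>(samplesPerPixel);
        }
    }
    return image;
}
```

The box filter is the bluntest choice of low-pass kernel; weighting samples by a tent or Gaussian kernel, or sharing samples across neighboring pixels, trades a little extra blur against ringing and residual aliasing.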
It’s difficult to develop intuition for radiometric units. Radiant power, radiant intensity, irradiance, radiance: on first encountering these terms and their mathematical definitions, anyone’s head would spin! Building technical fluency with these concepts requires sitting down and practicing the math directly, and nothing can substitute for that; but the learning can be greatly accelerated by some good mental images that capture the essence of things.
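For reference while reading, these are the standard definitions the mental images need to line up with (standard radiometry, stated here for convenience):

$$
\begin{aligned}
\Phi &= \text{radiant power (flux)} & &[\mathrm{W}] \\
I &= \frac{d\Phi}{d\omega} & &[\mathrm{W\,sr^{-1}}] \\
E &= \frac{d\Phi}{dA} & &[\mathrm{W\,m^{-2}}] \\
L &= \frac{d^2\Phi}{dA\,\cos\theta\,d\omega} & &[\mathrm{W\,m^{-2}\,sr^{-1}}]
\end{aligned}
$$

In words: intensity is flux per unit solid angle, irradiance is flux per unit area, and radiance is flux per unit projected area per unit solid angle. Radiance is also the quantity that stays constant along a ray through empty space, which is why it’s the one renderers carry around.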