Stanley is a project I developed as part of my CS155 Graphics course at Pomona College. It performs ray tracing and Monte Carlo path tracing. Stanley consists of about 6,400 lines of C++ code, compiles and runs on either Windows or Linux, and uses Lua as its scene description language. While Stanley is quite incomplete from a feature standpoint, it contains most of the fundamental components of any physically based rendering system.
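As an illustration of one of those fundamental components, here is a sketch of the classic ray-sphere intersection test that sits at the heart of any ray tracer (illustrative C++, not Stanley's actual code):

```cpp
#include <cmath>

struct Vec { double x, y, z; };

// Ray-sphere intersection, the bread-and-butter primitive test of a ray
// tracer.  Returns the nearest positive hit distance t along the ray, or a
// negative value on a miss, by solving the quadratic |orig + t*dir - center|^2 = r^2.
double hitSphere(Vec orig, Vec dir, Vec center, double radius)
{
    Vec oc = { orig.x - center.x, orig.y - center.y, orig.z - center.z };
    double a = dir.x * dir.x + dir.y * dir.y + dir.z * dir.z;
    double b = 2.0 * (oc.x * dir.x + oc.y * dir.y + oc.z * dir.z);
    double c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;                 // ray misses the sphere
    return (-b - std::sqrt(disc)) / (2.0 * a);   // nearest root
}
```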
While I was developing Stanley, the desire to simplify the interface between C++ and Lua led me to also develop Luabridge, a lightweight C++/Lua binding library based on C++ template metaprogramming. I spun this library off into a separate open source project on Sourceforge, in the hopes that some other people would find it useful.
Below are some images rendered with Stanley. Click on any of the images to see a larger version. (Note: the path-traced images appear noisy because computation time was limited, and because Stanley employs relatively basic path tracing methods that produce more variance in the radiance samples than a more sophisticated system would.)
- Stanley precompiled Win32 binary and assorted scene scripts
- Stanley source code
- Geometry library (used by Stanley)
- Luabridge download page (used by Stanley)
This is another project from my college graphics course. It is an interactive OpenGL app demonstrating several rendering and game-related techniques. Some of the highlights include:
- Stencil projective shadows.
- Stencil reflections on the floor.
- Basic driving physics for the robot. (It is a lot like the code you would write to drive a car, jeep, tank, or other vehicle.)
- A fully articulated robot arm (demonstrates hierarchical modeling).
- Portals. You can fly the camera or drive the robot through them. Rendering of the robot is a little buggy while it is halfway through a portal.
- A Catmull-Rom roller coaster on which the camera can ride.
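The coaster's track is evaluated with the standard Catmull-Rom blending function; here is a minimal sketch of the textbook formula (per coordinate; not necessarily the demo's exact code):

```cpp
#include <cmath>

// Evaluate one Catmull-Rom segment at parameter t in [0, 1] given four
// control points p0..p3.  The curve passes exactly through p1 (t = 0) and
// p2 (t = 1), which is what makes the spline convenient for a track ride.
double catmullRom(double p0, double p1, double p2, double p3, double t)
{
    return 0.5 * ((2.0 * p1) +
                  (-p0 + p2) * t +
                  (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t +
                  (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t);
}
```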
Here are some images from the demo:
These OpenGL demos are part of an ongoing process to develop my programming skills and knowledge of 3D graphics. Each demo exhibits a single graphics technique. In my free time (hah!) I'm working on putting together some of these techniques to form something a little more interesting and impressive.
Most of the demos use my personal texture and model file formats, RTX and RMD. The source code includes my library for reading and writing these files, called glextlib.
These demos were developed on my old Radeon 9600. Most of them will probably run on newer cards, but I make no guarantees.
In this program I've abandoned GLSL shaders to try out nVidia Cg. I'm very happy with Cg: the language is pleasant, the compiler works well, and the application-side API is easy to work with. In this demo I've written several Cg shaders that perform various postprocessing effects on the rendered image. You can switch between the filters by pressing F1, F2, and so on. There are Gaussian filters of various sizes, a bloom filter, and a time-varying sinusoidal distortion filter just for fun.
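As a sketch of how the Gaussian filters' weights can be precomputed on the CPU before being handed to a shader as uniforms (an assumption about the implementation, not the demo's actual code):

```cpp
#include <cmath>
#include <vector>

// Build a normalized 1-D Gaussian kernel of the given radius.  A separable
// 2-D blur is then just two 1-D passes, horizontal followed by vertical.
std::vector<double> gaussianKernel(int radius, double sigma)
{
    std::vector<double> w(2 * radius + 1);
    double sum = 0.0;
    for (int i = -radius; i <= radius; ++i) {
        w[i + radius] = std::exp(-(i * i) / (2.0 * sigma * sigma));
        sum += w[i + radius];
    }
    for (double& x : w) x /= sum;  // normalize so overall brightness is preserved
    return w;
}
```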
This simple program demonstrates using OpenGL to fade out the Windows desktop before starting the main part of a game. It was originally posted as a COTD on Devmaster. The effect provides a nice way to hide the messy details of graphics initialization, like changing video modes and creating the main window. It runs on all video cards, as far as I know, and is reasonably simple to integrate into an existing program. It probably won't work in Windows Vista, though I haven't tried.
This demo expands on shadowmap, but adds a couple of new twists. First, there are two independent lights, each with its own shadowmap. Second, the lights are omnidirectional. Doing omnidirectional shadow mapping is a bit of a challenge. There is no such thing as a depth cubemap in OpenGL, because when reading from a depth texture, the third texture coordinate is used to specify the depth value with which to compare the texture value. It is possible to use floating-point pbuffers or other tricks to encode depth information into a cubemap, but this requires that the fragment shader perform the comparison explicitly, and negates the benefits of built-in PCF on GeForce 6-class hardware.

Therefore, I chose to use an unrolled cubemap: a regular 2D texture 4 times wider and 2 times higher than the cubemap would have been, with the faces rendered into tiles on this texture. A true cubemap is then used to look up the texture coordinates into this unrolled cubemap, as well as to encode which world axis to transform to the light-space Z axis in the fragment shader. As a bonus, CLAMP_TO_EDGE mode on the lookup cubemap neatly clips the unrolled texture coordinates so that no accidental cross-face filtering takes place.
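A sketch of the face-and-tile selection in C++ rather than shader code (the particular 4x2 tile assignment here is my own illustrative choice; any fixed layout works as long as the lookup cubemap is baked to match):

```cpp
#include <cmath>

// Map a world-space direction to a cubemap face and its tile position in a
// 4x2 "unrolled" texture: faces +X,-X,+Y,-Y in the top row, +Z,-Z in the
// bottom row.  The face is chosen by the direction's largest-magnitude axis,
// the same rule ordinary cubemap lookup uses.
struct Tile { int face; int col; int row; };

Tile unrolledFace(double x, double y, double z)
{
    double ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    int face;
    if (ax >= ay && ax >= az)      face = (x >= 0.0) ? 0 : 1;  // +X / -X
    else if (ay >= ax && ay >= az) face = (y >= 0.0) ? 2 : 3;  // +Y / -Y
    else                           face = (z >= 0.0) ? 4 : 5;  // +Z / -Z
    return { face, face % 4, face / 4 };
}
```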
One of the problems with traditional lighting models is that all light values fall within a fixed range, like 0 to 1. This is a bit unrealistic, since in reality our eyes can perceive a much higher dynamic range. This program demonstrates a floating-point pbuffer used to render the scene in HDR (high dynamic range), using a physically based lighting model. The metal and glass spheres also use a physically based Fresnel term to modulate the reflection and refraction. The HDR image is displayed on the screen with a tone mapping function.
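The demo's exact curves aren't spelled out above; a common pairing, shown here as a sketch, is the Reinhard tone-mapping operator together with Schlick's approximation of the Fresnel term:

```cpp
#include <cmath>

// Reinhard tone mapping: compresses an unbounded HDR luminance into [0, 1).
double toneMap(double hdr) { return hdr / (1.0 + hdr); }

// Schlick's approximation of Fresnel reflectance, used to blend reflection
// and refraction: f0 is the reflectance at normal incidence, cosTheta the
// cosine of the angle between the view direction and the surface normal.
double fresnelSchlick(double f0, double cosTheta)
{
    return f0 + (1.0 - f0) * std::pow(1.0 - cosTheta, 5.0);
}
```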
Unfortunately, since mip-mapping is not available for floating-point textures, the reflected images in the spheres appear point-sampled. Also, this demo appears not to work right on nVidia cards; the cube maps for the spheres come out as garbage.
I've always been interested in fractals and have written several Mandelbrot fractal renderers. This one computes the fractal on the GPU, using floating-point pbuffers and three GLSL shaders. Left-click to zoom in and right-click to increase the number of iterations. The fractal is rendered one iteration at a time, so it appears to animate as it's being drawn. Unfortunately, due to the limitations of the 24-bit float format used internally by ATI's fragment pipeline, you can only zoom in 4 times before precision breaks down. You can get a bit farther on an nVidia card, but single-precision floating point still isn't good enough for any serious fractal viewing. I co-authored an article about GPU Mandelbrot rendering on Ozone3D.
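The fragment shader performs the classic escape-time iteration, one iteration per pass; on the CPU the same arithmetic looks like this (illustrative, not the shader source):

```cpp
// Escape-time iteration for the Mandelbrot set: z <- z^2 + c, counting
// iterations until |z|^2 exceeds 4.  Points that survive maxIter iterations
// are treated as inside the set; the count colors everything else.
int mandelbrot(double cr, double ci, int maxIter)
{
    double zr = 0.0, zi = 0.0;
    int i = 0;
    while (i < maxIter && zr * zr + zi * zi <= 4.0) {
        double t = zr * zr - zi * zi + cr;  // real part of z^2 + c
        zi = 2.0 * zr * zi + ci;            // imaginary part of z^2 + c
        zr = t;
        ++i;
    }
    return i;
}
```

The precision problem mentioned above is visible in the arithmetic: once the pixel spacing falls below what a 24-bit (or even 32-bit) float can represent in `cr` and `ci`, neighboring pixels iterate identical values and the image turns blocky.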
This demo demonstrates shadow mapping. Radeons don't support rendering directly to depth textures, so the 512×512 shadow map is rendered to a pbuffer and then copied to a texture. The light is omnidirectional but only one shadow map is used; this can be done since the table is the only shadow caster in the scene. Light space projection, clipping, and 4-sample PCF (percentage-closer filtering) are done by the fragment shader. In a real application of shadow mapping, one would have to use a spotlight, or one could use a cubemap for omnidirectional lights (as in the Dual Shadowmap demo).
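The idea behind the 4-sample PCF step can be sketched in plain C++ (the real work happens in the fragment shader, so this is illustrative only):

```cpp
// Percentage-closer filtering over a 2x2 neighborhood of the shadow map:
// each stored depth is compared against the fragment's light-space depth,
// giving a binary lit/shadowed result, and the four results are averaged
// into a soft shadow factor in [0, 1].
double pcf4(const double depth[2][2], double fragDepth)
{
    double lit = 0.0;
    for (int y = 0; y < 2; ++y)
        for (int x = 0; x < 2; ++x)
            lit += (fragDepth <= depth[y][x]) ? 1.0 : 0.0;
    return lit / 4.0;
}
```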
This demo's name comes from the combination of "reflection" and "refraction". It builds upon the shaders demo, using render-to-texture to render a dynamic environment cubemap, which is used by the reflection and refraction shaders applied to the ball. The ball looks like polished metal or glass, depending on which shader you use (press Space to switch between them).
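The refraction shader's core is the standard refraction-direction computation, equivalent to GLSL's built-in `refract()`; a C++ sketch for reference (not the demo's shader source):

```cpp
#include <cmath>

struct V3 { double x, y, z; };

// Refraction direction per Snell's law, matching the GLSL refract() built-in:
// I is the unit incident direction, N the unit surface normal, eta the ratio
// of indices of refraction.  Returns the zero vector on total internal
// reflection, as the built-in does.
V3 refractDir(V3 I, V3 N, double eta)
{
    double nDotI = N.x * I.x + N.y * I.y + N.z * I.z;
    double k = 1.0 - eta * eta * (1.0 - nDotI * nDotI);
    if (k < 0.0) return { 0.0, 0.0, 0.0 };        // total internal reflection
    double s = eta * nDotI + std::sqrt(k);
    return { eta * I.x - s * N.x, eta * I.y - s * N.y, eta * I.z - s * N.z };
}
```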
The first of several OpenGL lighting and shading demos. This one demonstrates per-pixel diffuse and specular lighting with bump mapping, multiple lights, attenuation, and simple parallax mapping. Vertex and fragment shaders were written in GLSL.
This program was developed in 2003–2004 for my high school senior project and IB extended essay. It's a study of frustum culling and occlusion culling, and their efficacy in speeding up rendering of a landscape (the same one used in fovdemo, above). The landscape is partitioned into a quadtree for recursive culling, hence the name. For those interested, more details can be found in the essay.
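The quadtree culling rests on classifying each node's bounding box against the frustum planes: a node fully outside any plane skips its whole subtree, and a node fully inside all planes draws its subtree without further tests. A sketch of the box-versus-plane classification (illustrative, not the project's actual code):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Classify an axis-aligned box against the plane n.x + n.y + n.z + d = 0
// (dotted with the point).  Returns +1 if the box is fully on the positive
// side, -1 if fully on the negative side, 0 if it straddles the plane.
// The "radius" is the box's projected extent along the plane normal.
int classifyBox(Vec3 center, Vec3 halfExtent, Vec3 n, double d)
{
    double dist = n.x * center.x + n.y * center.y + n.z * center.z + d;
    double radius = std::fabs(n.x) * halfExtent.x +
                    std::fabs(n.y) * halfExtent.y +
                    std::fabs(n.z) * halfExtent.z;
    if (dist > radius)  return +1;
    if (dist < -radius) return -1;
    return 0;
}
```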
A quickly thrown-together program, developed in response to a posting at Flipcode back in the days when it was still active. Lets you fly around a landscape while changing the camera's field of view, as well as the aspect ratio of the viewport (from widescreen to full-frame). If you zoom the camera out while flying forward, or zoom in while flying backward, you can see a nice Hitchcock effect.
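The Hitchcock effect follows from the relation between field of view and visible width: at distance d the frustum spans a width of 2·d·tan(fov/2), so keeping a subject the same apparent size while changing the field of view means moving the camera accordingly. A small sketch:

```cpp
#include <cmath>

// Dolly-zoom relation: to keep a subject of the given width filling the same
// portion of the screen while the field of view changes, the camera must sit
// at distance d = width / (2 * tan(fov / 2)).  Widening the FOV pulls the
// camera in; narrowing it pushes the camera out, which warps the background.
double dollyDistance(double subjectWidth, double fovRadians)
{
    return subjectWidth / (2.0 * std::tan(fovRadians / 2.0));
}
```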
I've always been interested in general relativity and the idea that gravity is the result of the curvature of space-time. This program lets you place gravitational sources of varying sizes on a 2D surface and view the "curvature" that results. (This is not to be taken as an actual visualization of general relativity! For those interested, what you're actually seeing is the scalar potential of the Newtonian gravitational field.)
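What the surface plots is just the superposed 1/r potentials of the point sources; a sketch of the evaluation at one grid point (the gravitational constant and the softening term that avoids the singularity at r = 0 are arbitrary choices here):

```cpp
#include <cmath>
#include <vector>

struct Mass { double x, y, m; };

// Scalar potential of the Newtonian gravitational field at (x, y) due to a
// set of point masses: phi = -G * sum(m_i / r_i).  The small eps softens the
// 1/r singularity so the surface stays finite at a source's position.
double potential(const std::vector<Mass>& masses, double x, double y,
                 double G = 1.0, double eps = 1e-6)
{
    double phi = 0.0;
    for (const Mass& s : masses) {
        double dx = x - s.x, dy = y - s.y;
        phi -= G * s.m / std::sqrt(dx * dx + dy * dy + eps * eps);
    }
    return phi;
}
```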