Quadrilateral Interpolation, Part 1

May 26, 2012 · GPU, Practical · 24 comments

In computer graphics we build models out of triangles, and we interpolate texture coordinates (and other vertex attributes) across surfaces using a method appropriate for triangles: linear interpolation, which allows each triangle in 3D model space to be mapped to an arbitrary triangle in texture space. While linear interpolation works well most of the time, there are situations in which it doesn’t suit our needs—for example, when mapping a square texture to a quadrilateral: using linear interpolation on each of the quad’s two triangles produces an ugly seam. In this article series, I’m going to talk about interpolation methods that allow arbitrary convex quadrilaterals to be texture-mapped without a seam along the diagonal.

First of all, what’s the problem with quads and the usual linear UV interpolation? Let me illustrate, with the help of this brick from CgTextures:

Linear interpolation allows for arbitrary affine transforms to be applied to a texture image. This includes any combination of translation, scaling, rotation, and shearing:

As you can see, these transforms work perfectly well on a quad; you can’t see the seam between the two triangles. However, if I move one of the quad’s verts so that it’s no longer a parallelogram, you can see the seam:

This happened because the triangles are still congruent in UV space (each covers half the texture, as before), but they are no longer congruent in model space. The affine transforms for the two triangles are no longer equal; although the UV mapping is still continuous along the seam, its derivatives (the tangent and bitangent vectors) are discontinuous there, resulting in ugliness.
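To make the discontinuity concrete, here is a small sketch (not from the article; the quad coordinates, the vertex ordering, and the helper `affine_linear_part` are made up for illustration). It computes the linear part of each triangle's UV-to-model affine map and shows that the two maps differ for a non-parallelogram quad:

```python
# Illustrative sketch: compute the 2x2 linear part A of the affine map
# xy = A*uv + t for each of a quad's two triangles, and compare them.
def affine_linear_part(uv, xy):
    """Given 3 UV/model-space correspondences, return A = (a11, a12, a21, a22),
    solved from the two edge vectors of the triangle."""
    (u0, v0), (u1, v1), (u2, v2) = uv
    (x0, y0), (x1, y1), (x2, y2) = xy
    du1, dv1 = u1 - u0, v1 - v0
    du2, dv2 = u2 - u0, v2 - v0
    det = du1 * dv2 - dv1 * du2
    # A maps the UV edge vectors to the model-space edge vectors.
    a11 = ((x1 - x0) * dv2 - (x2 - x0) * dv1) / det
    a12 = ((x2 - x0) * du1 - (x1 - x0) * du2) / det
    a21 = ((y1 - y0) * dv2 - (y2 - y0) * dv1) / det
    a22 = ((y2 - y0) * du1 - (y1 - y0) * du2) / det
    return (a11, a12, a21, a22)

# A quad that is NOT a parallelogram: one vertex moved, as in the figure.
quad_xy = [(0, 0), (1, 0), (1.4, 1.2), (0, 1)]
quad_uv = [(0, 0), (1, 0), (1, 1), (0, 1)]

# Split along the diagonal 0-2, as the rasterizer would.
A_tri1 = affine_linear_part(quad_uv[:3], quad_xy[:3])
A_tri2 = affine_linear_part([quad_uv[0], quad_uv[2], quad_uv[3]],
                            [quad_xy[0], quad_xy[2], quad_xy[3]])
print(A_tri1)
print(A_tri2)  # differs from A_tri1: the tangent/bitangent jump at the seam
```

For a parallelogram, the two results would be identical and the seam would vanish, matching the figures above.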

Ordinarily, when building irregularly-shaped geometry like this, you wouldn’t assign UVs this way. For example, a level designer creating an irregularly-shaped wall piece would apply a single UV projection to the whole wall, giving all the triangles the same affine transform. This implies that not all of the texture is seen: it’s clipped and cropped so the shape of the mesh in UV space matches its shape in model space, preventing seams.

But what if we really do want to get a texture onto an arbitrary (convex) quadrilateral, without cropping out part of it?

Affine transforms allow arbitrary triangle-to-triangle mappings: you can create an affine mapping between any two triangles, no matter how different their shapes. This is just what happens when you apply a texture to a triangle: by setting up UVs, you implicitly create an affine map between model space and UV space. When rendering, the rasterizer evaluates this mapping to find the appropriate texture sample point for each pixel.

Geometrically, as long as the quad remains a parallelogram, the affine transforms for its two triangles are equal and you can’t see the seam. But when the quad isn’t a parallelogram, affine transforms and linear interpolation cannot smoothly map the whole texture to the quad.

To solve this problem, we must leave the world of linear interpolation and affine transforms behind! There are more-sophisticated interpolation methods that can help here, each with its own pros and cons. In this article I’m going to talk about one in particular, called projective interpolation. Later articles in this series will cover alternative methods.

Projective Interpolation

Just as linear interpolation is based on affine transforms, projective interpolation is based on the family of projective transforms. These transforms are very familiar in 3D graphics: they’re exactly the same ones used to map a 3D scene onto a 2D image, simulating perspective! But how can this help us with interpolation?

The intuition is that if you have a 3D scene consisting of a single quad, as you move the camera around and look at it from different positions, its projected shape on the 2D screen will be, in general, a different quad. In fact, it turns out you can map any convex quad to any other convex quad this way, by finding an appropriate camera setup.

Moreover, we know how to interpolate UVs in such a way that a 3D quad doesn’t show a seam when it’s projected to the 2D screen; such perspective-correct interpolation is done all the time. This suggests that we should be able to texture-map a quad without a seam by using the same math used for perspective-correct interpolation. And indeed this works:

The entire texture is now warped to the irregular shape of the quad, with no visible seam!

However, this image is a little odd: it doesn’t really look like a 2D quad anymore. It actually looks a lot like a wall in a 3D engine, with the camera turned to the side so that the wall recedes into the distance. That’s the nature of projective interpolation. Because it uses the same math that’s involved in 3D-to-2D perspective, this method gives results that tend to look like a 3D scene, even though the quad is completely 2D.

With that caveat in mind, here’s how you implement projective interpolation.

It’s well-known that to do perspective-correct interpolation for a triangle, you must calculate u/z, v/z, and 1/z at each vertex, interpolate those linearly in screen space, then calculate u = (u/z) / (1/z) and v = (v/z) / (1/z) at each pixel. GPU rasterizers do this automatically, behind the scenes, for every interpolated attribute. We use the same idea here: our vertex shader will output uq, vq, and q, the GPU will interpolate those quantities linearly in model space, then we’ll divide by ‘q’ at each pixel. Here, ‘q’ is a per-vertex value that plays the role of 1/z. However, this ‘q’ will be determined by the shape of the quadrilateral. It’s a made-up “depth” chosen to give the right projective transform to eliminate the seam.
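As a CPU-side illustration of that per-pixel divide (a hypothetical helper, not the article's shader code), here is the scheme along a single edge: interpolate (u·q, v·q, q) linearly, then divide by q at each sample point:

```python
# Sketch of the interpolation scheme described above, restricted to one edge.
def project_interp(uvq0, uvq1, t):
    """Linearly interpolate two float3-style uvq values, then divide by q."""
    uq = uvq0[0] + t * (uvq1[0] - uvq0[0])
    vq = uvq0[1] + t * (uvq1[1] - uvq0[1])
    q  = uvq0[2] + t * (uvq1[2] - uvq0[2])
    return (uq / q, vq / q)

# With q = 1 at both ends this reduces to ordinary linear interpolation:
print(project_interp((0, 0, 1), (1, 0, 1), 0.5))  # (0.5, 0.0)

# With unequal q's the spacing becomes nonuniform, just as in perspective:
# here the halfway point lands at u = 2/3, not u = 1/2.
print(project_interp((0, 0, 1), (2, 0, 2), 0.5))
```

Note that the endpoint uvq values already carry the u·q premultiplication, which is exactly what the vertex data stores in the shader version.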

The vertex shader and pixel shader for projective interpolation will look something like this:

float4x4 g_matLocalToClip;
Texture2D g_texColor;
SamplerState g_ss;

struct VertexData
{
    float3 pos : POSITION;
    float3 uvq : TEXCOORD0;
};

void Vs (
    VertexData vtx,
    out float3 uvq : TEXCOORD0,
    out float4 posClip : SV_Position)
{
    posClip = mul(float4(vtx.pos, 1.0), g_matLocalToClip);
    uvq = vtx.uvq;
}

void Ps (
    float3 uvq : TEXCOORD0,
    out half4 o_rgba : SV_Target)
{
    o_rgba = g_texColor.Sample(g_ss, uvq.xy / uvq.z);
}

Here, the important parts are: (a) the UVs are float3 instead of the usual float2, with ‘q’ in the third component; and (b) the pixel shader divides by ‘q’ before sampling the texture. The uvq values are precomputed and stored in the vertex data, so the vertex shader just passes them through.

The real trick here is how to calculate the right ‘q’ value for each vertex of the quad. This is fairly subtle—at least, it took me a while to work it out!—and I’ll spare you the derivation. To find the ‘q’s, first find the intersection point of the two diagonals of the quad (e.g., intersect one diagonal with the plane defined by the other diagonal and the quad’s normal vector), and calculate the distances from this point to each of the four vertices. I’ll call those distances d0…d3:

Then, each ‘q’ is computed using the ‘d’s for that vertex and the opposite one (indices taken mod 4), as follows: \[ uvq_i = \mathtt{float3}(u_i, v_i, 1) \cdot \frac{d_i + d_{i+2}}{d_{i+2}} \qquad (i = 0 \ldots 3) \] Store those values in your vertex data, and you’ll have projective interpolation!
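Here is one possible CPU-side sketch of that recipe, in 2D for simplicity (the plane construction above handles quads in 3D); the quad coordinates and the helper name `compute_uvq` are made up for illustration:

```python
import math

def compute_uvq(quad, uvs):
    """quad: 4 vertices in order; uvs: their UVs. Returns 4 uvq triples."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    # Intersect diagonal p0-p2 with diagonal p1-p3:
    # solve p0 + s*(p2 - p0) = p1 + t*(p3 - p1) for s.
    ax, ay = x2 - x0, y2 - y0
    bx, by = x1 - x3, y1 - y3
    ex, ey = x1 - x0, y1 - y0
    det = ax * by - ay * bx
    s = (ex * by - ey * bx) / det
    cx, cy = x0 + s * ax, y0 + s * ay  # the diagonals' intersection point
    # Distances d0..d3 from the intersection point to each vertex.
    d = [math.hypot(px - cx, py - cy) for (px, py) in quad]
    uvq = []
    for i in range(4):
        # q_i = (d_i + d_{i+2}) / d_{i+2}, indices mod 4.
        q = (d[i] + d[(i + 2) % 4]) / d[(i + 2) % 4]
        u, v = uvs[i]
        uvq.append((u * q, v * q, q))
    return uvq

quad = [(0, 0), (1, 0), (1.4, 1.2), (0, 1)]  # not a parallelogram
uvs  = [(0, 0), (1, 0), (1, 1), (0, 1)]
for row in compute_uvq(quad, uvs):
    print(row)
```

As a sanity check: for a parallelogram every d_i equals d_{i+2}, so every q is 2 (a uniform scale of the uvq vectors), and the scheme reduces to ordinary linear interpolation.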

Projective interpolation does the job we set out to do—it maps a texture smoothly onto an arbitrary convex quad. However, there are some potentially-problematic oddities with this method. As we saw above, it can generate results that look 3D even when they’re not supposed to. This is related to how projective interpolation alters the spacing of points along a line nonuniformly, as can be seen by applying the interpolation to a grid:

The vertical grid lines, which are evenly spaced in the texture, are no longer evenly spaced after interpolation; they’re closer together at one end of the quad and farther apart at the other. Again, this is a consequence of “perspective” scaling things down when they’re “farther” from the camera. Unfortunately, this nonuniform spacing is completely dependent on the shape of the quad, and won’t generally match when two quads share an edge:
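You can see this numerically with a quick sketch (the q values here are made up): inverting the interpolation along one edge shows where evenly spaced texture lines u = 0, 1/4, …, 1 land in the edge parameter t:

```python
# Along an edge whose endpoints carry (u=0, q=qa) and (u=1, q=qb), the
# interpolated u is u(t) = t*qb / ((1-t)*qa + t*qb). Invert it to find
# where each evenly spaced texture line lands.
def screen_t_for_u(u, qa, qb):
    """Edge parameter t at which the interpolation reaches texture coord u."""
    return u * qa / ((1 - u) * qb + u * qa)

ts = [screen_t_for_u(k / 4, 1.0, 2.0) for k in range(5)]
gaps = [b - a for a, b in zip(ts, ts[1:])]
print([round(t, 3) for t in ts])    # the lines bunch up at the q = 1 end
print([round(g, 3) for g in gaps])  # and the gaps grow toward the q = 2 end
```

Since the q's depend only on each quad's own shape, two quads sharing an edge will generally produce different spacings along it, which is the seam problem described next.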

This is a lot like the original problem we were trying to solve: two adjoining triangles with different shapes would have different affine transforms, producing a seam. Here, two adjoining quads with different shapes have different projective transforms, producing a seam. If you’re trying to use this in a situation where you have multiple quads that need to join smoothly, this problem is pretty much a deal-breaker for projective interpolation.

In future installments of this series, I’ll talk about alternatives to projective interpolation that can also smoothly map a texture onto a quad, but with different features and caveats.

24 comments on “Quadrilateral Interpolation, Part 1”

Peter Sikachev wrote:
May 29, 2012

Thanks for the nice idea, Nathan!
I’m trying to use quadrilateral interpolation for trail rendering but, obviously, this approach would fail, as one will clearly notice C0 discontinuities along the quads’ edges instead of C1 discontinuities along the quads’ edges and diagonals. Which parameterization would solve this issue?

Nathan Reed wrote:
May 30, 2012

Peter, for trail rendering I think bilinear interpolation may work better. I’m planning to cover that in the next article in this series.

Peter Sikachev wrote:
May 31, 2012

Unfortunately, it is not enough. If a character swings a saber and you have 20 cm tessellated polygons, you’ll see zig-zags very clearly.
Thus, I am super-interested in the continuation.

Nathan Reed wrote:
June 1, 2012

Hmm, perhaps what you really want is some sort of spline, like a quadratic Bezier that would let you set matching tangents at each edge? It’s an interesting problem! IIRC, in Infamous 2 our trail rendering just tessellated very finely – like 10 polygons per frame or something ridiculous like that. :) It’s easy, but it would be nice to figure out how to do that in the shader instead of by adding geometry!

Peter Sikachev wrote:
June 2, 2012

I don’t think that a Bezier spline would solve the problem, as at points where it does not match the edge of the polygon you won’t map the value 1 to the edge, since it is mapped to the spline (if I understood what you meant).
Of course, overtessellation solved the problem, but obviously we want to keep the polycount fixed :)

araon wrote:
December 2, 2012

Can’t one just use the keyword “noperspective” in the shader?

Nathan Reed wrote:
December 2, 2012

araon, no, that does somewhat the opposite of what I’m trying to do. It causes attributes on 3D geometry to be interpolated as if it were 2D, smashed flat to the screen. Here, I’m trying to interpolate UVs on a 2D quad and in this article I used a method that makes it appear 3D.

araon wrote:
December 2, 2012

I see, I had overlooked that your quad is pure 2D. Now it makes sense. Sorry for the stupid post. a.

Fei Yang wrote:
April 16, 2013

I’ve just noticed that your quadrilateral interpolation can be implemented without a shader! Just use the “glTexCoord4f” function and pass the coordinates as glTexCoord4f(u, v, 0, q), which works even on traditional graphics hardware!

Moritz wrote:
May 2, 2013

Fei Yang, nice that you tried it with OpenGL. For me this was obvious, since the shader does exactly what homogeneous coordinates handle—that’s why there is a glTexCoord4f in OpenGL :) I had already wondered whether there could be a Z in glTexCoord4f(u, v, Z, q) as a correction for the ‘perspective distortion’ within the 2D quads… but I did not try it without shaders.

recond wrote:
May 8, 2013

Nathan, the article is great. It helps. I am really interested in the derivation of your formula; would you tell me how you arrived at the method?

recond wrote:
May 9, 2013

Hi Nathan, I have implemented your method in OpenGL, but the result is incorrect. Is the formula applicable to OpenGL?

Nathan Reed wrote:
May 9, 2013

Hi recond, I’ll try to write up something on the derivation of the formula but it may take me a few days as I’m quite busy at the moment. The basic idea is to reduce it to a one-dimensional problem. You can imagine pivoting the quad about its diagonal in 3D homogeneous space; that lets you set the q-values for two opposite corners while leaving the other two corners fixed. Do this for both diagonals, and the only fixed point is the point where they meet.

As for OpenGL, as far as I know it should be perfectly applicable. As Fei Yang pointed out, OpenGL even supports this in the fixed-function mode, using glTexCoord4f.

quas wrote:
October 7, 2013

Nice idea for how to calculate ‘q’. I’m looking for a non-perspective solution; did you try to figure out the formula for ‘q’ in that case? Maybe I have to use an ‘r’ and use it in the shader like this: o_rgba = g_texColor.Sample(g_ss, float2(uvrq.x / uvrq.r, uvrq.y / uvrq.q)); but I’m still failing. Any help please?

Nathan Reed wrote:
October 7, 2013

Hi quas, I’m not sure what you mean by “non-perspective solution”; can you clarify? But using different divisors for U and V is an interesting idea; I’m not sure what that would look like.

quas wrote:
October 7, 2013

By non-perspective I mean something like this: http://i.stack.imgur.com/V2KCQ.jpg — but that trapezoid I can solve; the problem starts when I’d like to texture a general (convex) quadrilateral where the texture is simply uniformly distributed along each edge. Hope I described my problem better (English is not my native language).

Nathan Reed wrote:
October 7, 2013

Ahh, I see. Yes, that’s what bilinear interpolation does (different thing from bilinear texture filtering). I’m not sure how to implement bilinear UV interpolation in a shader, but it would be interesting to figure out. Paul Heckbert’s master’s thesis has some more about this, although not in a shader-ready form.

quas wrote:
October 7, 2013

Yes, actually, writing my own bilinear UV interpolator is a good idea that I hadn’t come up with; I’ll try it. Thanks for the direction and the PDF link.

fakenerd wrote:
July 21, 2014

Why is q calculated like that?

Bruno Azzinnari wrote:
August 3, 2014


This is just what I’ve been looking for, but I’m having a hard time trying to figure out ui and vi given the world coordinates for my point (px, py) and the 4 corners. Any leads for that would be much appreciated!

Nathan Reed wrote:
August 4, 2014

Bruno, they’re just the usual UVs that you always use for texture mapping. If you want to map the whole texture to the quad, they’d be (0, 0), (1, 0), (1, 1) and (0, 1). You can of course map some other region as well.

fakenerd, q is calculated like that because that’s what’s necessary to produce the desired effect. :) The derivation is just a bunch of nasty algebra, but it can be reduced to a 1D problem along each diagonal, which is why the solution takes the form it does.

Bruno Azzinnari wrote:
August 5, 2014


I was looking to do the interpolation myself for some reason, but that’s something the shader gives already :) I got it working now, thanks a bunch!

Jurgis Armanavichius wrote:
November 8, 2014

Hello, Nathan!

Thank you very, very much for the perfect idea! This calculation works exactly as I need. Like quas, I need to interpolate using a constant vertical step. I did this very simply: remove the u coord from the calculation :-)

2 quas,

The solution for a constant vertical step is very simple: do not pre-scale the u texture coord! Just use the following calculations in the main application:

vq = vec2(vi,1) * ((di+di2) / di2);
uvq = vec3(u, vq.x, vq.y);

And pass the u coord from vertex shader to fragment shader unchanged (varying vec3 TexCoord;). In the fragment shader use the following code:

vec2 uv;
uv.x = TexCoord.x;
uv.y = TexCoord.y/TexCoord.z;
gl_FragColor = texture2D(BaseTexture, uv);

Here are my results:

Input texture:
Trapezoid texturing without corrections:
Perspective correction by Nathan Reed:
Perspective correction with constant vertical step:

Rustam Ahtiamov wrote:
November 25, 2014

So … where is the part 2? Fans are still waiting! :)
