Introduction
In your average voxel game (that includes YOUR minecraft clone too), you usually filter textures with nearest filtering. This is good for a nice, crisp pixelated look but it has one big problem: oblique angles.
When you look at a flat surface (like the ground) at a steep angle, nearest filtering makes the texture look like complete shit.
To reduce that aliasing, mipmapping was invented - the texture gets prefiltered into progressively smaller copies, and the one matching the on-screen texel density is sampled, so the texels get averaged in a coherent way.
However, it also blurs everything to death (what more can you expect from a small number of pixels?) so anisotropic filtering was invented to restore the image quality by sampling a non-square area of the image.
Great, and when you implement that in your voxel engine, the textures start bleeding harder than your mum three weeks ago. Guess why?
Yeah, that’s because hardware anisotropic filtering (AF) doesn’t work with texture atlases.
Sadly, hardware AF has no idea your atlas is subdivided. It only knows about the [0,1] UV range of the whole texture and whatever wrap mode you set on it - it'll happily extend its sample footprint past a tile's edges, because as far as it's concerned, there's nothing there to respect. This is completely fucked for texture atlases.
When you have different block textures packed into one atlas (let’s say a 256×256 atlas holding a 16×16 grid of tiles, each tile being 16×16 pixels), hardware AF will happily bleed the border pixels over into the adjacent textures and make everything look like shit. You get these edge artifacts where your stone texture starts sampling from dirt, grass or whatever the fuck is next to it in the atlas.
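To put numbers on it, here’s a tiny Python sketch (the names like tile_bounds are mine, not from the engine) computing the UV rectangle each tile owns in exactly that kind of atlas. One texel is only 1/256 ≈ 0.0039 UV units wide, so a footprint that overshoots a tile edge by even a fraction of a texel is already reading the neighbor:

```python
# UV bounds of tiles in a 256x256 atlas of 16x16-pixel tiles.
# All names here are made up for illustration.
ATLAS_SIZE = 256.0
TILE_SIZE = 16.0

def tile_bounds(tx, ty):
    """Return (min_uv, max_uv) of the tile at grid cell (tx, ty)."""
    scale = TILE_SIZE / ATLAS_SIZE  # 0.0625 UV units per tile
    return ((tx * scale, ty * scale),
            ((tx + 1) * scale, (ty + 1) * scale))

# Tile (0,0) ends at u = 0.0625; tile (1,0) starts right there.
# Any sample at u > 0.0625 is reading the neighboring texture: bleed.
```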
The “solution” most people use? Just disable AF and live with blurry oblique textures. Good enough, but they are also cowards. We can do better.™
The Solution
For the Subway Surfers attention span zoomers amongst you, here’s some images before the yapping:
Before:

After:

Turns out, for nearest filtering… you can just implement it yourself in the shader, and the nice part is that all of this works without blowing your GPU up!
How It Works
The core idea behind anisotropic filtering is that when you look at a texture at an oblique angle, you get an ellipse footprint in texture space instead of a nice square. Hardware AF would normally:
- Figure out the major and minor axes of this ellipse
- Take multiple samples along the major axis
- Average them together
We need to do the same thing, but respect the atlas boundaries. There’s also one very important difference: we can just be lazy fucks, because with nearest filtering we don’t need all the samples that big boy texture filtering would normally take. Our ellipse is literally a straight fucking line along the major axis. So we just sample along that line, and mirror+clamp the UVs to stay within the subtexture boundaries. EZ PZ.
Step 1: calculate the Jacobian
mat2 J = inverse(mat2(dFdx(uv), dFdy(uv)));
J = transpose(J) * J; // the metric tensor described below
dFdx and dFdy give us how UV coordinates change per screen pixel. We invert this to get how screen space changes per UV coordinate, then compute J^T * J to get the metric tensor (fancy words for “the thing that tells us how stretched our texture sampling is”).
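If you want to sanity-check this step off-GPU, here’s a stdlib-Python sketch. The derivative vectors are hand-picked stand-ins for what dFdx/dFdy would return on a steeply tilted surface, and the helper names are mine:

```python
# CPU sketch of step 1. In the shader the hardware provides dFdx(uv) and
# dFdy(uv); here we fake them. Matrices are row-major nested tuples.

def mat2_inverse(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def mat2_transpose(m):
    return ((m[0][0], m[1][0]), (m[0][1], m[1][1]))

def mat2_mul(p, q):
    return tuple(
        tuple(sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

# A surface viewed at a steep angle: UVs change 10x faster along x than y.
duv_dx = (0.1, 0.0)  # stand-in for dFdx(uv)
duv_dy = (0.0, 1.0)  # stand-in for dFdy(uv)
jac = ((duv_dx[0], duv_dy[0]),
       (duv_dx[1], duv_dy[1]))      # columns are the derivatives
J = mat2_inverse(jac)               # screen change per UV change
M = mat2_mul(mat2_transpose(J), J)  # metric tensor J^T * J
```

The resulting metric tensor is strongly anisotropic (eigenvalues 100 and 1), which is exactly what the next step extracts.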
Step 2: extract eigenvalues
float d = determinant(J);
float t = J[0][0] + J[1][1];
float D = sqrt(abs(t*t - 4.001*d));
float V = (t-D)/2.0; // major axis (smaller eigenvalue)
float v = (t+D)/2.0; // minor axis (larger eigenvalue)
This is just the eigenvalue formula for a 2×2 matrix, solved analytically. (read: I got it from the internet lol) The major and minor eigenvalues tell us how much the texture is stretched in each direction.
Note the 4.001*d instead of 4*d - that’s a numerical stability hack to prevent the square root from going negative due to floating-point fuckery. I didn’t think about it too deeply, I’m not even sure if it’s needed but hey, it doesn’t hurt!
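To convince yourself the closed form is right, here’s a stdlib-Python check against a matrix with known eigenvalues (the 4.001 hack is dropped here because we want exact numbers):

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of ((a, b), (c, d)) via trace and determinant."""
    t = a + d                              # trace
    det = a * d - b * c                    # determinant
    D = math.sqrt(abs(t * t - 4.0 * det))  # discriminant (shader uses 4.001)
    return (t - D) / 2.0, (t + D) / 2.0    # (smaller, larger)

# The symmetric matrix ((2, 1), (1, 2)) has eigenvalues 1 and 3.
lo, hi = eigenvalues_2x2(2.0, 1.0, 1.0, 2.0)
```

Note the naming: the smaller eigenvalue is the major axis, because step 3 turns eigenvalues into radii via 1/sqrt - a small eigenvalue means a long axis.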
Step 3: calculate the major axis direction
float M = 1.0/sqrt(V); // major radius
float m = 1.0/sqrt(v); // minor radius (used for the sample count below)
vec2 A = M * normalize(vec2(-J[0][1], J[0][0]-V));
The eigenvector corresponding to the major eigenvalue gives us the direction we need to sample along.
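Quick plain-Python check that the shader’s eigenvector formula actually holds (the example matrix is made up; the shader’s J is symmetric because it’s a metric tensor):

```python
# For a symmetric matrix ((a, b), (b, c)) with eigenvalue V, the shader's
# eigenvector formula (-b, a - V) must satisfy A*v == V*v (before normalization).
a, b, c = 2.0, 1.0, 2.0   # made-up symmetric matrix, eigenvalues 1 and 3
V = 1.0                   # smaller eigenvalue -> the major axis in the shader
v = (-b, a - V)           # the shader's (unnormalized) eigenvector
Av = (a * v[0] + b * v[1],
      b * v[0] + c * v[1])
```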
Step 4: calculate sample count
float anisotropy = max(M/m, 1.0);
float sampleCount = min(ANISO_LEVEL, ceil(anisotropy));
We calculate the anisotropy ratio (how elongated our footprint is) and derive the sample count from it. This might be divergent, but it’s only a few samples and it’s coherent for nearby pixels, so it’s fine.
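The same logic in throwaway Python (ANISO_LEVEL and the radii are stand-in values, not taken from the shader):

```python
import math

ANISO_LEVEL = 16  # stand-in for the shader's compile-time constant

def sample_count(major_radius, minor_radius):
    """Samples along the major axis: one per unit of anisotropy, capped."""
    anisotropy = max(major_radius / minor_radius, 1.0)
    return min(ANISO_LEVEL, math.ceil(anisotropy))

# A 10:1 footprint needs 10 samples; an isotropic one needs just 1.
```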
Step 5: sample along the major axis
for (float i = -samplesHalf + 0.5; i < samplesHalf; i++) {
    // step along the major axis (ADivSamples is the axis A divided by the sample count)
    vec2 sampleUV = uv + ADivSamples * i;
    // reflect back into the subtexture, then clamp to the half-texel margin
    sampleUV = clamp(mirror(sampleUV, subtexMin, subtexMax), subtexMinClamped, subtexMaxClamped);
    vec4 colorSample = textureLod(texSampler, sampleUV, lod);
    c.rgb += colorSample.rgb * colorSample.a; // premultiplied alpha
    c.a += colorSample.a;
}
(I chose to pre-multiply my alpha, but it literally doesn’t matter. Do whatever you want, idc.)
So the trick is: mirroring + clamping.
Each subtexture in the atlas has boundaries. When sampling would go outside those boundaries, we:
- Mirror the UV coordinates back into the subtexture (like wrapping, but reflected)
- Clamp to a half-texel margin to prevent any bleeding (I wasn’t smart enough to figure out how to exactly stay within bounds, cheapo fix!)
The mirror() function handles wrapping within a subtexture:
vec2 mirror(vec2 uv, vec2 minBounds, vec2 maxBounds) {
    vec2 range = maxBounds - minBounds;
    vec2 normalized = (uv - minBounds) / range;
    normalized = 1.0 - abs(mod(normalized, 2.0) - 1.0);
    return minBounds + normalized * range;
}
This creates a seamless mirrored repeat within each subtexture boundary, so samples never bleed into neighboring textures.
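Here’s a line-for-line Python port of mirror() (1D for readability; the GLSL version does both components at once), handy for convincing yourself it reflects out-of-bounds coordinates instead of wrapping them. Python’s % matches GLSL mod() for a positive divisor:

```python
def mirror(uv, min_bound, max_bound):
    """Mirrored repeat of a coordinate into [min_bound, max_bound]."""
    rng = max_bound - min_bound
    normalized = (uv - min_bound) / rng
    # triangle wave: 0..1..0..1.. as `normalized` walks the number line
    normalized = 1.0 - abs(normalized % 2.0 - 1.0)
    return min_bound + normalized * rng

# In-bounds coordinates come back unchanged; out-of-bounds ones reflect
# back across the nearest edge.
```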
Debug Mode
There is a debug mode that visualizes the anisotropy ratio as a heatmap thingie:
if (DEBUG_ANISO != 0) {
    vec4 anisoColor = mapAniso(anisotropy, 256.0);
    return mix(anisoColor, baseColor, 0.4);
}
Blue = isotropic (viewed straight up), yellow = moderate anisotropy, red = high anisotropy (viewed from the side).
Conclusion
Originally I had another section glazing this shit and why it’s the next best thing since sliced bread but I got bored. You probably got the gist by now. ;)
If you’re rendering with texture atlases and want anisotropic filtering, hardware AF won’t help you. 💀 Roll your own in the shader. It’s like 90 lines of (admittedly pretty fucked) GLSL code and makes your game look real smooth.
Now you can finally stop asking me to write it up ;)
Here’s a permalink to the shader if you want to learn or copypaste: https://github.com/Pannoniae/BlockGame/blob/master/src/assets/shaders/inc/af.inc.glsl