✨ When Derivatives Go Wrong: Screen-Space Decal Artifacts
Today we are going to take a look at a peculiar artifact that can appear when objects are rendered in front of screen-space decals. We are also going to look at why this is especially bad when dithered transparency is used.
You might have seen it in your scenes before, especially if they are decal-heavy or if you use dithering. It’s not always visible, as it depends a bit on the decals themselves, but when it is, it can be quite distracting (at least if you know what to look for). While this is not a Unity-specific issue (you can actually find it in some commercial games shipped on other engines), this article is written in the context of Unity, since that is what I’ve been using. However, most of the information applies generally, unless stated otherwise.
Notice the bright white/gray pixels around the borders of the left character.
Background
I first noticed this issue back in the early pre-production of Deliver At All Costs, and revisited it a few times during production when we had problems with it. A simple workaround is to disable mip maps for the textures used in the offending decals. This completely removes the artifacts around objects, but doing so may decrease performance and reintroduce the aliasing that mip maps are meant to fix, so it is not an ideal solution. While I was able to figure out the cause of the artifacts and why they were happening, I didn’t have enough time to come up with a solution for them, so unfortunately we had to ship with them.
But a few months ago I finally got some time to work on it, so in this post I am going to explain why the artifact appears, and introduce an alternative solution that I came up with, without resorting to disabling mip maps.
Screen Space Partial Derivatives
Shaders are executed in big groups called waves, where each running instance of the shader, called a lane, executes the same instructions in parallel and in lockstep with the others. Because of this architecture, it is actually possible for shader lanes to inspect the variables of other running lanes that are part of the same wave.
One use for this cross-reading ability is in the calculation of partial derivatives. To facilitate this, waves are divided into 2x2 pixel blocks, often called quads. You can use the ddx and ddy functions in HLSL to compare the result of any expression with that of the neighboring pixel along the x or y axis within the same quad, depending on which of the two functions is used.
It is worth noting that the GPU always divides pixel waves into these 2x2 quads, whether derivatives are explicitly used or not.
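As a quick sketch, this is what the intrinsics look like in use (a hypothetical fragment shader; the uv input and the scale factor are just for illustration):

```hlsl
// Hypothetical fragment shader illustrating ddx/ddy. For any expression e,
// ddx(e) is the difference between the values of e in horizontally adjacent
// pixels of the quad, and ddy(e) the difference between vertically adjacent ones.
float4 Frag(float2 uv : TEXCOORD0) : SV_Target
{
    float2 changePerPixelX = ddx(uv); // rate of change toward the horizontal neighbor
    float2 changePerPixelY = ddy(uv); // rate of change toward the vertical neighbor

    // Visualize the rate of change of the uv across the screen
    // (scaled up, since per-pixel derivatives are usually tiny).
    return float4(abs(changePerPixelX), abs(changePerPixelY)) * 100.0;
}
```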
Image borrowed from DMGregory on Stack Overflow. If you want to learn more about screen space derivatives, you should check out their post.
The partial derivatives are used by the GPU for a few very important things, namely mip-mapping and anisotropic filtering. To determine which mip level to use and how anisotropic filtering should be applied when sampling a texture, the GPU calculates the partial derivatives of the uv coordinates passed into the Sample function. From the rate of change of the uv, the GPU can deduce the optimal texel density in the 2x2 quad. It can also deduce whether the surface is viewed at a steep angle, which is necessary for anisotropic filtering.
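The mip selection can be approximated in shader code like this (a simplified sketch of what the hardware does internally; it ignores anisotropic filtering, and real implementations vary):

```hlsl
// Rough approximation of hardware mip level selection, ignoring anisotropy.
// texSize is the texture resolution in texels, e.g. float2(1024, 1024).
float EstimateMipLevel(float2 uv, float2 texSize)
{
    float2 dx = ddx(uv) * texSize; // texel-space change per pixel, x direction
    float2 dy = ddy(uv) * texSize; // texel-space change per pixel, y direction

    // Pick the larger of the two footprints (compared squared to avoid a sqrt).
    float maxChangeSq = max(dot(dx, dx), dot(dy, dy));

    // 0.5 * log2(x) == log2(sqrt(x)), i.e. log2 of the larger footprint length.
    return max(0.0, 0.5 * log2(maxChangeSq));
}
```

Intuitively: a uv step of one texel per pixel gives mip 0, a step of two texels per pixel gives mip 1, and so on.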
Surface Discontinuities and Derivatives
What happens with the derivatives at the edge of a surface? Imagine a 2x2 quad where some of its pixels lie outside of the mesh that is being rendered: how can the GPU get the uv coordinates to calculate the derivatives from a pixel that is not being drawn?
The GPU actually has to cheat to solve this. It runs the pixel shader for these “off-mesh” pixels anyway, but prevents those lanes from writing any output. Since these helper lanes execute the same code, the GPU can use the uvs they calculate for the derivatives that the normal lanes need.
As you can imagine, these helper lanes are not free; they are full running instances of the shader doing all the same operations and calculations as the normal lanes. This means that we are actually doing work for more pixels on the screen than are actually covered by the mesh. This is also the reason why “micro-triangles” are bad for performance: since the GPU always rasterizes triangles in 2x2 blocks, the minimum number of pixel shader instances that must execute is 4, even if the triangle only takes up a single pixel. But this is a topic that is out of scope for this article. You may want to check out this post if you want to learn more: How bad are small triangles on GPU and why?.
While the GPU handles surface discontinuities for us at the triangle level to get the proper uvs, it doesn’t handle discontinuities introduced by us in the shader. This is actually where we get to the root cause of the artifacts we saw before.
How Screen Space Decals are Projected
Now that we understand how derivatives work, we can look at how decals are actually rendered. Unity’s render pipelines use screen space deferred decals. They are rendered into a separate group of buffers collectively known as the D-buffer. This is done after the depth pre-pass, but before the geometry pass. Unity later applies the D-buffer data on top of the normal geometry PBR data before the lighting calculations when rendering the scene. This works well with both forward and deferred rendering. For deferred, it would also be technically possible to render the decals directly on top of the G-buffer, but the principle would be effectively the same, just without the intermediate buffer.
Screen space decals generally work by rendering a transformed cube with a special shader that projects a texture in a specific direction; this is called a projector. Any pixels of the cube that do not cover the underlying surface the projection is facing are discarded.

Mesh decals, in contrast, render the decal as a regular quad with no projection. With mesh decals we have access to uvs from the vertices, but when we use projectors, as in the screen space method, we do not get any usable uvs from the geometry, so we must generate them ourselves. In screen space decals, the shader looks at the depth buffer and reconstructs the world position for each pixel from the depth value. It then uses this position to do planar uv mapping in world or object space (e.g. using positionWS.xz as uv coordinates) in the projection direction.
The surface discontinuity is introduced already at the depth stage, since the depth buffer contains the final scene depth after all objects have been rendered. Imagine that there is an object in front of a decal projector in the scene. When the decal projector is rendering, it will look at the depth texture and see that object in front. Because the object is in front, we should not render the decal for those pixels, so they are discarded. The GPU will still run the shader so that derivatives can be calculated from the discarded lanes, but because of the big difference in depth value between a pixel on the occluding object and a pixel on the surface we are projecting on, the derivative between those pixels becomes much larger than expected. This causes the GPU to select a very high mip level, as it believes that the object is very small or viewed at a very steep angle. Often the highest mip is selected, which is just a single pixel with the average color of the whole texture. This is where the artifact gets its color from, and it also explains why disabling mip maps on the texture resolves the issue.
To understand what is going on better, let’s have a look at what the shader code looks like. This is how Unity does it in URP, and the implementation in HDRP is almost identical. (Unity version: 6000.3)
void Frag(Varyings input, OUTPUT_DBUFFER(outDBuffer))
{
    float2 positionCS = input.positionCS.xy;

    // First we load the depth from the depth buffer for the current pixel.
#if UNITY_REVERSED_Z
    float depth = LoadSceneDepth(positionCS.xy);
#else
    // Adjust z to match NDC for OpenGL
    float depth = lerp(UNITY_NEAR_CLIP_VALUE, 1, LoadSceneDepth(positionCS.xy));
#endif

    // Then we reconstruct the world space position from the depth.
    float2 positionSS = positionCS * _ScreenSize.zw;
    float3 positionWS = ComputeWorldSpacePosition(positionSS, depth, UNITY_MATRIX_I_VP);

    // Transform position into object space for clipping and uv projection.
    float3 positionOS = TransformWorldToObject(positionWS);
    positionOS = positionOS * float3(1.0, -1.0, 1.0);

    // The mesh is a unit cube. If the position is not inside of the cube, it has to be discarded.
    // This can happen if there is an occluding object in front of the decal; discarding ensures
    // that the decal is not rendered on top of that object too.
    float clipValue = 0.5 - max(max(abs(positionOS).x, abs(positionOS).y), abs(positionOS).z);
    clip(clipValue); // discard if position is outside the cube mesh

    // Uv is planar projected on the Y-axis. This uv is not always continuous over a 2x2 pixel
    // quad since we discard pixels above!
    float2 uv = positionOS.xz + float2(0.5, 0.5);

    // Sample textures and do any pixel shader logic. Here the GPU will use the potentially
    // discontiguous uvs for mip mapping...
    DecalSurfaceData surfaceData;
    surfaceData.baseColor = SAMPLE_TEXTURE2D(_ColorTexture, sampler_ColorTexture, uv);
    // etc...

    ENCODE_INTO_DBUFFER(surfaceData, outDBuffer);
}
A Worst Case With Dithering
Dithering is a common technique in games to achieve transparency without using alpha blending. It works by skipping the rendering of pixels according to a repeating pattern whose density varies with the desired level of transparency. Because this introduces lots of small holes in geometry, and these holes are visible in the depth buffer, dithering can be a worst case scenario for this artifact.
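A minimal dither looks something like the sketch below. It assumes a fixed checkerboard at 50% opacity; real games typically use a Bayer matrix or a blue noise texture, thresholded against the desired alpha value.

```hlsl
// Sketch of a checkerboard dither at 50% opacity. Every other pixel is
// discarded, punching per-pixel holes that end up in the depth buffer.
void ApplyDither(float2 positionCS)
{
    float2 p = floor(positionCS);

    // Discard pixels where x + y is even, keeping a checkerboard pattern.
    if (fmod(p.x + p.y, 2.0) < 0.5)
        discard;
}
```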
In this picture there is a dither pattern where every other pixel is rendered. This causes all pixels within the dithered squares to exhibit the discontinuity issue that we previously explored. Notice that the decals behind the square lose all detail, and you are actually able to make out the shape of the individual decal projectors because they are all forced to the highest mip level; this includes the alpha channel used for transparency.
We had issues with this in a handful of places in Deliver At All Costs, which at the time I fixed by manipulating or disabling mip maps on the few problematic decals that appeared behind dithered objects. Luckily there were not many, and these tricks didn’t have any meaningful negative visual effects in those cases.
My Solution
My solution relies on a threshold value to detect the pixels where the artifacts can appear. This may not be ideal, but the benefit is that it is really easy to implement by simply modifying your decal shader graph(s). The idea is simple: if we can detect which pixels cause the discontinuities, we can override the mip level to mip-0 for those pixels. The number of pixels that we target will be low, and while the resulting color value from mip-0 will not exactly match the value that would have been rendered if we knew the correct mip level, it should be close enough to hide the artifacts.
The threshold value is something that you might have to tune to work well for your scene/game, but I have found that the value 0.04 works well for me. You don’t want the value to be too low, as that causes more pixels to be forced to mip-0 unnecessarily, potentially causing aliasing.
Interesting note: Calculating derivatives after a non-uniform discard is undefined in some graphics API standards. However, in practice everyone relies on this producing predictable results, and most GPU vendors convert the discarded lanes into helper lanes that keep executing, so that derivative calculations work as expected.
float2 uv; // Let this be the uv input that has the discontinuity

// Compute the maximum change in any direction. Either coarse or fine derivatives work.
float2 uvddx = ddx(uv);
float2 uvddy = ddy(uv);
float absmax = max(max(abs(uvddx.x), abs(uvddx.y)),
                   max(abs(uvddy.x), abs(uvddy.y)));

// If the change is over a certain threshold, we guess that we have detected a discontinuity.
// This value was derived empirically by using a debug shader to find a suitable amount of coverage.
const float k_threshold = 0.04;
if (absmax > k_threshold)
{
    // Forcing the derivatives to zero effectively causes mip-0 to be selected.
    uvddx = float2(0, 0);
    uvddy = float2(0, 0);
}

float4 color = SAMPLE_TEXTURE2D_GRAD(_ColorTexture, sampler_ColorTexture, uv, uvddx, uvddy);
// etc...
This is the result:
Shader Graph Implementation
You can easily implement this technique in a shader graph. Here is a custom node wrapping the derivative calculations.

You will also need a custom function node to call the SAMPLE_TEXTURE2D_GRAD function.

Now you simply use these nodes to do all your texture samples in your decal shader.

Using the boolean pin it is also easy to implement a debug mode allowing you to tune the threshold value. This image displays all the pixels that are over the threshold in red.
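The debug mode itself can be as simple as overriding the output color for the detected pixels (a sketch; absmax and k_threshold are the values computed in the earlier snippet):

```hlsl
// Debug visualization: output solid red for pixels whose derivative
// magnitude exceeds the threshold, instead of the sampled decal color.
if (absmax > k_threshold)
{
    surfaceData.baseColor = float4(1, 0, 0, 1);
}
```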

Closing Thoughts
I’m happy with the results of this approach and with how simple it is to implement. That said, I would still like to explore an alternative that doesn’t rely on a fixed threshold value; something to revisit another day.
After writing this post, I came across an article by Bart Wronski that dives much deeper into this problem and discusses several alternative solutions. If you found this topic interesting, I highly recommend giving it a read: Fixing screen-space deferred decals.