FXC/D3DCompile() bug causes float literals to be emitted as the wrong value (CS.D2D1) #780

Open
rickbrew opened this issue Mar 11, 2024 · 0 comments
Labels: bug 🐛 Something isn't working · untriaged 🧰 A new issue that needs initial triage

rickbrew commented Mar 11, 2024

This appears to be a bug in how fxc / D3DCompile is parsing float literals, but the workaround seems easy enough. Link to ridiculously long Discord conversation: https://discord.com/channels/590611987420020747/996417435374714920/1216522083626913922

tl;dr: Float literals should always be emitted as asfloat(uint_representation_of_float_value) instead of as actual float literals. This works around a bug in the shader compiler. (Alternatively, they can be emitted as a double with a cast, e.g. (float)1234.56L. The shader compiler is smart enough to emit the float constant directly w/o actually using doubles.)
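
For illustration, here's roughly what that lowering could look like on the codegen side (just a sketch; EmitFloatLiteral is a made-up helper name, not the actual generator code):

using System;

static class FloatLiteralLowering
{
    // Hypothetical helper: render a float constant as HLSL text that survives
    // fxc/D3DCompile unchanged, by emitting its exact bit pattern instead of a
    // decimal literal.
    public static string EmitFloatLiteral(float value)
    {
        // asfloat() reinterprets the uint bits as a float, so the constant that
        // lands in the bytecode is exactly the one computed here.
        uint bits = BitConverter.SingleToUInt32Bits(value);
        return $"asfloat({bits}u)";
    }
}

// EmitFloatLiteral(131072.65f) returns "asfloat(1207959594u)"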

Debugging this consumed my entire day, and the bug is probably causing all sorts of small errors in the many shaders people have written with CS.D2D1. My guess is that regular ComputeSharp (non-D2D1) is not affected, since it doesn't use fxc / D3DCompile().

Consider these two shaders, which are identical except that the x and y components of the return value are swapped:

[D2DInputCount(0)]
[D2DGeneratedPixelShaderDescriptor]
internal readonly partial struct BadShader1
    : ID2D1PixelShader
{
    public float4 Execute()
    {
        return new float4(131072.65f, (float)131072.65, 0.0f, 1.0f);
    }
}

[D2DInputCount(0)]
[D2DGeneratedPixelShaderDescriptor]
internal readonly partial struct BadShader2
    : ID2D1PixelShader
{
    public float4 Execute()
    {
        return new float4((float)131072.65, 131072.65f, 0.0f, 1.0f);
    }
}

The HLSL that is generated is fine:

        /// <inheritdoc/>
        [global::System.CodeDom.Compiler.GeneratedCode("ComputeSharp.D2D1.D2DPixelShaderDescriptorGenerator", "3.0.0.0")]
        [global::System.Diagnostics.DebuggerNonUserCode]
        [global::System.Diagnostics.CodeAnalysis.ExcludeFromCodeCoverage]
        static string global::ComputeSharp.D2D1.Descriptors.ID2D1PixelShaderDescriptor<BadShader1>.HlslSource =>
            """
            #define D2D_INPUT_COUNT 0

            #include "d2d1effecthelpers.hlsli"

            D2D_PS_ENTRY(Execute)
            {
                return float4(131072.66, (float)131072.65L, 0.0, 1.0);
            }
            """;
...
        /// <inheritdoc/>
        [global::System.CodeDom.Compiler.GeneratedCode("ComputeSharp.D2D1.D2DPixelShaderDescriptorGenerator", "3.0.0.0")]
        [global::System.Diagnostics.DebuggerNonUserCode]
        [global::System.Diagnostics.CodeAnalysis.ExcludeFromCodeCoverage]
        static string global::ComputeSharp.D2D1.Descriptors.ID2D1PixelShaderDescriptor<BadShader2>.HlslSource =>
            """
            #define D2D_INPUT_COUNT 0

            #include "d2d1effecthelpers.hlsli"

            D2D_PS_ENTRY(Execute)
            {
                return float4((float)131072.65L, 131072.66, 0.0, 1.0);
            }
            """;

When running these shaders and reading the results back on the CPU, the value produced by the plain float literal is wrong (the X value from the first shader, or the Y value from the second shader):

[screenshot: CPU readback of the shader outputs]

The float literal is roundtripping as 131072.703125 instead of 131072.656250. The double-cast-to-float is fine (which the shader compiler emits directly without actually using doubles).

Not shown here is that Hlsl.AsFloat(1207959594U) also works fine (1207959594U being 131072.65 bit-cast to a uint).
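
At the shader level, that workaround looks something like this (just a sketch with a made-up shader name; Hlsl.AsFloat is the same intrinsic mentioned above):

[D2DInputCount(0)]
[D2DGeneratedPixelShaderDescriptor]
internal readonly partial struct WorkaroundShader
    : ID2D1PixelShader
{
    public float4 Execute()
    {
        // 1207959594U is 131072.65f bit-cast to a uint; Hlsl.AsFloat maps to
        // asfloat() in the generated HLSL, so the exact bit pattern reaches the
        // shader bytecode instead of going through the compiler's literal parser.
        return new float4(Hlsl.AsFloat(1207959594U), (float)131072.65, 0.0f, 1.0f);
    }
}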

I was able to determine that the bytecode is actually different, and that the value emitted by D3DCompile() is just wrong: https://discord.com/channels/590611987420020747/996417435374714920/1216542121297973369
[screenshot: disassembled shader bytecode showing the wrong constant]

So:

  1. The float literal is bad. The shader compiler emits the wrong value into the bytecode (131072.703125); see the quick bit-pattern check below.
  2. The double literal cast to float is fine. The shader compiler emits the correct value (131072.656250), and does not actually use double precision instructions. I don't know if this is a 100% guarantee though; it's just what happened with this particular code.
  3. The bit-cast from uint to float is fine. The shader compiler emits the correct value (131072.656250). (Sorry, it's not in the screenshots; there's just too much to juggle here and I don't want to go recreate all the screenshots etc.)
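
As a side note (this is my own arithmetic on the numbers above, not something taken from the disassembly), the good and bad constants are only a few ULPs apart, which is easy to confirm with BitConverter:

using System;

// The constant the compiler should have emitted vs. the one it actually emitted.
uint goodBits = BitConverter.SingleToUInt32Bits(131072.65f);          // 1207959594
uint badBits = BitConverter.SingleToUInt32Bits(131072.703125f);       // 1207959597
Console.WriteLine((double)BitConverter.UInt32BitsToSingle(goodBits)); // 131072.65625
Console.WriteLine(badBits - goodBits);                                // 3 (ULPs)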