
[quant][pt2e] Fix conv-bn weight + bias per channel QAT #125208

Closed · wants to merge 1 commit

Commits on Apr 30, 2024

  1. [quant][pt2e] Fix conv-bn weight + bias per channel QAT

    Summary: This commit fixes the pattern matching for conv-bn
    during QAT fusion where both weight and bias are quantized per
    channel. Previously this failed because weights and biases used
    the same example kwargs for their scales and zero points,
    causing these qparams to be tied during pattern matching.
    
    Test Plan:
    python test/test_quantization.py TestQuantizePT2EQAT_ConvBn1d.test_qat_conv_bn_per_channel_weight_bias
    python test/test_quantization.py TestQuantizePT2EQAT_ConvBn2d.test_qat_conv_bn_per_channel_weight_bias
    
    Reviewers: jerryzh168, angelayi
    
    Subscribers: jerryzh168, angelayi, supriyar
    
    [ghstack-poisoned]
    andrewor14 committed Apr 30, 2024
    Full SHA: 90e6ebd
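
The bug described above is that weight and bias shared the same example scale/zero-point kwargs, tying their qparams together during pattern matching. A minimal, dependency-free sketch of per-channel fake quantization, illustrating why weight and bias each need their own scales and zero points; all names here are illustrative and not the actual PyTorch pt2e API:

```python
def fake_quant_per_channel(rows, scales, zero_points, qmin=-127, qmax=127):
    """Quantize then dequantize each row (output channel) with its own qparams.

    Hypothetical helper for illustration only; real QAT uses fake-quantize
    ops inserted into the graph, not a Python loop.
    """
    out = []
    for row, scale, zp in zip(rows, scales, zero_points):
        dq_row = []
        for v in row:
            q = round(v / scale) + zp
            q = max(qmin, min(qmax, q))      # clamp to the quantized range
            dq_row.append((q - zp) * scale)  # dequantize back to float
        out.append(dq_row)
    return out

# Two output channels with very different magnitudes: per-channel scales
# let each channel use the quantized range independently.
weight = [[0.5, -0.25], [50.0, -25.0]]
weight_scales = [0.005, 0.5]   # one scale per output channel
weight_zps = [0, 0]

bias = [[0.1], [10.0]]
bias_scales = [0.001, 0.1]     # bias carries its OWN qparams; if these were
bias_zps = [0, 0]              # tied to weight_scales (the bug), the bias
                               # would be quantized with the wrong scale

fq_weight = fake_quant_per_channel(weight, weight_scales, weight_zps)
fq_bias = fake_quant_per_channel(bias, bias_scales, bias_zps)
```

With well-chosen per-channel scales, the round-trip error stays small for both tensors, which is exactly what tied qparams would break.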