Changed behavior after BatchNormToAffine transformation #50

Open
auphelia opened this issue Mar 29, 2023 · 0 comments
Labels
bug Something isn't working


Quick summary

A standalone BatchNormalization node with certain settings (see the .onnx file inside bn_model.zip) changes its functional behavior when transformed with the BatchNormToAffine transformation.

Steps to Reproduce

  1. The issue was observed when using the FINN docker container, with the current main branch of qonnx (commit hash: 12c96a3ded06beacab08e0f554e4ed014476c0aa).
  2. Run the BatchNormToAffine transformation on the ONNX file.
  3. Execute the model before and after the transformation with random floating point input (`x = gen_finn_dt_tensor(DataType["FLOAT32"], (1, 64, 64, 64))`; `inp_dict = {"global_in": x}`).
  4. Compare the outputs of the two executions.
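The comparison in the steps above can be sketched in plain numpy (the real per-channel parameters live in bn_model.onnx and are not shown here, so the values below are hypothetical; the affine form `y = A*x + B` is the one BatchNormToAffine produces):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-channel BatchNormalization parameters (float32, as in ONNX)
C = 64
scale = rng.standard_normal(C).astype(np.float32)
bias = rng.standard_normal(C).astype(np.float32)
mean = rng.standard_normal(C).astype(np.float32)
var = (rng.random(C) + 0.1).astype(np.float32)
eps = np.float32(1e-5)

# Random float32 input, same shape as in the issue
x = rng.standard_normal((1, C, 64, 64)).astype(np.float32)

# Reference BatchNormalization (inference mode)
y_bn = (x - mean[:, None, None]) / np.sqrt(var[:, None, None] + eps) \
    * scale[:, None, None] + bias[:, None, None]

# Folded affine form: y = A*x + B
A = scale / np.sqrt(eps + var)
B = bias - A * mean
y_affine = A[:, None, None] * x + B[:, None, None]

# The two agree only up to float32 rounding, which is what the
# before/after comparison is sensitive to
max_err = np.max(np.abs(y_bn - y_affine))
```

With exact arithmetic `max_err` would be zero; in float32 the reordered division and multiplication leave a small residual.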

Expected behavior

The functional behavior should not change due to the transformation: the outputs before and after should match.

Actual behavior

The outputs before and after the transformation do not match.

Possible fix

It seems to be a rounding error, coming from this calculation: `A = scale / np.sqrt(epsilon + variance)`

@auphelia auphelia added the bug Something isn't working label Mar 29, 2023