[ATen][CUDA][AMP] Fix dtype mismatch in linalg_vector_norm #125175
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/125175
Note: Links to docs will display an error until the docs builds have completed. ❌ 1 New Failure as of commit ce40913 with merge base 07d3af8 — the following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
I believe the test failure is unrelated. @lezcano, can you take another look?
This is more reasonable. Thank you!
@pytorchbot merge -i
Merge started. Your change will be merged while ignoring the following 1 check: pull / linux-focal-cuda12.1-py3.10-gcc9 / test (default, 2, 5, linux.4xlarge.nvidia.gpu). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
…25175) Fixes pytorch#125174 Pull Request resolved: pytorch#125175 Approved by: https://github.com/eqy, https://github.com/lezcano
Fixes #125174
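For context, a minimal sketch of the kind of call this fix concerns: invoking `torch.linalg.vector_norm` inside an autocast region. The original report (#125174) concerns CUDA AMP; the shapes, dtypes, and the use of CPU autocast below are illustrative assumptions so the sketch runs without a GPU, not a verbatim reproduction from the issue.

```python
import torch

# Hypothetical sketch of the scenario behind pytorch#125174: computing a
# vector norm under autocast, where the op's compute dtype can differ from
# the autocast dtype. CPU autocast with bfloat16 stands in for CUDA AMP
# (float16) so this runs on any machine.
x = torch.randn(8, dtype=torch.float32)

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = torch.linalg.vector_norm(x, ord=2)

# With the fix applied, the call completes and yields a floating-point
# scalar rather than raising a dtype-mismatch error.
print(out.dtype)
```

This only illustrates the call pattern; the actual fix lives in the ATen CUDA kernel for `linalg_vector_norm`.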
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @lezcano @mcarilli @ptrblck @leslie-fang-intel @jgong5 @eqy @nWEIdia @tinglvv