[Bug]: sparse element-wise multiplication returns wrong indptr on CUDA #1273
I tried reproducing it on my local machine. Code:

```python
import torch

A = [[0, 0],
     [1, 0],
     [0, 2]]
B = [[1, 0],
     [0, 0],
     [2, 3]]
a = torch.tensor(A, device='cuda:0').float().to_sparse_csr()
b = torch.tensor(B, device='cuda:0').float().to_sparse_csr()
print(a * b)
```

Output on Torch 2.0.0:

Output on Torch 2.1.1:

The zero values produced by the multiplication are stored as explicit (significant) entries in the new version, and this happens only when running on GPU. What do you think we should do, @ClaudiaComito?
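For reference, here is a pure-Python sketch (no torch dependency; the helper name is mine, not a real API) of the element-wise product of the two matrices above and the `indptr` a canonical CSR representation should have once explicit zeros are dropped:

```python
# Pure-Python sketch: element-wise product of the matrices from the
# snippet above, and the indptr of its canonical CSR form (zeros dropped).

A = [[0, 0],
     [1, 0],
     [0, 2]]
B = [[1, 0],
     [0, 0],
     [2, 3]]

# Dense element-wise product.
product = [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def csr_indptr(dense):
    """Row pointers of the CSR form, counting only nonzero entries."""
    indptr = [0]
    for row in dense:
        indptr.append(indptr[-1] + sum(1 for v in row if v != 0))
    return indptr

print(product)              # [[0, 0], [0, 0], [0, 6]]
print(csr_indptr(product))  # [0, 0, 0, 1]
```

A result whose `indptr` counts explicit zeros as stored entries would deviate from this canonical form, which is what the CUDA output appears to do.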
Brilliant @Mystic-Slice, thanks for looking into this! I think you should go ahead and report it to PyTorch. When we merge support for PyTorch 2.1, we'll skip that test until a fix is out. Does that sound reasonable?
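Skipping the test until a fix lands could look something like the sketch below (a hedged illustration, not Heat's actual test code; the test and variable names are assumptions). It gates the test on the installed PyTorch version using `unittest.skipIf`:

```python
# Sketch: version-gated test skip, until the upstream CUDA indptr bug
# is fixed. TORCH_VERSION is a placeholder; in practice one would parse
# torch.__version__.
import unittest

TORCH_VERSION = (2, 1, 0)  # placeholder for the installed PyTorch version

class TestSparseElementwise(unittest.TestCase):
    @unittest.skipIf(TORCH_VERSION >= (2, 1),
                     "sparse CSR mul returns wrong indptr on CUDA with PyTorch >= 2.1")
    def test_sparse_mul_csr_cuda(self):
        self.fail("would exercise a * b on CUDA here")

suite = unittest.TestLoader().loadTestsFromTestCase(TestSparseElementwise)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

With the placeholder version set to 2.1.0, the test is recorded as skipped rather than failed, so the suite stays green while the bug is open upstream.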
Yeah. Sounds good.

Reported here. Thanks again @Mystic-Slice!

This issue is stale because it has been open for 60 days with no activity.
What happened?

While running our unit tests with PyTorch 2.1, the sparse module tests failed on GPU (see error message below). Tests passed on CPU. The failure occurs on any number of processes, single-GPU as well as multi-GPU.
Tested with CUDA only, not yet with ROCm.
Tagging @Mystic-Slice in case he wants to explore.
Python was actually 3.11, PyTorch 2.1.0. Will update the issue template.
Code snippet triggering the error
Error message or erroneous outcome
Version
1.3.x
Python version
None
PyTorch version
None
MPI version
No response