Further testing for preserving tensor context with operations #503
Comments
Good catch, thanks @AtomicCactus! Could you open a small PR to fix the issue?

Sure thing! #504

Thanks @AtomicCactus! I reviewed the PR; for the test we could check dtype rather than device, since our current CI pipeline doesn't have GPU support.
Describe the bug
Decomposing a 2D tensor along both modes while specifying two ranks results in an error, because internally a tensor is created on the CPU as part of the process, and it cannot be concatenated with tensors on the GPU.
Works fine when the rank is specified as an integer, but not as a list:
- rank=16 works
- rank=[16, 16] crashes

Works fine on the CPU, but performance is not the same.
Steps or Code to Reproduce
Expected behavior
Tucker decomposition should not fail when ranks are provided as an array of values.
Actual result
Versions