Features/1400 implement unfold operation similar to torch tensor unfold #1419
base: main
Conversation
Thank you for the PR!
…ilar_to_torch_Tensor_unfold
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##             main    #1419      +/-   ##
==========================================
+ Coverage   91.71%   91.73%   +0.02%
==========================================
  Files          80       80
  Lines       11656    11692      +36
==========================================
+ Hits        10690    10726      +36
  Misses        966      966

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
The tests on the CUDA runner seem to hang at
…> chunk_size more tests
On the Terrabyte cluster, using 8 processes on 2 nodes with 4 GPUs each, I get the following error:
On CPU, everything seems to work (at least in
@ClaudiaComito I have now added a PR description and tests with different datatypes.
Thanks a lot @FOsterfeld for this PR; it is going to be really useful. I have a few change requests, mostly about simplifying the code by using existing methods. Thanks a lot!
…ided-halo Support one-sided halo for DNDarrays
Due Diligence
Description
Add the function unfold to the available manipulations. For a DNDarray a, unfold(a, dimension, size, step) behaves like torch.Tensor.unfold: it returns all slices of length size along the given dimension, taken every step elements.
Example:
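Since the example body is not included above, here is a minimal plain-Python sketch of the 1-D torch.Tensor.unfold semantics that the new function mirrors. The helper name and signature below are illustrative only; the actual PR operates on distributed DNDarrays.

```python
def unfold(seq, size, step):
    """Sliding windows of length `size`, taken every `step` elements.

    Mirrors the semantics of torch.Tensor.unfold along a 1-D tensor:
    windows start at 0, step, 2*step, ... and must fit entirely in `seq`.
    """
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, step)]


print(unfold(list(range(8)), size=3, step=2))
# [[0, 1, 2], [2, 3, 4], [4, 5, 6]]
```

With torch itself, the equivalent call would be torch.arange(8).unfold(0, 3, 2), which produces the same three windows stacked as a new trailing dimension.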
Issue/s resolved: #1400
Changes proposed:
Type of change
Memory requirements
Performance
Does this change modify the behaviour of other functions? If so, which?
no