Batched matrix multiplication. #1261
Conversation
split dimension is a batch dimension
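The commit message above refers to treating the split dimension as a batch dimension. As a hedged illustration of the batched matmul semantics being added (sketched with NumPy, not heat's actual distributed implementation):

```python
import numpy as np

# Batched matrix multiplication: the leading dimension is a batch
# dimension, and each 2-D slice is multiplied independently.
a = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)
b = np.arange(2 * 3 * 2, dtype=float).reshape(2, 3, 2)

c = np.matmul(a, b)  # shape (2, 2, 2)

# Equivalent per-slice computation over the batch dimension:
ref = np.stack([a[i] @ b[i] for i in range(2)])
print(np.allclose(c, ref))  # True
```

In heat, a DNDarray split along such a leading dimension can then be multiplied batch-wise without communicating the matrix slices between processes.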
…gebra_for_arrays_with_dimension_2_in_particular_matmul
Thank you for the PR!
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

@@           Coverage Diff            @@
##             main    #1261    +/-   ##
==========================================
+ Coverage   91.87%   91.93%    +0.05%
==========================================
  Files          80       80
  Lines       11822    12041      +219
==========================================
+ Hits       10862    11070      +208
- Misses       960      971       +11

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
heat/core/linalg/basics.py (Outdated)

@@ -487,9 +487,12 @@ def matmul(a: DNDarray, b: DNDarray, allow_resplit: bool = False) -> DNDarray:
    sanitation.sanitize_in(a)
    sanitation.sanitize_in(b)

    if a.gshape[-1] != b.gshape[0]:
    batch_dim = max(a.ndim, b.ndim) - 2
    batched = batch_dim > 2
Maybe > 0 instead of > 2? Or did I misunderstand something?
Thanks, you're right, it should be 0.
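A minimal sketch of the condition under discussion, using a hypothetical helper (not heat's actual code) to show why the check must be > 0 rather than > 2:

```python
def is_batched(a_ndim: int, b_ndim: int) -> bool:
    """Decide whether a matmul over arrays of the given ranks is batched."""
    # Number of leading (batch) dimensions beyond the trailing two
    # matrix dimensions.
    batch_dim = max(a_ndim, b_ndim) - 2
    # Even a single extra leading dimension makes the operation batched,
    # so the correct comparison is `> 0`; `> 2` would wrongly treat
    # 3-D and 4-D inputs as non-batched.
    return batch_dim > 0

print(is_batched(2, 2))  # False: plain 2-D matmul
print(is_batched(3, 2))  # True: one batch dimension
```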
Looks good so far 👍 Suggestions for further work:
Regarding the possible memory issues of #360, I benchmarked the maximal memory usage of matmul for two n x n matrices in the possible split cases.
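A hedged sketch of how such a peak-memory measurement could be done for the host-memory case, using the standard-library tracemalloc and a NumPy batched matmul as a stand-in (heat's actual benchmark setup and split cases are not shown here):

```python
import tracemalloc
import numpy as np

# Hypothetical sizes for illustration only.
batch, n = 8, 64
a = np.random.rand(batch, n, n)
b = np.random.rand(batch, n, n)

tracemalloc.start()
c = np.matmul(a, b)  # batched over the leading dimension
# get_traced_memory() returns (current, peak) bytes since start().
peak = tracemalloc.get_traced_memory()[1]
tracemalloc.stop()

print(c.shape)   # (8, 64, 64)
print(peak > 0)  # True: the result allocation was traced
```

For distributed runs, the same idea would be applied per process and the maximum taken across ranks.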
Due Diligence

- Base branch: main for new features, latest release branch (e.g. release/1.3.x) for bug fixes

Description

Issue/s resolved: #

Changes proposed:

Type of change

Memory requirements

Performance

Does this change modify the behaviour of other functions? If so, which?
yes / no