
Add Blockwise Op #1215

Open
purna135 wants to merge 23 commits into main
Conversation

@purna135 purna135 (Contributor) commented Sep 26, 2022

This PR builds off of #757 and closes #695.

To #757 it adds:

  • get_output_info(), which mirrors Elemwise's get_output_info() and brings all inputs to the same number of dimensions.
  • derivation of DimShuffle's gufunc signature.
  • reduction of the broadcasted input dimensions after the gradient is computed.

Differences from #757:

  • the curr_static_shape of core_inp_grads is computed using dimensions counted from the end rather than from the start.
  • an extra check before calling DimShuffle's perform() (which can be removed later).
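For context, the gufunc signatures discussed in this PR follow NumPy's generalized-ufunc convention, in which core dimensions are named and any leading (loop) dimensions broadcast blockwise. A minimal NumPy sketch of that behavior (illustrative only, not part of this PR):

```python
import numpy as np

# A blockwise matrix multiply: core signature "(m,n),(n,p)->(m,p)".
# np.vectorize loops over the leading (batch) dimensions and applies
# np.matmul to each core (m,n) x (n,p) pair.
batched_matmul = np.vectorize(np.matmul, signature="(m,n),(n,p)->(m,p)")

a = np.random.rand(3, 2, 4)  # batch of 3 matrices, each (2, 4)
b = np.random.rand(3, 4, 5)  # batch of 3 matrices, each (4, 5)

out = batched_matmul(a, b)
print(out.shape)  # (3, 2, 5)
```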

@purna135 purna135 closed this Sep 26, 2022
@purna135 purna135 reopened this Sep 26, 2022
@purna135 purna135 changed the title from "Update L_op of Blockwise" to "Add Blockwise op" Sep 26, 2022
codecov bot commented Sep 26, 2022

Codecov Report

Merging #1215 (6cda5c3) into main (462d8d5) will increase coverage by 4.14%.
The diff coverage is 86.00%.

❗ Current head 6cda5c3 differs from the pull request's most recent head c7b0d10. Consider uploading reports for commit c7b0d10 to get more accurate results.


@@            Coverage Diff             @@
##             main    #1215      +/-   ##
==========================================
+ Coverage   75.02%   79.16%   +4.14%     
==========================================
  Files         194      174      -20     
  Lines       50099    48677    -1422     
  Branches    12096    10359    -1737     
==========================================
+ Hits        37586    38536     +950     
+ Misses      10189     7640    -2549     
- Partials     2324     2501     +177     
Impacted Files Coverage Δ
aesara/tensor/blockwise.py 85.81% <85.81%> (ø)
aesara/tensor/math.py 90.05% <100.00%> (-0.62%) ⬇️

... and 122 files with indirect coverage changes

@brandonwillard brandonwillard added the enhancement (New feature or request), important, NumPy compatibility, and Op implementation (Involves the implementation of an Op) labels Sep 26, 2022
@brandonwillard brandonwillard changed the title from "Add Blockwise op" to "Add Blockwise Op" Sep 26, 2022
@brandonwillard brandonwillard mentioned this pull request Sep 26, 2022
@brandonwillard brandonwillard (Member) commented Sep 26, 2022

Don't forget to rebase onto upstream/main (or whatever the name of your remote for this repository is); that should remove the merge commit in this branch.

@brandonwillard brandonwillard (Member) left a comment

This looks great! The next step involves extending the number of gufunc_sigs we specify and adding the associated tests.

The big open question is whether we can replace Elemwise with this new Op. Once we've demonstrated that this Op can handle at least all the standard Elemwise cases, we'll explore that question further. In other words, we don't want to start considering all the other changes (e.g. Blockwise.c_code, Numba/JAX transpilations, etc.) until we have good test coverage (both Elemwise/scalar broadcasting cases and otherwise).

x = Blockwise(op)(*args)
x_fn = aesara.function(args, x)

x_fn(*arg_vals)

We're going to need to assert something about this output.
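One hedged sketch of such an assertion, using `np.vectorize` over `np.matmul` as a stand-in for the compiled Blockwise function (all names and shapes here are illustrative, not the PR's actual test code):

```python
import numpy as np

# Stand-in for the compiled function `x_fn`: a blockwise matmul gufunc.
x_fn = np.vectorize(np.matmul, signature="(m,n),(n,p)->(m,p)")

a_val = np.random.rand(4, 3, 2)
b_val = np.random.rand(4, 2, 5)

res = x_fn(a_val, b_val)

# Assert both the shape and the values against a NumPy reference.
assert res.shape == (4, 3, 5)
np.testing.assert_allclose(res, np.matmul(a_val, b_val))
```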

Comment on lines +1891 to +1895
gufunc_sig = ((("m", "n"), ("n", "p")), (("m", "p"),))

__props__ = ("gufunc_sig",)

FYI: We'll need to create these kinds of signatures for every applicable Op.
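For reference, the tuple form used above mirrors NumPy's string signatures. A hypothetical helper (not part of this PR) that converts the tuple form into the string form makes the correspondence explicit:

```python
def sig_to_string(gufunc_sig):
    """Convert a tuple-based gufunc signature such as
    ((("m", "n"), ("n", "p")), (("m", "p"),)) into NumPy's
    string form "(m,n),(n,p)->(m,p)"."""
    inputs, outputs = gufunc_sig

    def fmt(dims):
        return ",".join(f"({','.join(d)})" for d in dims)

    return f"{fmt(inputs)}->{fmt(outputs)}"


matmul_sig = ((("m", "n"), ("n", "p")), (("m", "p"),))
print(sig_to_string(matmul_sig))  # (m,n),(n,p)->(m,p)
```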

tests/tensor/test_blockwise.py (resolved)
aesara/tensor/blockwise.py (resolved)
@purna135
Contributor Author

What should be the signature for the Subtensor Op and the Shape Op?

@brandonwillard
Member

What should be the signature for the Subtensor Op and the Shape Op?

If you're talking about constructing symbolic graphs, the signatures are ultimately determined by their Op.make_node implementations.

@purna135
Contributor Author

If you're talking about constructing symbolic graphs, the signatures are ultimately determined by their Op.make_node implementations.

Yes, got it now

@purna135
Contributor Author

Hello, @brandonwillard.
I'm having some DimShuffle-related problems that I can't figure out.
Could you please take a look and help determine which piece of logic is causing this error?

You can reproduce the error using the following command.
pytest tests/tensor/test_blockwise.py::test_blockwise_solve_grad[a_shape0-b_shape0]

@brandonwillard
Member

Hello, @brandonwillard. I'm having some DimShuffle-related problems that I can't figure out. Could you please take a look and help determine which piece of logic is causing this error?

You can reproduce the error using the following command. pytest tests/tensor/test_blockwise.py::test_blockwise_solve_grad[a_shape0-b_shape0]

It looks like SolveBase.L_op is producing tensors with an extra dimension that Solve2 can't handle.

I'm guessing Solve2 was meant to serve as a specialization of Solve's more general (matrix x matrix) -> matrix signature, but its inherited L_op probably doesn't match the signature change.

Regardless, we shouldn't need new Ops for that; instead, a helper function like aesara.tensor.slinalg.solve can be used to project the inputs and outputs to and from Solve's signature's space.
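The projection idea can be sketched in plain NumPy (a hypothetical `solve_vector` helper; the actual symbolic helper mentioned above is `aesara.tensor.slinalg.solve`):

```python
import numpy as np

def solve_vector(a, b):
    """Solve a @ x = b for a vector b by projecting into Solve's
    (matrix, matrix) -> matrix signature and back: append a trailing
    axis before solving, drop it afterwards."""
    b_mat = b[..., None]               # (m,) -> (m, 1)
    x_mat = np.linalg.solve(a, b_mat)  # core signature (m,m),(m,1)->(m,1)
    return x_mat[..., 0]               # (m, 1) -> (m,)

a = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = solve_vector(a, b)
print(np.allclose(a @ x, b))  # True
```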

@brandonwillard brandonwillard (Member) left a comment

I've added comments for some of the changes we made locally during the meeting.

),
(("n", "m"),),
)
__props__ = ("dtype", "gufunc_sig")

Suggested change
__props__ = ("dtype", "gufunc_sig")
__props__ = ("dtype",)

@@ -3502,7 +3517,8 @@ class AllocDiag(Op):
It does the inverse of `ExtractDiag`.
"""

__props__ = ("offset", "axis1", "axis2")
gufunc_sig = (((),), (("m", "m"),))
__props__ = ("offset", "axis1", "axis2", "gufunc_sig")

Suggested change
__props__ = ("offset", "axis1", "axis2", "gufunc_sig")
__props__ = ("offset", "axis1", "axis2",)

return Apply(self, list(inputs), outputs)

def __str__(self):
return f"{type(self).__name__}{{op={self.op}}}"

Suggested change
return f"{type(self).__name__}{{op={self.op}}}"
return f"{type(self).__name__}{{{self.op}, {self.signature}}}"

Comment on lines +397 to +407
# The gradient contains a constant
# res = aesara.tensor.basic.constant(
# np.asarray(var.data), dtype=var.type.dtype
# )
res = var

# TODO FIXME: Use dimensions of relevant/appropriate inputs.
# What exactly are those in this case?
nd = inputs[0].type.ndim

return atleast_Nd(res, n=nd)

Suggested change
# The gradient contains a constant
# res = aesara.tensor.basic.constant(
# np.asarray(var.data), dtype=var.type.dtype
# )
res = var
# TODO FIXME: Use dimensions of relevant/appropriate inputs.
# What exactly are those in this case?
nd = inputs[0].type.ndim
return atleast_Nd(res, n=nd)
return var


__props__ = ("lower", "destructive", "on_error")
gufunc_sig = ((("m", "m"),), (("m", "m"),))
__props__ = ("lower", "destructive", "on_error", "gufunc_sig")

Suggested change
__props__ = ("lower", "destructive", "on_error", "gufunc_sig")
__props__ = ("lower", "destructive", "on_error",)

Comment on lines +265 to +267
from aesara.tensor.basic import Tri

blk_op = Blockwise(op=Tri(dtype="float64"), signature=(((), (), ()), (("n", "m"),)))

Suggested change
from aesara.tensor.basic import Tri
blk_op = Blockwise(op=Tri(dtype="float64"), signature=(((), (), ()), (("n", "m"),)))
blk_op = Blockwise(op=Tri(dtype="float64"))

blk_op = Blockwise(op=Tri(dtype="float64"), signature=(((), (), ()), (("n", "m"),)))
out_dtype, output_shapes, inputs = blk_op.get_output_info(a, b, c)

assert out_dtype == ["float64"]

We need to assert something about output_shapes (i.e. make sure they're correct in some way).
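For instance, the expected output shapes follow from broadcasting the inputs' loop (batch) dimensions and appending the signature's core output dimensions. A small NumPy illustration (shapes chosen arbitrarily; the shapes in the test above are symbolic):

```python
import numpy as np

# Loop (batch) dims of the inputs broadcast together; the core output
# dims ("n", "m") come from the gufunc signature.
loop_shape = np.broadcast_shapes((4, 1), (5,))
print(loop_shape)  # (4, 5)

# Full output shape = broadcast loop shape + core output shape (n, m).
n, m = 2, 3
print(loop_shape + (n, m))  # (4, 5, 2, 3)
```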

ricardoV94 added a commit to ricardoV94/pytensor that referenced this pull request May 17, 2023
Inspired by: aesara-devs/aesara#1215

Co-authored-by: Brandon T. Willard <[email protected]>
Co-authored-by: Purna Chandra Mansingh <[email protected]>
Co-authored-by: Sayam Kumar <[email protected]>
Co-authored-by: Kaustubh <[email protected]>
twiecki pushed a commit to pymc-devs/pytensor that referenced this pull request Sep 6, 2023
Inspired by: aesara-devs/aesara#1215

Co-authored-by: Brandon T. Willard <[email protected]>
Co-authored-by: Purna Chandra Mansingh <[email protected]>
Co-authored-by: Sayam Kumar <[email protected]>
Co-authored-by: Kaustubh <[email protected]>
Labels
enhancement (New feature or request), important, NumPy compatibility, Op implementation (Involves the implementation of an Op)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Create an Op for NumPy's generalized ufuncs
2 participants