
Conversation

mdhaber (Contributor) commented Dec 19, 2024

Reference issue

Follow-up to gh-22127
Toward gh-21466

What does this implement/fix?

This adds support for n-dimensional batch input to the linalg functions diagsvd, inv, null_space, logm, sinm, cosm, tanm, sinhm, coshm, tanhm, sqrtm, funm, signm, fractional_matrix_power, pinv, pinvh, matrix_balance, bandwidth, svd, ldl, cholesky, polar, qr, rq, hessenberg, schur, orth, lu_factor, eig_banded, and eigvals_banded, because they all accept only one array input. (Please check me on this.)
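
Roughly, the user-facing behavior this adds (a hedged sketch, assuming the convention from gh-22127 that all leading dimensions are treated as batch dimensions):

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(1234)
A = rng.standard_normal((4, 3, 3))   # a batch of four 3x3 matrices

B = linalg.inv(A)                    # now accepted; B has shape (4, 3, 3)

# Equivalent to looping over the leading (batch) dimension in Python.
B_loop = np.stack([linalg.inv(A[i]) for i in range(A.shape[0])])
np.testing.assert_allclose(B, B_loop)
```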

Additional information

Even though functions like cosm are expressed in terms of expm, we might as well apply the decorator to them separately because expm just loops in Python, too. This is less work and we get the batch support documentation for free.
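
For anyone not following gh-22127: the decorator essentially loops over the leading dimensions in Python. A much-simplified, illustrative sketch of the idea (the name `apply_over_batch` and the details here are hypothetical; the real helper is more general, e.g. it also handles functions that return several arrays):

```python
import functools
import numpy as np

def apply_over_batch(func):
    """Illustrative sketch: apply ``func`` to each trailing 2-D slice."""
    @functools.wraps(func)
    def wrapper(a, *args, **kwargs):
        a = np.asarray(a)
        if a.ndim <= 2:
            # Unbatched input: defer to the original function.
            return func(a, *args, **kwargs)
        # Loop over the leading (batch) dimensions in Python.
        results = [func(a[index], *args, **kwargs)
                   for index in np.ndindex(a.shape[:-2])]
        out = np.stack(results)
        return out.reshape(a.shape[:-2] + out.shape[1:])
    return wrapper
```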

j-bowhay (Member) commented Dec 19, 2024

Xref #21446 (comment): it might be worth checking whether the decorator is needed for some of these functions.

Edit: oops, I should have read the additional information first.

j-bowhay added this to the 1.16.0 milestone Dec 19, 2024
mdhaber changed the title from "ENH: linalg: add batch support for matrix in -> single output" to "ENH: linalg: add batch support for functions that accept a single array" Dec 20, 2024
mdhaber requested a review from j-bowhay December 20, 2024 01:31
j-bowhay (Member) left a comment

Maybe it's also worth passing the batched input as a keyword argument in the tests?

mdhaber (Contributor, Author) commented Dec 20, 2024

Good idea. Or, if it's OK with you, I'll just inspect the signature for the name of the argument? I didn't in the initial PR to keep things "simple", but I'll show that the overhead is negligible, and it's clearly less prone to error. In any case, I'll start by adding it to the test.
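
For concreteness, something along the lines of the following hypothetical helper (the names are illustrative, not the actual test code, and it assumes the decorated functions still expose their signatures, e.g. via functools.wraps):

```python
import inspect
import numpy as np
from scipy import linalg

def check_batch(func, batch, **kwargs):
    # Look up the name of the first parameter so the batched array can be
    # passed by keyword rather than positionally.
    first_param = next(iter(inspect.signature(func).parameters))
    batched = func(**{first_param: batch}, **kwargs)
    looped = np.stack([func(a, **kwargs) for a in batch])
    np.testing.assert_allclose(batched, looped)

# e.g., with well-conditioned matrices:
check_batch(linalg.inv, np.random.default_rng(5).random((3, 4, 4)) + 4 * np.eye(4))
```

Passing by keyword like this would catch a decorator that only handles the positional path.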

mdhaber requested a review from j-bowhay December 20, 2024 07:23
ilayn (Member) commented Dec 20, 2024

Not to block any of these nice additions:

For the matrix functions, I have some time to finish up sqrtm, which probably makes logm also reachable for batch handling. I can probably make them behave like expm. Would that be an anti-pattern for the decorator usage you are adding here?

j-bowhay (Member) commented:

> Not to block any of these nice additions:
>
> For the matrix functions, I have some time to finish up sqrtm, which probably makes logm also reachable for batch handling. I can probably make them behave like expm. Would that be an anti-pattern for the decorator usage you are adding here?

I think the idea for any of these is: if batching in the compiled code materialises, then the decorator can be removed. Plus, this means there's already a test for batching ready and waiting :)

mdhaber (Contributor, Author) commented Dec 20, 2024

Agreed; just adding that something would need to be done for the documentation, since the decorator adds the standard note right now.
Also, @ilayn, it looks like expm is just looping in Python:

```python
# Main loop to go through the slices of an ndarray and passing to expm
for ind in product(*[range(x) for x in a.shape[:-2]]):
    aw = a[ind]
```

Is that what you had in mind? I would not add more of that when the decorator does the same thing.
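
For example, a check along these lines should hold with the decorator in place (a sketch; it relies on the fact that, for real A, cosm(A) is the real part of expm(iA), and that expm already accepts nD input):

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4, 4))   # real batch of 4x4 matrices

batched = linalg.cosm(A)             # the decorator loops over the batch

# expm already accepts nD input (looping in Python internally), and for
# real A, cosm(A) is the real part of expm(1j*A).
np.testing.assert_allclose(batched, np.real(linalg.expm(1j * A)))
```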

mdhaber (Contributor, Author) commented Dec 20, 2024

One of the CI failures was a test that wanted to see an error for 3-D input, so I removed that test. The other involved CWT with sparse input; the decorator doesn't account for sparse input right now, so I just removed the decorator from CWT.

ilayn (Member) commented Dec 20, 2024

That Python loop doesn't really matter, since the expm computation typically dominates the runtime cost. So I decided to leave it on the Python side instead of dealing with memory addresses on the C side for slicing. But for very light wrappers around LAPACK calls, looping in Python can be slow.

I just wanted to know whether we can remove the decorator and still be consistent in case we do the batching internally. I guess we can, with a bit of extra work.

j-bowhay (Member) left a comment

Thanks!

j-bowhay merged commit bb44a07 into scipy:main Dec 20, 2024
37 checks passed
lucascolley added the needs-release-note label Jan 31, 2025
mdhaber removed the needs-release-note label May 16, 2025
Labels: enhancement, scipy.linalg
4 participants