Use numpy sum instead of comprehension in _score_heuristic #5403
Conversation
In sabre_swap, the comprehension inside _score_heuristic() with the basic heuristic starts to become a bottleneck as the size of the coupling map increases. It will probably become the top bottleneck in a transpilation once Qiskit#5316, Qiskit#5183, Qiskit#5294, Qiskit#5267, and Qiskit#5272 are merged. This commit attempts to mitigate that by using native numpy operations instead of a Python comprehension inside sum(). If this is insufficient, we'll likely have to either avoid doing a sum like this altogether, or drop down to a lower level with Cython or Rust and operate on the array directly to remove this bottleneck.
I still need to benchmark this to see whether it actually makes a difference. Since we need several PRs applied at once to really see the bottleneck, I'll have to set up an environment to benchmark this reliably.
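As a minimal sketch of the change being described (the distance matrix and gate pairs here are hypothetical, not the actual sabre_swap internals): the Python-level generator passed to sum() can be replaced by one numpy fancy-indexing lookup followed by a native ndarray sum, which avoids a Python-level iteration per gate.

```python
import numpy as np

# Hypothetical distance matrix for a 5-qubit linear coupling map:
# distance between physical qubits i and j is |i - j|.
idx = np.arange(5)
dist = np.abs(idx[:, None] - idx[None, :]).astype(float)

# Hypothetical (qubit, qubit) pairs for two-qubit gates in the front layer.
pairs = [(0, 3), (1, 4), (2, 2)]

# The pattern the PR replaces: a Python comprehension inside sum().
score_comprehension = sum(dist[p, q] for p, q in pairs)

# Vectorized equivalent: one fancy-indexing lookup, then ndarray.sum().
rows, cols = np.array(pairs).T
score_numpy = dist[rows, cols].sum()

assert score_comprehension == score_numpy
```

The speedup comes from moving the per-element loop from the Python interpreter into numpy's C loop; for small front layers the difference is negligible, which is why benchmarking on large coupling maps is the next step.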
LGTM. Just a tiny comment.
```python
if self.coupling_map._dist_matrix is None:
    self.coupling_map._compute_distance_matrix()
return self.coupling_map._dist_matrix[
```
It might be better to put these lines into a dist_matrix property on the CouplingMap class (if we don't mind one extra function call).
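The suggestion above amounts to a lazily-computed cached property. A minimal sketch of the pattern (the class body and the line-topology distance computation here are placeholders, not Qiskit's actual CouplingMap implementation):

```python
import numpy as np

class CouplingMap:
    """Toy sketch of lazy distance-matrix caching behind a property."""

    def __init__(self, num_qubits):
        self.num_qubits = num_qubits
        self._dist_matrix = None  # not computed until first access

    def _compute_distance_matrix(self):
        # Placeholder topology: a line, where distance(i, j) = |i - j|.
        # The real class computes all-pairs shortest paths on the graph.
        idx = np.arange(self.num_qubits)
        self._dist_matrix = np.abs(idx[:, None] - idx[None, :])

    @property
    def dist_matrix(self):
        # Compute on first access, then reuse the cached array, so
        # callers never need to touch the private attributes directly.
        if self._dist_matrix is None:
            self._compute_distance_matrix()
        return self._dist_matrix
```

Callers can then write `self.coupling_map.dist_matrix[...]` and the None-check lives in one place instead of being repeated at every use site.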
Yeah, that's a good idea. I made this change in: 8df570f