Fix caching logic in SolovayKitaevSynthesis plugin
#14304
Conversation
I think it would be safer to just invalidate the cache if the basis gate set changes. Storing multiple basic approximations could become quite memory-heavy, and I'm not sure this is actually a use case we even need. Users could also explicitly load the basic approximations they care about if they need to switch between them. So, unless we have an explicit use case for caching multiple decompositions, I think just invalidating the cache is safer and simpler 🙂
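A minimal sketch of this "invalidate rather than accumulate" strategy; the class and attribute names below are hypothetical illustrations, not the actual plugin code:

```python
# Hypothetical sketch only: _CachedApproximations and the `generate` callable
# are illustrative names and do not appear in Qiskit itself.
class _CachedApproximations:
    _basis_gates = None     # basis gate names the cache was built for
    _depth = None           # generation depth the cache was built for
    _approximations = None  # the single cached set of basic approximations

    @classmethod
    def get(cls, basis_gates, depth, generate):
        """Return cached approximations, regenerating them if the settings changed."""
        key = (tuple(sorted(basis_gates)), depth)
        if (cls._basis_gates, cls._depth) != key:
            # Settings changed: drop the single cached entry and rebuild it,
            # rather than keeping one entry per basis set (memory-heavy).
            cls._approximations = generate(basis_gates, depth)
            cls._basis_gates, cls._depth = key
        return cls._approximations
```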
As discussed offline, to be fully correct, we should also cache the depth used to generate the basic approximations. In practice the number of gate sequences we need to store is surprisingly small: for instance, we only need to store 812 sequences for the default basis set. My only mild concern is that, since the approximations are regenerated whenever the settings change, alternating between different basis sets or depths would trigger that regeneration repeatedly.
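For reference, the number of stored sequences can be inspected with `generate_basic_approximations` from `qiskit.synthesis` (a rough sketch; the depth value below is only an example, and the exact count, such as the 812 quoted above, depends on the basis gates and depth used):

```python
from qiskit.synthesis import generate_basic_approximations

# Generate the basic approximations for a small discrete basis and count them.
sequences = generate_basic_approximations(basis_gates=["h", "t", "tdg"], depth=5)
print(len(sequences))
```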
Hm, yes, I guess that if you had multiple processes with different depths or basis gates then that could be inefficient, since it would have to regenerate the sets a lot. I don't know if Python's multiprocessing has some kind of lock on shared variables, or if that could also result in wrong behavior... I don't think we can assume that no one will ever run in parallel, given that the pass manager tries running multiple processes (though here it'll use the same settings for the plugin).
LGTM, thanks for the fix!
Multiprocessing doesn't have shared memory; that's kind of the point for Python and is how you get around GIL contention. Each process space has separate memory, and when we do the parallel dispatch we make a copy of the pass manager (via pickle, sent over IPC to the subprocess). So the cache will be separate when running in parallel. There is potentially some reuse: if you have more circuits than CPUs, the pass managers will potentially get reused per worker, but the execution is serialized in those cases.
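To illustrate the point, here is a small standalone sketch (independent of Qiskit) showing that a class-level attribute mutated in worker processes is not shared with the parent process:

```python
import multiprocessing as mp


class Cache:
    # Class-level attribute, analogous to the plugin's cached approximations.
    value = None


def worker(tag):
    # Each worker runs in its own process with its own copy of the class state,
    # so this assignment is invisible to the parent and to other workers.
    Cache.value = tag
    return (tag, Cache.value)


if __name__ == "__main__":
    with mp.get_context("spawn").Pool(2) as pool:
        print(pool.map(worker, ["a", "b"]))
    print("parent still sees:", Cache.value)  # prints: parent still sees: None
```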
* extending SolovayKitaevSynthesis plugin caching logic to take the basis gates into account
* regenerating cached values following review comment

(cherry picked from commit 6d4c928)
@Mergifyio backport stable/1.4
✅ Backports have been created
* extending SolovayKitaevSynthesis plugin caching logic to take the basis gates into account
* regenerating cached values following review comment

(cherry picked from commit 6d4c928)

Co-authored-by: Alexander Ivrii <alexi@il.ibm.com>
Summary
Fixes a problem described here: running the `SolovayKitaevSynthesis` unitary synthesis plugin repeatedly incorrectly reuses the set of basis gates specified on the very first run. For example, synthesizing the same unitary twice with two different discrete bases produces, both times, a circuit over the first specified basis set of `["h", "t", "tdg"]` (see the sketch below). Creating two different `SolovayKitaevSynthesis` plugins does not solve this, since the caching is done via a class-level attribute.

The fix (many thanks to @eliarbel for the suggestion!) originally consisted of storing a dictionary from the basis gate sets to the corresponding basis approximations. UPDATE: after review and online discussion, we have decided not to store the dictionary but instead to regenerate the basic approximations whenever the set of basis gates or the depth used for generation changes from the previous run.
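A minimal sketch reproducing the issue through the `UnitarySynthesis` pass; the input unitary and the basis gate sets here are illustrative (the snippet from the original report is not reproduced verbatim). Before this fix, both runs would come out in the first basis:

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.transpiler.passes import UnitarySynthesis

# A single-qubit unitary to be discretized with the Solovay-Kitaev plugin ("sk").
qc = QuantumCircuit(1)
qc.unitary(np.diag([1, np.exp(1j * np.pi / 8)]), [0])

# First run: synthesize into {h, t, tdg}.
first = UnitarySynthesis(basis_gates=["h", "t", "tdg"], method="sk")(qc)

# Second run with a different discrete basis. Before this fix, the class-level
# cache of basic approximations from the first run was silently reused, so the
# output still only contained h/t/tdg gates instead of the requested basis.
second = UnitarySynthesis(basis_gates=["h", "s", "sdg", "t", "tdg"], method="sk")(qc)

print(first.count_ops())
print(second.count_ops())
```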
Needed for #14225.