Investigate and reduce benchmark changes for mesh timestamps #6549
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@           Coverage Diff           @@
##             main    #6549   +/-   ##
=======================================
  Coverage   89.89%   89.89%
=======================================
  Files          90       90
  Lines       24138    24142    +4
  Branches     4492     4493    +1
=======================================
+ Hits        21699    21703    +4
  Misses       1679     1679
  Partials      760      760
⏱️ Performance Benchmark Report: 371284c
Performance shifts
Full benchmark results
Generated by GHA run
⏱️ Performance Benchmark Report: 85ef2ca
Performance shifts
Full benchmark results
Generated by GHA run
I think these benchmarks redeem the previous slow-downs, but the change used dodgy logic to do so and caused test failures. Note that we did expect adding timestamps to introduce slowdowns, just not overwhelmingly significant ones.
⏱️ Performance Benchmark Report: 217571e
Performance shifts
Full benchmark results
Generated by GHA run
So, this latest solution still doesn't seem to be cutting it. That suggests most of the slowdown comes from metadata being accessed repeatedly, which constantly refreshes points and bounds, and that refresh is comparatively slow. The only other solution I can think of would be to create a second timestamp: one for metadata, one for points and bounds, so that points and bounds are only updated when they have actually changed. This makes the solution messier, but it should remove the massive slowdown. I considered explicitly checking that
EDIT:
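As a rough sketch of that two-timestamp idea (nothing here is from the Iris codebase; `_Mesh`, `metadata_timestamp`, `coords_timestamp` and `_MeshCoordCache` are invented names used purely for illustration), the gating could look like this:

```python
import itertools

# A monotonically increasing counter stands in for a real timestamp here.
_CLOCK = itertools.count()


class _Mesh:
    """Hypothetical stand-in for the mesh: metadata edits and coordinate
    (points/bounds) edits bump separate timestamps."""

    def __init__(self):
        self.metadata_timestamp = next(_CLOCK)
        self.coords_timestamp = next(_CLOCK)
        self.metadata = {}

    def update_metadata(self, **attrs):
        # Only the metadata timestamp moves; the geometry one is untouched.
        self.metadata.update(attrs)
        self.metadata_timestamp = next(_CLOCK)

    def update_coords(self):
        # Only the coords timestamp moves.
        self.coords_timestamp = next(_CLOCK)


class _MeshCoordCache:
    """Consumer-side cache: the expensive points/bounds rebuild only runs
    when the coords timestamp has changed, so repeated metadata updates or
    accesses no longer trigger it."""

    def __init__(self, mesh):
        self._mesh = mesh
        self._seen_coords_timestamp = None
        self._points = None
        self._bounds = None

    def points_and_bounds(self):
        if self._seen_coords_timestamp != self._mesh.coords_timestamp:
            self._points, self._bounds = self._rebuild()
            self._seen_coords_timestamp = self._mesh.coords_timestamp
        return self._points, self._bounds

    def _rebuild(self):
        # Placeholder for the comparatively slow points/bounds construction.
        return [], []
```

With that split, a loop that repeatedly touches metadata only bumps `metadata_timestamp`, so `points_and_bounds()` keeps returning the cached arrays.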
⏱️ Performance Benchmark Report: 618d938
Performance shifts
Full benchmark results
Generated by GHA run
⏱️ Performance Benchmark Report: 95bc8aa
Performance shifts
Full benchmark results
Generated by GHA run
Possible extra speed bonuses could include:
- update only the metadata that is called for (might not be possible)
- make the metadata timestamped and add a gate to __getattribute__ (see the sketch after this list)
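To illustrate that second bullet, here is a rough, hypothetical sketch (not the Iris implementation; `metadata_timestamp` and `raw_metadata()` are invented attributes of the owning mesh): the gate refreshes the cached metadata fields at most once per change of the mesh's timestamp.

```python
class _TimestampGatedMetadata:
    """Hypothetical sketch: metadata attributes are served from a cache that
    is refreshed at most once per change of the owning mesh's timestamp."""

    _FIELDS = frozenset({"standard_name", "long_name", "units", "attributes"})

    def __init__(self, mesh):
        self._mesh = mesh
        self._synced_at = None
        self._cache = {}

    def __getattribute__(self, name):
        # Gate only the metadata fields; everything else takes the normal path.
        if name in type(self)._FIELDS:
            get = object.__getattribute__  # avoid re-entering this gate
            mesh = get(self, "_mesh")
            if get(self, "_synced_at") != mesh.metadata_timestamp:
                # Refresh once per mesh change, not once per attribute access.
                self._cache = dict(mesh.raw_metadata())
                self._synced_at = mesh.metadata_timestamp
            return get(self, "_cache").get(name)
        return object.__getattribute__(self, name)
```

The trade-off is that `__getattribute__` adds a small cost to every attribute access, so the gate only wins if the refresh it avoids is substantially more expensive.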
⏱️ Performance Benchmark Report: f02152e
Performance shifts
Full benchmark results
Generated by GHA run
The biggest changes from before the original PR was merged are below.
⏱️ Performance Benchmark Report: cc35162
Performance shifts
Full benchmark results
Generated by GHA run
Benchmarks have retained the improvement, though metadata_equality seems to have gotten even faster. I'm not convinced that's meaningful, but it might be worth another run locally. EDIT: After running multiple times locally, it's consistently in the 500s. I'm fine calling it an anomaly and not rerunning the benchmarks on this PR.
OK I've re-checked the benchmark results against those from #6465, and I'm totally happy that the remaining shifts are expected and acceptable.
Had a good look, and only found one bad comment left over from earlier behaviour.
Otherwise all looks good to me now!
Yes, yes, let's go!!
Huge thanks @ESadek-MO
🚀 Pull Request
Investigating #6547.
This has slowed down benchmarks significantly. The cause is that every time metadata is updated, points and bounds are refreshed as well, even when they don't need updating, and refreshing points and bounds is expensive.
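For context, a minimal sketch of that coupling, using invented names rather than the actual Iris code:

```python
class _CoupledMesh:
    """Hypothetical illustration of the behaviour being investigated: any
    metadata update also rebuilds points and bounds, whether or not the
    geometry actually changed."""

    def __init__(self):
        self._metadata = {}
        self._points = None
        self._bounds = None

    def update_metadata(self, **attrs):
        self._metadata.update(attrs)
        # The suspected culprit: an unconditional, expensive refresh.
        self._refresh_points_and_bounds()

    def _refresh_points_and_bounds(self):
        # Stands in for the comparatively slow points/bounds reconstruction.
        self._points, self._bounds = [], []
```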