fix gc pause metric using millis instead of nanos #6321
Conversation
Overall package size

Self size: 11.9 MB

Dependency sizes:

| name | version | self size | total size |
|------|---------|-----------|------------|
| @datadog/libdatadog | 0.7.0 | 35.02 MB | 35.02 MB |
| @datadog/native-appsec | 10.1.0 | 20.37 MB | 20.37 MB |
| @datadog/native-iast-taint-tracking | 4.0.0 | 11.72 MB | 11.73 MB |
| @datadog/pprof | 5.9.0 | 9.77 MB | 10.1 MB |
| @opentelemetry/core | 1.30.1 | 908.66 kB | 7.16 MB |
| protobufjs | 7.5.4 | 2.95 MB | 5.6 MB |
| @datadog/wasm-js-rewriter | 4.0.1 | 2.85 MB | 3.58 MB |
| @datadog/native-metrics | 3.1.1 | 1.02 MB | 1.43 MB |
| @opentelemetry/api | 1.8.0 | 1.21 MB | 1.21 MB |
| jsonpath-plus | 10.3.0 | 617.18 kB | 1.08 MB |
| import-in-the-middle | 1.14.2 | 122.36 kB | 850.93 kB |
| lru-cache | 10.4.3 | 804.3 kB | 804.3 kB |
| opentracing | 0.14.7 | 194.81 kB | 194.81 kB |
| source-map | 0.7.6 | 185.63 kB | 185.63 kB |
| pprof-format | 2.1.0 | 111.69 kB | 111.69 kB |
| @datadog/sketches-js | 2.1.1 | 109.9 kB | 109.9 kB |
| lodash.sortby | 4.7.0 | 75.76 kB | 75.76 kB |
| ignore | 7.0.5 | 63.38 kB | 63.38 kB |
| istanbul-lib-coverage | 3.2.2 | 34.37 kB | 34.37 kB |
| rfdc | 1.4.1 | 27.15 kB | 27.15 kB |
| dc-polyfill | 0.1.10 | 26.73 kB | 26.73 kB |
| @isaacs/ttlcache | 1.4.1 | 25.2 kB | 25.2 kB |
| tlhunter-sorted-set | 0.1.0 | 24.94 kB | 24.94 kB |
| shell-quote | 1.8.3 | 23.74 kB | 23.74 kB |
| limiter | 1.1.5 | 23.17 kB | 23.17 kB |
| retry | 0.13.1 | 18.85 kB | 18.85 kB |
| semifies | 1.0.0 | 15.84 kB | 15.84 kB |
| jest-docblock | 29.7.0 | 8.99 kB | 12.76 kB |
| crypto-randomuuid | 1.0.0 | 11.18 kB | 11.18 kB |
| ttl-set | 1.0.0 | 4.61 kB | 9.69 kB |
| mutexify | 1.4.0 | 5.71 kB | 8.74 kB |
| path-to-regexp | 0.1.12 | 6.6 kB | 6.6 kB |
| koalas | 1.0.2 | 6.47 kB | 6.47 kB |
| module-details-from-path | 1.0.4 | 3.96 kB | 3.96 kB |

🤖 This report was automatically generated by heaviest-objects-in-the-universe
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@ Coverage Diff @@
## master #6321 +/- ##
==========================================
- Coverage 84.06% 83.73% -0.34%
==========================================
Files 382 473 +91
Lines 17269 20023 +2754
==========================================
+ Hits 14518 16767 +2249
- Misses 2751 3256 +505

☔ View full report in Codecov by Sentry.
Benchmarks

Benchmark execution time: 2025-08-22 21:00:53. Comparing candidate commit 08c4748 in PR branch. Found 0 performance improvements and 0 performance regressions! Performance is the same for 1262 metrics, 61 unstable metrics.
The change itself LGTM, but I would love to have a test case for this so it doesn't regress in the future. I suggest also including the configuration issue we found (it doesn't break anything, it's just defined in a weird way).
How would you test this?
@@ -274,9 +274,10 @@ function startGCObserver () {
  gcObserver = new PerformanceObserver(list => {
    for (const entry of list.getEntries()) {
      const type = gcType(entry.detail?.kind || entry.kind)
      const duration = entry.duration * 1_000_000
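For context, a minimal self-contained sketch of an observer like the one being patched (the `gcType` and `histogram` parameters here are stand-ins, not the tracer's actual internals): `PerformanceObserver` 'gc' entries expose `entry.duration` in fractional milliseconds, and the added multiplication converts that to nanoseconds before the value is recorded.

```js
'use strict'

const { PerformanceObserver } = require('node:perf_hooks')

// Sketch only: observe GC entries and report each pause in nanoseconds.
// `gcType` maps a GC kind to a label; `histogram` records the metric.
function startGCObserver (gcType, histogram) {
  const gcObserver = new PerformanceObserver(list => {
    for (const entry of list.getEntries()) {
      const type = gcType(entry.detail?.kind || entry.kind)
      // entry.duration is a double in milliseconds (the fractional part
      // carries sub-millisecond precision), so scale it to nanoseconds.
      const duration = entry.duration * 1_000_000
      histogram(type, duration)
    }
  })
  gcObserver.observe({ entryTypes: ['gc'] })
  return gcObserver
}
```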
The entry is actually returned in milliseconds, not nanoseconds. It is a double, with the fractional part carrying the higher precision.
We also have wrong time metrics in the CPU measurements; those are divided by the wrong values.
Not sure I understand: if we expect nanoseconds and we receive milliseconds, wouldn't multiplying by a million give us the right number? (Worth noting I didn't bother flooring because metrics are doubles.)
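A quick worked example of that conversion (the numbers are made up for illustration):

```js
// perf_hooks reports a GC pause as fractional milliseconds, e.g.:
const durationMs = 1.25
// Multiplying by one million gives the same pause in nanoseconds:
const durationNs = durationMs * 1_000_000
console.log(durationNs) // 1250000
// No flooring is needed: the metric accepts doubles, so any leftover
// fractional nanoseconds are harmless.
```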
Ah, we should have better metric names that include the correct type. You're correct!
https://github.com/DataDog/dogweb/blob/prod/integration/node/node_metadata.csv
I am looking into writing tests for it and figuring out more issues.
LGTM. I will open a follow-up PR where I fix the event loop utilization, and I am trying to come up with tests.
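One possible shape for such a test, as a sketch only: it verifies the premise the fix relies on, namely that `perf_hooks` reports GC pauses in fractional milliseconds. Asserting the value the tracer actually submits would additionally require stubbing its metrics client, which is not shown here. It assumes `global.gc()` is exposed by running Node with `--expose-gc`.

```js
'use strict'

// Run with: node --expose-gc gc-duration.test.js
const assert = require('node:assert')
const { PerformanceObserver } = require('node:perf_hooks')

const observer = new PerformanceObserver(list => {
  const [entry] = list.getEntries()

  // A single forced GC should take a positive but small number of
  // milliseconds; a nanosecond-scale value would be in the thousands.
  assert.ok(entry.duration > 0)
  assert.ok(entry.duration < 1000, 'duration is expected to be in milliseconds')

  console.log(`gc pause: ${entry.duration} ms -> ${entry.duration * 1_000_000} ns`)
})

observer.observe({ entryTypes: ['gc'] })
global.gc() // force a major GC so at least one 'gc' entry is emitted

// Keep the event loop alive long enough for the observer callback to fire.
setTimeout(() => observer.disconnect(), 100)
```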
* fix gc pause metric using millis instead of nanos
* remove object support for gc option which was unintended
What does this PR do?
Fixes the GC pause metric, which was being reported in milliseconds instead of the expected nanoseconds.
Motivation
The incorrect unit made the metric plummet to 0 or close to it, since the reported values were a million times too small.