[CI] Add metrics summary action #7376
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
##             main    #7376       +/-   ##
===========================================
+ Coverage   20.65%   96.46%   +75.81%
===========================================
  Files         201      375      +174
  Lines       12834    22958    +10124
===========================================
+ Hits         2651    22147    +19496
+ Misses       9995      613     -9382
- Partials      188      198       +10

☔ View full report in Codecov by Sentry.
Hi @yurishkuro, do you have any idea how to test if this action actually works?
I don't know why it's not reporting anything, still debugging.
Let's wait to see if it's actually reporting.
internal/storage/v1/api/spanstore/spanstoremetrics/read_metrics.go (outdated review thread, resolved)
Ok, nothing is happening :-)
The run is here https://github.com/jaegertracing/jaeger/actions/runs/16543493108/job/46788110217?pr=7376. Doesn't say if any commands were executed - if they were in a script with
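The comment above is cut off, but one common way to make visible which commands a step actually executed is to enable shell tracing. A minimal sketch, assuming the comparison runs inside a composite action step (the step name and layout below are illustrative, not the PR's actual code):

```yaml
# Hypothetical debugging tweak: `set -x` echoes each command before it runs,
# so the job log shows whether compare_metrics.py was invoked and with which arguments.
- name: Compare metrics against baseline
  shell: bash
  run: |
    set -x   # trace every command while debugging; remove once the action works
    python3 ./scripts/e2e/compare_metrics.py \
      --file1 ./.metrics/${{ inputs.snapshot }}.txt \
      --file2 ./.metrics/baseline_${{ inputs.snapshot }}.txt \
      --output ./.metrics/diff_${{ inputs.snapshot }}.txt
```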
internal/storage/v1/api/spanstore/spanstoremetrics/read_metrics.go (outdated review thread, resolved)
Force-pushed 56e5df3 to f3d0172 (compare)
Force-pushed d1604a5 to 1f4f283 (compare)
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
Indeed, let me try researching it.
Ah, I see, the current workflow is indeed a pull_request workflow. I think we would need another, separate workflow to upload the comment.
Some thoughts
BTW, I appreciate you working on this, it's one of those gnarly issues that are not easy to get right but pay off in the long run as a protection against bugs.
Yup, I agree with this. The current setup is actually really complex and not intuitive; it will be hard to maintain and develop.
Hi @yurishkuro, I'd propose that we keep the current approach of calculating the diff for every single e2e test, because this is much easier (it is tied to the id and other things as well). Then we can download all the diff artifacts, aggregate them into a summary, and post a PR comment.
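A minimal sketch of what such an aggregation job could look like, assuming each e2e test uploads its diff file as an artifact. The job name, artifact naming pattern, and summary format below are illustrative assumptions, not the PR's actual implementation:

```yaml
# Hypothetical follow-up job: download every per-test diff artifact and fold
# them into a single report written to the job summary.
summarize-metrics:
  runs-on: ubuntu-latest
  needs: [e2e-tests]                 # assumed name of the matrix of e2e jobs
  steps:
    - uses: actions/download-artifact@v4
      with:
        pattern: metrics-diff-*      # assumed artifact naming convention
        path: ./.metrics/diffs
        merge-multiple: true
    - name: Build summary
      shell: bash
      run: |
        {
          echo "## Metrics diff summary"
          for f in ./.metrics/diffs/*.txt; do
            echo "### $(basename "$f" .txt)"
            cat "$f"
          done
        } >> "$GITHUB_STEP_SUMMARY"
```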
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
Why is my newly-added workflow not running?
It's not running for two reasons:
Do you have any idea how to test it for now?
We have two functions we want to test: one is the processing of the metrics results and generation of a report, and the other is posting that report as a comment. Only the second one requires token permissions. So we can merge the PR with just the first part, still happening in the regular workflows without PR comments. Then you can separately test a comment-posting workflow. And finally we'd add them together.
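For the second part, a minimal, hedged sketch of what a separate comment-posting workflow could look like, using a workflow_run trigger so it gets write permissions even for fork PRs. The triggering workflow name, artifact name, summary file name, and PR lookup are all illustrative assumptions:

```yaml
# Hypothetical companion workflow: runs after the e2e workflow finishes,
# downloads the generated summary artifact, and posts it as a PR comment.
name: Post metrics summary comment
on:
  workflow_run:
    workflows: ["CIT Metrics"]       # assumed name of the e2e workflow
    types: [completed]
permissions:
  pull-requests: write
  actions: read
jobs:
  comment:
    if: github.event.workflow_run.event == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: metrics-summary                       # assumed artifact name
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
      - name: Post comment
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # Look up the PR by head branch; a simplification that may need
          # refinement for fork branches with duplicate names.
          pr=$(gh pr list --repo "$GITHUB_REPOSITORY" \
            --head "${{ github.event.workflow_run.head_branch }}" \
            --json number --jq '.[0].number')
          gh pr comment "$pr" --repo "$GITHUB_REPOSITORY" --body-file summary.md
```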
Hi @yurishkuro, so I think this pull request will solve the first part, but how can we actually test the first part?
Signed-off-by: pipiland2612 <nguyen.t.dang.minh@gmail.com>
Hi @yurishkuro, can you review this?
if python3 ./scripts/e2e/compare_metrics.py --file1 ./.metrics/${{ inputs.snapshot }}.txt --file2 ./.metrics/baseline_${{ inputs.snapshot }}.txt --output ./.metrics/diff_${{ inputs.snapshot }}.txt; then
  echo "No differences found in metrics"
else
  echo "🛑 Differences found in metrics"
  echo "has_diff=true" >> $GITHUB_OUTPUT
The error handling logic has been changed in this PR. Previously, the script would exit with code 1 when differences were found, and the workflow would fail. Now, the script returns exit code 0 for no differences and exit code 1 for differences found, with the has_diff output variable being set only in the latter case.

This approach works for the current use case, but consider handling other potential failure modes. If the script fails for reasons other than finding differences (e.g., file not found, permission issues), the has_diff output won't be set, but the step will still succeed due to continue-on-error: true. This could mask actual errors in the comparison process.

Consider adding more robust error handling to distinguish between "differences found" and "script execution failed for other reasons".
Suggested change:
exit_code=0
python3 ./scripts/e2e/compare_metrics.py --file1 ./.metrics/${{ inputs.snapshot }}.txt --file2 ./.metrics/baseline_${{ inputs.snapshot }}.txt --output ./.metrics/diff_${{ inputs.snapshot }}.txt || exit_code=$?
if [ $exit_code -eq 0 ]; then
  echo "No differences found in metrics"
elif [ $exit_code -eq 1 ]; then
  echo "🛑 Differences found in metrics"
  echo "has_diff=true" >> $GITHUB_OUTPUT
else
  echo "❌ Error executing comparison script (exit code: $exit_code)"
  echo "has_diff=true" >> $GITHUB_OUTPUT
  exit $exit_code
fi
Spotted by Diamond
Which problem is this PR solving?
Description of the changes
How was this change tested?
Checklist
jaeger: make lint test
jaeger-ui: npm run lint and npm run test