# Fix linstor metrics node label #1335
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
Summary of Changes
Hello @kvaps, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses an issue with Linstor metrics labeling in the Prometheus scrape configuration. It corrects the `targetLabel` assignments for both `linstor-controller` and `linstor-satellite` metrics within the `podscrape.yaml` template. The change ensures that controller metrics are correctly associated with a generic `node` label, while satellite metrics are accurately linked to a `controller_node` label, improving the clarity and correctness of the collected monitoring data.
Highlights

- **Linstor Controller Metrics Label Correction:** The `targetLabel` for `linstor-controller` metrics has been updated from `controller_node` to `node`. This ensures that the Prometheus scrape configuration correctly assigns the generic `node` label to controller-related metrics, aligning with standard node identification.
- **Linstor Satellite Metrics Label Adjustment:** The `targetLabel` for `linstor-satellite` metrics has been changed from `node` to `controller_node`. This adjustment ensures that satellite-related metrics are correctly labeled with `controller_node`, distinguishing them from general node metrics and providing more precise context.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes

1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Successfully created backport PR for `release-0.34`.

# Description

Backport of #1335 to `release-0.34`.
Code Review
This pull request correctly swaps the `targetLabel` for node metrics between the `linstor-satellite` and `linstor-controller` `VMPodScrape` configurations. This change ensures that metrics from satellite pods are labeled with the `node` they run on, which is essential for the Grafana dashboard queries to work as intended.
While this change is correct, I've noticed a couple of related inconsistencies that might be worth addressing in a follow-up:

- The `job` labels in `podscrape.yaml` appear to be swapped. The `linstor-satellite` scrape config sets `job: linstor-controller`, and the `linstor-controller` scrape config sets `job: linstor-satellite`. This is confusing and likely incorrect.
- The Grafana dashboard contains a query for `scrape_duration_seconds{job="linstor-node", ...}`. However, based on the provided scrape configurations, neither of them sets the `job` label to `linstor-node`, so this panel in the dashboard might not be working correctly.
These issues are outside the scope of the current changes, but fixing them would improve the overall consistency and correctness of the monitoring setup.
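One way the swapped `job` labels flagged above could be fixed in a follow-up is sketched here. This is a hypothetical relabel entry only; the actual structure of `podscrape.yaml` is assumed (with no `sourceLabels`, a `replace` action simply sets the target label to the static `replacement` value):

```yaml
# Hypothetical follow-up sketch: align each scrape's "job" label with
# the component it actually scrapes.
# In the linstor-satellite VMPodScrape:
relabelConfigs:
  - targetLabel: job
    replacement: linstor-satellite   # reviewer notes it currently says linstor-controller

# In the linstor-controller VMPodScrape:
relabelConfigs:
  - targetLabel: job
    replacement: linstor-controller  # reviewer notes it currently says linstor-satellite
```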
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
What this PR does
Release note