
Conversation

AndrewChubatiuk
Contributor

@AndrewChubatiuk AndrewChubatiuk commented Dec 11, 2024

Describe Your Changes

related issues:

With this change, the suggestion for #3233 is to:

  • set maxLabelsPerTimeseries either to 0 or to the maximum allowed value
  • set label_limit in the global scrape configuration as a default for all scrape jobs
  • override the global label_limit default on certain jobs with a value up to maxLabelsPerTimeseries, if it is defined

This avoids propagating custom label limits into the agent's tryPush function.
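The three steps above can be sketched as a scrape configuration. This is a sketch under the assumption that the option keeps the `label_limit` name from the PR title; the job names and targets are hypothetical:

```yaml
global:
  # default limit for every scrape job; must not exceed -maxLabelsPerTimeseries
  label_limit: 40

scrape_configs:
  # inherits the global label_limit of 40
  - job_name: node
    static_configs:
      - targets: ["host1:9100"]

  # overrides the global default with a higher per-job value,
  # still capped by -maxLabelsPerTimeseries if that flag is defined
  - job_name: high-cardinality
    label_limit: 60
    static_configs:
      - targets: ["host2:9100"]
```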

Checklist

The following checks are mandatory:

@AndrewChubatiuk AndrewChubatiuk force-pushed the label-limit-scrape-param branch 2 times, most recently from 9a6cd6d to 642d074 Compare March 11, 2025 14:47
@AndrewChubatiuk AndrewChubatiuk changed the title lib/promscrape: added label_limit global and per job scrape param lib/promscrape: added labels_limit global and per job scrape param May 29, 2025
@hagen1778
Collaborator

@AndrewChubatiuk kindly pinging you on this

@AndrewChubatiuk AndrewChubatiuk force-pushed the label-limit-scrape-param branch 3 times, most recently from 5250811 to a8d1d74 Compare June 10, 2025 13:30
@AndrewChubatiuk AndrewChubatiuk changed the title lib/promscrape: added labels_limit global and per job scrape param lib/promscrape: added label_limit global and per job scrape param Jun 13, 2025
@AndrewChubatiuk AndrewChubatiuk force-pushed the label-limit-scrape-param branch 2 times, most recently from 602cb6c to f213ad6 Compare June 18, 2025 13:27
@AndrewChubatiuk
Contributor Author

@hagen1778 please review it

@hagen1778 hagen1778 requested a review from makasim June 19, 2025 14:18
@AndrewChubatiuk AndrewChubatiuk force-pushed the label-limit-scrape-param branch 3 times, most recently from 36365b0 to 6b2a9f6 Compare June 20, 2025 11:01
@hagen1778 hagen1778 requested a review from makasim June 26, 2025 12:04
@hagen1778 hagen1778 assigned hagen1778 and unassigned f41gh7 Jun 26, 2025
Contributor

@makasim makasim left a comment

LGTM

```diff
  dst = appendRow(dst, "scrape_timeout_seconds", sw.Config.ScrapeTimeout.Seconds(), timestamp)
  dst = appendRow(dst, "up", float64(am.up), timestamp)

- wc.addRows(sw.Config, dst, timestamp, false)
+ _ = wc.tryAddRows(sw.Config, dst, timestamp, false)
```
Contributor

If it's an impossible case, please add an assert: `logger.Fatalf("BUG: tryAddRows cannot return error for autoMetrics")`.

Collaborator

updated

Contributor

It looks like it's possible to breach this limit and get a panic.

There is a problem. We can either apply the labels limit to auto-metrics as well and log a breach as an error, or ignore the labels limit for them.

I think we should go with the first option. Then, hopefully, the storage can apply its own limits and drop auto-metrics as well.
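The first option discussed above, enforcing the limit for auto-metrics too but logging a breach instead of panicking, can be sketched in Go. `tryAddRow` here is a hypothetical stand-in for the PR's `tryAddRows`; the real function in `lib/promscrape` has a different signature:

```go
package main

import (
	"fmt"
	"log"
)

// tryAddRow rejects a series whose label count exceeds the configured
// limit by returning an error, instead of panicking in the scraper.
// A limit of 0 means "no limit".
func tryAddRow(labels map[string]string, labelLimit int) error {
	if labelLimit > 0 && len(labels) > labelLimit {
		return fmt.Errorf("series has %d labels; label_limit=%d", len(labels), labelLimit)
	}
	// ... append the row to the write batch ...
	return nil
}

func main() {
	autoMetricLabels := map[string]string{
		"__name__": "scrape_samples_scraped",
		"job":      "node",
		"instance": "host1:9100",
	}
	// Enforce the limit for auto-metrics too, but log the breach
	// as an error rather than crashing the scraper.
	if err := tryAddRow(autoMetricLabels, 2); err != nil {
		log.Printf("cannot add auto-metric: %s", err)
	}
}
```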

Collaborator

apply labels limit to auto-metrics as well and log it as error

Did this, see the update.

Contributor

@f41gh7 f41gh7 left a comment

Looks good overall; please see the minor comments.

@hagen1778 hagen1778 force-pushed the label-limit-scrape-param branch from cb24497 to 3eaebd0 Compare July 3, 2025 12:38
@hagen1778 hagen1778 requested a review from f41gh7 July 3, 2025 12:41
AndrewChubatiuk and others added 11 commits July 4, 2025 09:04
Signed-off-by: hagen1778 <roman@victoriametrics.com>
* remove double update of `scrapesSkippedByLabelLimit` on L:1032
* make processing of label_limit and sample_limit alike
* process the error on stale series send, as it gives at least some hints
about what's going on. Before, stale series would be just silently dropped.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
…for consistency with `scrape_samples_limit`

Signed-off-by: hagen1778 <roman@victoriametrics.com>
…ted metrics

Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Co-authored-by: Max Kotliar <kotlyar.maksim@gmail.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
Signed-off-by: hagen1778 <roman@victoriametrics.com>
The limit on the number of labels can be exceeded when adding auto-metrics, so it is a bad idea to panic here.

Signed-off-by: hagen1778 <roman@victoriametrics.com>
@hagen1778 hagen1778 force-pushed the label-limit-scrape-param branch from d999488 to 6771c84 Compare July 4, 2025 07:05
Contributor

@f41gh7 f41gh7 left a comment

LGTM

Co-authored-by: Andrii Chubatiuk <achubatiuk@victoriametrics.com>
@f41gh7 f41gh7 merged commit f085940 into master Jul 4, 2025
8 checks passed
@f41gh7 f41gh7 deleted the label-limit-scrape-param branch July 4, 2025 08:18
@f41gh7
Contributor

f41gh7 commented Jul 4, 2025

Thanks for the contribution!

f41gh7 pushed a commit that referenced this pull request Jul 4, 2025
This commit introduces a new scrape config option, `label_limit`, which defines the label count limit for time series. The whole scrape result is discarded in case of a limit breach.
This option can also be set via the `__label_limit__` label during scrape relabeling, as well as globally for all scrape targets.

Fixes #7660 and #3233
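The relabeling path mentioned in the commit message can be sketched like this. The job name, discovery role, and annotation are hypothetical, and the sketch assumes `__label_limit__` behaves like other `__`-prefixed per-target parameters set during relabeling:

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # raise the label limit only for pods that opt in via annotation
      - source_labels: [__meta_kubernetes_pod_annotation_example_com_high_label_count]
        regex: "true"
        target_label: __label_limit__
        replacement: "100"
```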
Development

Successfully merging this pull request may close these issues.

  • promscrape: support label_limit in scrape config on job level
  • Ability to set maxLabelsPerTimeseries for individual scrape jobs
4 participants