Conversation

lamida
Contributor

@lamida lamida commented Mar 13, 2025

What this PR does

Which issue(s) this PR fixes or relates to

Fixes #

Checklist

  • Tests updated.
  • Documentation added.
  • CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX].
  • about-versioning.md updated with experimental features.

@lamida lamida mentioned this pull request Mar 13, 2025
@lamida lamida marked this pull request as ready for review March 13, 2025 18:35
@lamida lamida requested a review from a team as a code owner March 13, 2025 18:35
@lamida lamida requested a review from charleskorn March 14, 2025 03:09
@lamida lamida enabled auto-merge (squash) March 14, 2025 03:13
@lamida
Contributor Author

lamida commented Mar 14, 2025

@charleskorn @jhesketh thanks for the review. The implementation follows this Wikipedia formula, and I also tried to follow the Prometheus upstream implementation.

(image: the double exponential smoothing recurrences, s_t = α·x_t + (1−α)(s_{t−1} + b_{t−1}) and b_t = β·(s_t − s_{t−1}) + (1−β)·b_{t−1})

A few things to note:

  1. The initialization of b0 depends on whether the samples sit in the head or the tail of the buffer, rather than simply doing b0 = x1 - x0. This part is pretty straightforward.
  2. Most comments and concerns are around this point: when we start calculating bt. In Prometheus, as I mentioned in some comments above, bt is calculated only from the 3rd sample onward.
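For reference, the recurrence and the two points above can be sketched in Go roughly as follows. This is a minimal illustration under my reading of the discussion, not the actual Mimir or Prometheus code; sf and tf stand for the smoothing and trend factors:

```go
package main

import "fmt"

// doubleExponentialSmoothing is a minimal sketch of the recurrence discussed
// above (Holt's linear / double exponential smoothing). sf is the smoothing
// factor, tf is the trend factor.
func doubleExponentialSmoothing(samples []float64, sf, tf float64) (float64, bool) {
	if len(samples) < 2 {
		// Mirrors the guard discussed below: fewer than two float
		// samples yield no result.
		return 0, false
	}
	var s0 float64
	s1 := samples[0]
	b := samples[1] - samples[0] // b0 = x1 - x0 (point 1 above)
	for i := 1; i < len(samples); i++ {
		if i > 1 {
			// The trend is only updated from the 3rd sample onward,
			// matching the Prometheus behaviour described in point 2.
			b = tf*(s1-s0) + (1-tf)*b
		}
		x := sf * samples[i]     // scale the raw value
		y := (1 - sf) * (s1 + b) // scale the previous level plus trend
		s0, s1 = s1, x+y
	}
	return s1, true
}

func main() {
	// A perfectly linear series is reproduced exactly.
	v, ok := doubleExponentialSmoothing([]float64{1, 2, 3, 4, 5}, 0.5, 0.5)
	fmt.Println(v, ok)
}
```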

Since the tests pass for both the upstream and our own test suites, I am fairly confident the implementation is correct. Please suggest test data that could prove it wrong.

I also inlined the calcTrend call to make it clearer here: 1a82d9a

@lamida
Contributor Author

lamida commented Mar 14, 2025

@jhesketh replying to #10873 (comment) (not sure why I can't reply inline there).

> Can you also please add a test for when there are only native histograms?

There won't be any annotation emitted if the samples are only native histograms.

The operator will just return.

	if (len(fHead) + len(fTail)) < 2 {
		return 0, false, nil, nil
	}

So I don't think there is any annotation we can test there. We could test that no annotation is emitted, but I believe that would be pointless.

Please let me know if I understand you incorrectly.

@charleskorn
Contributor

> So I don't think there is any annotation we can test there. We could test that no annotation is emitted, but I believe that would be pointless.

I think it'd be worthwhile adding a test for this: this codifies that we expect that no annotation is returned when only native histograms are selected.

@jhesketh
Contributor

> @jhesketh replying to #10873 (comment) (not sure why I can't reply inline there).
>
> > Can you also please add a test for when there are only native histograms?
>
> There won't be any annotation emitted if the samples are only native histograms.
>
> The operator will just return.
>
> 	if (len(fHead) + len(fTail)) < 2 {
> 		return 0, false, nil, nil
> 	}
>
> So I don't think there is any annotation we can test there. We could test that no annotation is emitted, but I believe that would be pointless.
>
> Please let me know if I understand you incorrectly.

Yes, that was my intention: that we check we have the same annotations, or lack thereof, as Prometheus. This can be done as a .test file.
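For anyone following along, such a case might look roughly like the sketch below in the promql test-script format. The metric name is made up and the histogram-literal syntax is approximate; the real .test file in the commit is authoritative:

```
load 1m
	only_histograms {{schema:0 sum:5 count:4 buckets:[1 2 1]}}x5

# With only native histogram samples, double_exponential_smoothing is
# expected to return no results and emit no annotations.
eval instant at 5m double_exponential_smoothing(only_histograms[5m], 0.5, 0.5)
```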

@lamida
Contributor Author

lamida commented Mar 18, 2025

@jhesketh added a test: 8826a7f

@lamida lamida requested review from jhesketh and charleskorn March 18, 2025 07:51
lamida added 10 commits March 26, 2025 16:15
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
…smoothing

Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
lamida added 5 commits March 27, 2025 11:31
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
@lamida lamida requested review from charleskorn and jhesketh March 27, 2025 03:53
lamida added 2 commits March 27, 2025 12:17
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
// Test each label individually to catch edge cases with a single series.
labelCombinations := testutils.Combinations(labelsToUse, 1)

// We tried to have this test with 2 labels, but it was failing due to the inconsistent order in which Prometheus processes matchers that resolve to multiple series, e.g. series{label=~"(c|e)"}.
// Prometheus might process series c first or series e first, which triggers different validation errors for the second and third parameters of double_exponential_smoothing.
Contributor

I think it's helpful to explain why it gives a different validation error, as you explained to me earlier.
It's not a different argument to the function, it's a different value in the range vector.
That is because, in the range vector of the series being computed against, values are skipped for native histograms until it reaches a value that has a float. That lines up with a different scalar value for the argument and thus gives a different error.
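A tiny sketch of the skipping behaviour described above; the sample type here is a made-up stand-in, not the real promql type:

```go
package main

import "fmt"

// sample is a hypothetical stand-in for a range-vector point that may hold
// either a float or a native histogram.
type sample struct {
	f      float64
	isHist bool
}

// floatsOnly models the behaviour described above: native histogram samples
// are skipped, so the floats that remain line up with different positions in
// the range vector than a naive index would suggest.
func floatsOnly(in []sample) []float64 {
	out := make([]float64, 0, len(in))
	for _, s := range in {
		if !s.isHist {
			out = append(out, s.f)
		}
	}
	return out
}

func main() {
	// The first two points are histograms, so the first usable float is 3.
	mixed := []sample{{0, true}, {0, true}, {3, false}, {4, false}}
	fmt.Println(floatsOnly(mixed))
}
```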

Contributor Author

Effectively it is a different argument that is passed, but the way native histograms are skipped plays a part too. I added your point here: da3a0a3.

Contributor

It's not clear to me why, if we might get different errors, we skip comparing the values - shouldn't we skip comparing the errors?

Contributor Author

When we get a validation error, Value will be nil and we get a panic. We still want to compare the errors to ensure the validation is correct. A more detailed discussion is in this comment: https://github.com/grafana/mimir/pull/10873/files/b1eb614926072c1a01437ddd2e6f53b176459960..aaebaa28df8302ecae7669f0ee65be795718831e#r2015549294

Contributor

If result.Err is not nil, then shouldn't result.Value always be nil?

If so, then I think we can drop skipValueComparison and modify RequireEqualResults to ensure that both results have the same error (or both don't have an error), and if both have the same error, then both Values should be nil.

Contributor Author

@lamida lamida Mar 27, 2025

I still think the original check, which just returns and skips the Value comparison if they are nil, is the simplest.

But let me try to address this suggestion anyway.

> to ensure that both results have the same error (or both don't have an error)

We can achieve this with just the existing first check:

	require.Equal(t, expected.Err, actual.Err)

> and if both have the same error, then both Values should be nil.

	if expected.Err != nil && actual.Err != nil && expected.Err.Error() == actual.Err.Error() {
		require.Nil(t, expected.Value)
		require.Nil(t, actual.Value)
		return
	}

The proposed change is already applied in the last commit.

Contributor Author

Note: I can't use errors.Is(expected.Err, actual.Err); it won't treat the errors as equal even though the error strings are the same.

lamida added 3 commits March 27, 2025 12:37
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
@lamida
Contributor Author

lamida commented Mar 27, 2025

@jhesketh

> I think it would be more useful to have a flag to RequireEqualResults for skipValueComparisonOnError, so we only skip it if we've also gotten an error and then pass that through from the relevant test (if that makes sense?)

The change is a bit large, but this should address your feedback: 611e1b1

Signed-off-by: Jon Kartago Lamida <me@lamida.net>
args args
wantFloat float64
wantHasFloat bool
wantErr bool
Contributor

wantErr is never used - should we just remove it?

Contributor Author

Removed 18188b0

)

func Test_runDoubleExponentialSmoothing(t *testing.T) {
type args struct {
Contributor

[nit] Why not just inline these fields into the struct below?

Contributor Author

It makes it a little clearer which fields are the arguments and which are the wants. I think I will just keep it this way.

// Test each label individually to catch edge cases with a single series.
labelCombinations := testutils.Combinations(labelsToUse, 1)

// We tried to have this test with 2 labels, but it was failing due to the inconsistent order in which Prometheus processes matchers that resolve to multiple series, e.g. series{label=~"(c|e)"}.
// Prometheus might process series c first or series e first, which triggers different validation errors for the second and third parameters of double_exponential_smoothing.
Contributor

It's not clear to me why, if we might get different errors, we skip comparing the values - shouldn't we skip comparing the errors?

Signed-off-by: Jon Kartago Lamida <me@lamida.net>
lamida added 6 commits March 27, 2025 14:14
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
This reverts commit 611e1b1.

Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
Signed-off-by: Jon Kartago Lamida <me@lamida.net>
@lamida lamida requested review from jhesketh and charleskorn March 27, 2025 07:13
Contributor

@charleskorn charleskorn left a comment

LGTM modulo suggestion below

@@ -20,6 +20,13 @@ import (
// It's possible that floating point values are slightly different due to imprecision, but require.Equal doesn't allow us to set an allowable difference.
func RequireEqualResults(t testing.TB, expr string, expected, actual *promql.Result, skipAnnotationComparison bool) {
require.Equal(t, expected.Err, actual.Err)

if expected.Err != nil && actual.Err != nil && expected.Err.Error() == actual.Err.Error() {
Contributor

This should be sufficient:

Suggested change
if expected.Err != nil && actual.Err != nil && expected.Err.Error() == actual.Err.Error() {
if expected.Err != nil {

We've already checked that expected.Err and actual.Err are equal above, so we only need to check if there's an error.

@lamida lamida merged commit 187f406 into main Mar 27, 2025
26 checks passed
@lamida lamida deleted the lamida/mqe-double-exponential-smoothing branch March 27, 2025 09:19