Describe the bug
I have hit a strange situation while tinkering with slo-exporter for the first time. I suspect processMatrixResultAsIncrease calculates Prometheus metric increases incorrectly when the query interval is shorter than the interval at which data is updated in Prometheus. I have been manually replaying the queries reported by the debug-level log, and most of the time Prometheus returns no result. Meanwhile I run a loop that hits the foo service with several requests each second. Strangely, I see no events emitted even once Prometheus finally updates the values.
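For reference, replaying one of those logged queries can be sketched with the official prometheus/client_golang API; this is purely illustrative (the address and query are taken from the config below, error handling is trimmed):

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
	"github.com/prometheus/common/model"
)

func main() {
	client, err := api.NewClient(api.Config{Address: "http://127.0.0.1:9090"})
	if err != nil {
		panic(err)
	}
	promAPI := v1.NewAPI(client)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Same range selector as in the logged query below.
	query := `istio_requests_total{destination_canonical_service="foo"}[5s]`
	result, warnings, err := promAPI.Query(ctx, query, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}

	// A range selector yields a matrix; most of the time it comes back empty.
	matrix, ok := result.(model.Matrix)
	fmt.Printf("matrix=%v series=%d\n", ok, len(matrix))
}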
I suspect this assignment erases any previously collected samples whenever Prometheus answers with no results. When the ingester finally collects something, there is nothing to compare it to. I suppose the correct way to update previousResult would be to merge the new metrics into it, e.g.:
// update merges newResult into the receiver instead of replacing it,
// so samples survive query executions that return an empty matrix.
func (r *queryResult) update(newResult queryResult) {
	r.timestamp = newResult.timestamp
	for k, v := range newResult.metrics {
		r.metrics[k] = v
	}
}
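Merging this way means a series that temporarily yields no samples keeps its last known value around for the next increase computation, while the timestamp still advances to the latest execution. Presumably series that disappear for good would then linger in the map and might eventually need some staleness handling, but that seems preferable to losing the baseline whenever a single query window comes back empty.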
How to reproduce the bug
I run the prometheusIngester module with these settings:
modules:
  prometheusIngester:
    apiUrl: "http://127.0.0.1:9090"
    queryTimeout: 30s
    queries:
      - type: counter_increase
        query: 'istio_requests_total{destination_canonical_service="foo"}'
        interval: 5s
Then I saw a never-ending stream of messages like this:
time="2021-10-06T18:02:42.269+02:00" level=debug msg="executed query" component=prometheusIngester duration=7.28613ms query="istio_requests_total{destination_canonical_service=\"foo\"}[5s]" timestamp="2021-10-06 18:02:42.261701125 +0200 CEST m=+180.026099376"
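For context on the numbers: assuming a Prometheus scrape interval of, say, 30s (my exact scrape config may differ), roughly five out of six of these [5s] range queries cover no sample at all. Every empty execution then overwrites previousResult with an empty result, so when a window finally does contain a sample there is no previous value to compute an increase against, which would explain why nothing is ever emitted.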
Expected behavior
I expect messages from sloEventProducer etc.
Additional context
None