Report
I have a ScaledObject with a New Relic trigger whose query isn't returning any results. Upon creating this scaler, the keda-operator pod logs a panic instead of correctly handling the empty result and returning 0 (or an error). This is not an authentication issue, as other workloads with valid queries scale without problems. I have found similar issues related to panics in the New Relic scaler in the past, but none that correlated directly with query execution.
Expected Behavior
The scaler returns 0, or an error if noDataError is set.
Actual Behavior
KEDA panics and the operator enters a CrashLoopBackOff until the offending trigger's query is adjusted or the trigger is removed.
Steps to Reproduce the Problem
This is the problematic query:
SELECT latest(MyMetric.value) FROM MyMetric WHERE metricName = 'something' FACET host
which produces a result in the New Relic UI.
If we remove the faceting, the response is slightly different and causes no panic.
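For context, the ScaledObject trigger looks roughly like this (resource names, account ID, threshold, and credential wiring are illustrative placeholders; only the NRQL query is the real one):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-workload-scaler          # illustrative name
spec:
  scaleTargetRef:
    name: my-workload               # illustrative Deployment
  triggers:
    - type: new-relic
      metadata:
        account: "1234567"          # placeholder account ID
        region: "US"
        queryKey: "NRAK-XXXXXXXX"   # normally supplied via a TriggerAuthentication
        threshold: "100"            # placeholder threshold
        nrql: "SELECT latest(MyMetric.value) FROM MyMetric WHERE metricName = 'something' FACET host"
```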
Logs from KEDA operator
2024/03/21 16:49:09 maxprocs: Updating GOMAXPROCS=2: determined from CPU quota
I0321 16:49:10.103195 1 leaderelection.go:250] attempting to acquire leader lease keda/operator.keda.sh...
I0321 16:49:26.526481 1 leaderelection.go:260] successfully acquired lease keda/operator.keda.sh
panic: runtime error: index out of range [0] with length 0
goroutine 683 [running]:
github.com/kedacore/keda/v2/pkg/scalers.(*newrelicScaler).executeNewRelicQuery(0xc002a96dc0, {0x4906c80?, 0xc004b919a0?})
/workspace/pkg/scalers/newrelic_scaler.go:162 +0x1fb
github.com/kedacore/keda/v2/pkg/scalers.(*newrelicScaler).GetMetricsAndActivity(0xc002a96dc0, {0x4906c80?, 0xc004b919a0?}, {0xc004d65d00, 0xc})
/workspace/pkg/scalers/newrelic_scaler.go:175 +0x4a
github.com/kedacore/keda/v2/pkg/scaling/cache.(*ScalersCache).GetMetricsAndActivityForScaler(0xc002a96e00, {0x4906c80, 0xc004b919a0}, 0x4, {0xc004d65d00, 0xc})
/workspace/pkg/scaling/cache/scalers_cache.go:130 +0x184
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).getScalerState(0xc0000f97a0, {0x4906c80, _}, {_, _}, _, {{0xc004c11190, 0xc}, {0xc004c111a0, 0xc}, ...}, ...)
/workspace/pkg/scaling/scale_handler.go:743 +0x3c6
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).getScaledObjectState.func1({0x48ffaa0?, 0xc002a96dc0?}, 0x8ec8c5?, {{0xc004c11190, 0xc}, {0xc004c111a0, 0xc}, {0x35b3722, 0xc}, 0xb2d05e00, ...}, ...)
/workspace/pkg/scaling/scale_handler.go:628 +0xc5
created by github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).getScaledObjectState in goroutine 677
/workspace/pkg/scaling/scale_handler.go:627 +0x805
KEDA Version
2.13.1
Kubernetes Version
1.26
Platform
Amazon Web Services
Scaler Details
New Relic
Anything else?
Given the stack trace, I looked into the source code, and this line (pkg/scalers/newrelic_scaler.go:162) seems to be the origin: in the original query's scenario there are no actual results to iterate over (not sure why; lack of support in New Relic's library, perhaps?). I believe something like this would be an easy fix:
if len(resp.Results) == 0 {
	if s.metadata.noDataError {
		return 0, fmt.Errorf("query returned no results: %s", s.metadata.nrql)
	}
	return 0, nil
}
// Only use the first result from the query, as the query should not be multi-row
for _, v := range resp.Results[0] {
	val, ok := v.(float64)
	if ok {
		return val, nil
	}
}
...
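For completeness: with such a guard in place, the existing noDataError option used in the snippet would let users opt into an error instead of a silent 0 when the query returns no rows, roughly like this (illustrative trigger fragment, other parameters omitted):

```yaml
triggers:
  - type: new-relic
    metadata:
      nrql: "SELECT latest(MyMetric.value) FROM MyMetric WHERE metricName = 'something' FACET host"
      threshold: "100"
      noDataError: "true"   # return an error instead of 0 when the query yields no results
      # account / queryKey etc. omitted for brevity
```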
I'd be happy to open a contribution if this indeed seems like a reasonable way to go.