Closed
Labels: bug
Description
Report
The KEDA operator is deployed and running. Applying the following ScaledObject:
```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"keda.sh/v1alpha1","kind":"ScaledObject","metadata":{"annotations":{"scaledobject.keda.sh/transfer-hpa-ownership":"true"},"labels":{"app":"apollo","app.kubernetes.io/instance":"apollo"},"name":"apollo","namespace":"apollo"},"spec":{"advanced":{"horizontalPodAutoscalerConfig":{"behavior":{"scaleDown":{"policies":[{"periodSeconds":60,"type":"Pods","value":1}],"stabilizationWindowSeconds":120},"scaleUp":{"policies":[{"periodSeconds":45,"type":"Pods","value":2}],"stabilizationWindowSeconds":60}},"name":"apollo"}},"fallback":{"failureThreshold":3,"replicas":1},"maxReplicaCount":10,"minReplicaCount":1,"scaleTargetRef":{"apiVersion":"argoproj.io/v1alpha1","kind":"Rollout","name":"apollo"},"triggers":[{"metadata":{"adminURL":"{url}","isPartitionedTopic":"false","msgBacklogThreshold":"5","subscription":"apollo","topic":"{topic}"},"type":"pulsar"}]}}
    scaledobject.keda.sh/transfer-hpa-ownership: 'true'
  creationTimestamp: '2023-12-08T15:59:58Z'
  finalizers:
    - finalizer.keda.sh
  generation: 3
  labels:
    app: apollo
    app.kubernetes.io/instance: apollo
    scaledobject.keda.sh/name: apollo
  name: apollo
  namespace: apollo
  resourceVersion: '294765183'
  uid: b2a6151b-aa7b-4306-8345-b3a6468fdc0f
spec:
  advanced:
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          policies:
            - periodSeconds: 60
              type: Pods
              value: 1
          stabilizationWindowSeconds: 120
        scaleUp:
          policies:
            - periodSeconds: 45
              type: Pods
              value: 2
          stabilizationWindowSeconds: 60
      name: apollo
    scalingModifiers: {}
  fallback:
    failureThreshold: 3
    replicas: 1
  maxReplicaCount: 10
  minReplicaCount: 1
  scaleTargetRef:
    apiVersion: argoproj.io/v1alpha1
    kind: Rollout
    name: apollo
  triggers:
    - metadata:
        adminURL: 'http://{url}'
        isPartitionedTopic: 'false'
        msgBacklogThreshold: '5'
        subscription: apollo
        topic: 'persistent://{topic}'
      type: pulsar
status:
  conditions:
    - message: ScaledObject is defined correctly and is ready for scaling
      reason: ScaledObjectReady
      status: 'True'
      type: Ready
    - status: Unknown
      type: Active
    - status: Unknown
      type: Fallback
    - status: Unknown
      type: Paused
  externalMetricNames:
    - s0-pulsar-persistent---{topic}
  hpaName: apollo
  originalReplicaCount: 1
  scaleTargetGVKR:
    group: argoproj.io
    kind: Rollout
    resource: rollouts
    version: v1alpha1
  scaleTargetKind: argoproj.io/v1alpha1.Rollout
```
makes the operator crash with the following logs:
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x78 pc=0x35a419c]

goroutine 288 [running]:
github.com/kedacore/keda/v2/pkg/scalers.(*pulsarScaler).GetStats(0xc000cb0ab0, {0x49fb048, 0xc0002a54f0})
	/workspace/pkg/scalers/pulsar_scaler.go:247 +0x13c
github.com/kedacore/keda/v2/pkg/scalers.(*pulsarScaler).getMsgBackLog(0xc000cb0ab0, {0x49fb048?, 0xc0002a54f0?})
	/workspace/pkg/scalers/pulsar_scaler.go:287 +0x46
github.com/kedacore/keda/v2/pkg/scalers.(*pulsarScaler).GetMetricsAndActivity(0xc000cb0ab0, {0x49fb048?, 0xc0002a54f0?}, {0xc0011e40f0, 0x41})
	/workspace/pkg/scalers/pulsar_scaler.go:303 +0x56
github.com/kedacore/keda/v2/pkg/scaling/cache.(*ScalersCache).GetMetricsAndActivityForScaler(0xc000cff7c0, {0x49fb048, 0xc0002a54f0}, 0x0, {0xc0011e40f0, 0x41})
	/workspace/pkg/scaling/cache/scalers_cache.go:129 +0x191
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).getScalerState(0x58, {0x49fb048, 0xc0002a54f0}, {0x49ea470?, 0xc000cb0ab0}, 0xc0006d9b00?, {{0xc000eb9050, 0x6}, {0xc000eb9056, 0x6}, ...}, ...)
	/workspace/pkg/scaling/scale_handler.go:699 +0x3dc
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).getScaledObjectState(0x0?, {0x49fb048, 0xc0002a54f0}, 0xc000743e00)
	/workspace/pkg/scaling/scale_handler.go:590 +0x969
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).checkScalers(0xc00032f730, {0x49fb048, 0xc0002a54f0}, {0x42a3720?, 0xc000743e00?}, {0x49e5128, 0xc0011c62a0})
	/workspace/pkg/scaling/scale_handler.go:241 +0x271
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).startScaleLoop(0x0?, {0x49fb048, 0xc0002a54f0}, 0xc0007048c0, {0x42a3720, 0xc000743e00}, {0x49e5128, 0xc0011c62a0}, 0x0?)
	/workspace/pkg/scaling/scale_handler.go:180 +0x446
created by github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).HandleScalableObject
	/workspace/pkg/scaling/scale_handler.go:126 +0x55d
```
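The trace points at `(*pulsarScaler).GetStats` dereferencing something that is only populated when authentication is configured. The general failure mode can be sketched in Go as follows; this is an illustration of the nil-pointer pattern and a guard against it, not KEDA's actual code, and the names `pulsarAuth`, `CABundle`, and the method names are hypothetical:

```go
package main

import "fmt"

// pulsarAuth stands in for optional authentication settings; it stays
// nil when the trigger omits the authModes parameter.
type pulsarAuth struct {
	CABundle string
}

type pulsarScaler struct {
	auth *pulsarAuth // nil when no authModes provided
}

// getStatsUnguarded dereferences auth unconditionally, so it panics with
// a nil pointer dereference when auth was never configured -- the same
// class of failure as the SIGSEGV in the logs above.
func (s *pulsarScaler) getStatsUnguarded() string {
	return s.auth.CABundle
}

// getStatsGuarded checks for nil before dereferencing, treating a
// missing auth config as "make an unauthenticated request".
func (s *pulsarScaler) getStatsGuarded() string {
	if s.auth == nil {
		return ""
	}
	return s.auth.CABundle
}

func main() {
	s := &pulsarScaler{} // authModes omitted -> auth stays nil
	fmt.Println(s.getStatsGuarded() == "") // safe: prints true
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r) // nil pointer dereference
		}
	}()
	s.getStatsUnguarded() // panics, mirroring the operator crash
}
```

Either the scaler should tolerate a nil auth config, or metadata parsing should reject the trigger with a validation error instead of letting the operator panic at metric-collection time.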
Access to Pulsar is unauthenticated in this setup, so there was no point in specifying authentication.
Expected Behavior
The ScaledObject is successfully submitted and the KEDA operator keeps running.
Actual Behavior
The KEDA operator crashes with a nil pointer dereference.
Steps to Reproduce the Problem
Submit a ScaledObject with a Pulsar scaler without providing the `authModes` parameter.
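A trimmed-down trigger section reproducing this is sketched below; `{url}` and `{topic}` are placeholders for a real Pulsar admin URL and topic, and the field names follow the trigger in the report above:

```yaml
triggers:
  - type: pulsar
    metadata:
      adminURL: 'http://{url}'
      topic: 'persistent://{topic}'
      subscription: apollo
      msgBacklogThreshold: '5'
      # authModes deliberately omitted -> operator panics in GetStats
```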
Logs from KEDA operator
No response
KEDA Version
2.12.0
Kubernetes Version
None
Platform
Amazon Web Services
Scaler Details
Apache Pulsar
Anything else?
Kubernetes v1.28