What happened:
When a pod's endpoint changes (e.g., on a pod restart or a scaling event), the HAProxy Ingress Controller reloads completely, making all Ingress resources unavailable.
What you expected to happen:
Only the endpoint information should be updated dynamically, without triggering a full reload of the Ingress Controller.
How to reproduce this issue:
- Create an Ingress:
```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tenant-bbb
  namespace: default
  annotations:
    haproxy.org/ssl-passthrough: "true"
spec:
  ingressClassName: haproxy
  rules:
  - host: xxx.xxx.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tenant-bbb
            port:
              number: 6443 # Forward HTTPS traffic directly
EOF
```
- Restart the backend pod (which changes the endpoint):
```sh
kubectl delete pod <backend-pod>
```
- Observe that the Ingress Controller reloads completely, impacting all Ingress resources (a sketch for watching these reloads follows this list). Controller output:
```text
2025/03/10 06:23:23 INFO handler/https.go:199 [transactionID=bea73b1d-bdcd-4a83-9def-d8e0f823b039] reload required : SSLPassthrough disabled
2025/03/10 06:23:23 INFO handler/refresh.go:43 [transactionID=bea73b1d-bdcd-4a83-9def-d8e0f823b039] reload required : some backends are deleted
[NOTICE] (69) : Reloading HAProxy
[NOTICE] (69) : Initializing new worker (150)
[NOTICE] (69) : Loading success.
[WARNING] (131) : Proxy healthz stopped (cumulated conns: FE: 38, BE: 0).
[WARNING] (131) : Proxy http stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] (131) : Proxy ssl stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] (131) : Proxy stats stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] (131) : Proxy https stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] (131) : Proxy {xxx} stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] (131) : Proxy {xxx} stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] (131) : Proxy {xxx} stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] (131) : Proxy haproxy-controller_default-local-service_http stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] (131) : Proxy haproxy-controller_prometheus_http stopped (cumulated conns: FE: 0, BE: 0).
[WARNING] (131) : Proxy ssl stopped (cumulated conns: FE: 0, BE: 0).
2025/03/10 06:23:23 INFO controller/controller.go:209 [transactionID=bea73b1d-bdcd-4a83-9def-d8e0f823b039] HAProxy reloaded
[NOTICE] (69) : haproxy version is 3.1.5-076df02
[WARNING] (69) : Former worker (131) exited with
```
- Afterwards, there is no Ingress resource:
```sh
kubectl get ing
```
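To watch for these reloads while reproducing, something like the following can be used (a minimal sketch; the haproxy-controller namespace matches the logs above, but the deployment name haproxy-kubernetes-ingress is an assumption, adjust it to your install):
```sh
# Stream the controller logs and keep only reload-related lines
# (deployment name is assumed; adjust to your install).
kubectl logs -n haproxy-controller deploy/haproxy-kubernetes-ingress -f \
  | grep -Ei 'reload required|reloading haproxy'
```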
Anything else we need to know:
This behavior significantly impacts multi-tenant environments where multiple Ingress resources rely on the same Ingress Controller.
Recovering all Ingress resources every time an endpoint changes leads to downtime and increased latency.
By the way, I found that adding a finalizer to the Ingress resolves the issue, but I am not sure whether that is a proper fix. A sketch of the workaround follows.
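Roughly, the workaround looks like this (a minimal sketch against the manifest above; the finalizer name example.com/block-delete is made up, any non-empty value should behave the same):
```sh
# Add a (hypothetical) finalizer so the Ingress object cannot be deleted
# until the finalizer is removed again.
kubectl patch ingress tenant-bbb -n default --type=merge \
  -p '{"metadata":{"finalizers":["example.com/block-delete"]}}'
```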