MetalLB Version
0.14.3
Deployment method
Charts
Main CNI
calico
Kubernetes Version
1.27.3
Cluster Distribution
kubeadm
Describe the bug
With 0.13.12, one of the speaker pods logs the following:
{"caller":"main.go:374","event":"serviceAnnounced","ips":["10.20.10.190"],"level":"info","msg":"service has IP, announcing","pool":"default","protocol":"layer2","ts":"2024-02-04T11:13:50Z"}
With 0.14.3, the speaker pods no longer contain such a log message and ARP responses are not being sent; arping stops working.
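A quick way to check the announcement from another host on the same L2 segment is roughly the following (the interface name is an assumption; substitute the NIC attached to the 10.20.10.0/24 network):

```bash
# Probe the LoadBalancer IP from a host on the same L2 segment.
# eth0 is an assumed interface name; use the NIC on the 10.20.10.0/24 network.
arping -I eth0 -c 3 10.20.10.190
# On 0.13.12 a speaker node replies; after upgrading to 0.14.3 no replies come back.
```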
Here's my config:
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  creationTimestamp: null
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 10.20.10.190-10.20.10.190
status: {}
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  creationTimestamp: null
  name: l2advertisement1
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
status: {}
```
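For context, the kind of Service that consumes this pool looks roughly like the sketch below; the name, namespace, selector, and ports are hypothetical placeholders, and the real Service definition plus its EndpointSlices will be attached as noted in the checklist.

```bash
# Hypothetical LoadBalancer Service that would get 10.20.10.190 from the "default" pool.
# Name, namespace, selector, and ports are placeholders for illustration only.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-lb
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
EOF
```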
To Reproduce
Upgrade from 0.13.12 to 0.14.3 using the provided config.
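The upgrade itself was done via the chart, roughly as below; the release name and chart repository are assumptions, adjust them to whatever your deployment uses:

```bash
# Assumed Helm release name "metallb" and chart reference "metallb/metallb";
# adjust to the repo and release actually used in your cluster.
helm repo update
helm upgrade metallb metallb/metallb --namespace metallb-system --version 0.14.3
```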
Expected Behavior
Speaker pods continue to respond to ARP requests for the LoadBalancer IP after the upgrade.
Additional Context
I have two clusters, production and development, and both exhibit the same behavior after the upgrade.
I've seen only one error in the logs, and it seems to be related to the legacy AddressPool CRD:
{"level":"error","ts":"2024-02-04T13:20:38Z","logger":"cert-rotation","msg":"Webhook not found. Unable to update certificate.","name":"addresspools.metallb.io","gvk":"apiextensions.k8s.io/v1, Kind=CustomResourceDefinition","error":"CustomResourceDefinition.apiextensions.k8s.io \"addresspools.metallb.io\" not found","stacktrace":"github.com/open-policy-agent/cert-controller/pkg/rotator.(*ReconcileWH).ensureCerts\n\t/bitnami/blacksmith-sandox/metallb-0.14.3/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.1/pkg/rotator/rotator.go:816\ngithub.com/open-policy-agent/cert-controller/pkg/rotator.(*ReconcileWH).Reconcile\n\t/bitnami/blacksmith-sandox/metallb-0.14.3/pkg/mod/github.com/open-policy-agent/cert-controller@v0.10.1/pkg/rotator/rotator.go:785\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/bitnami/blacksmith-sandox/metallb-0.14.3/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/bitnami/blacksmith-sandox/metallb-0.14.3/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/bitnami/blacksmith-sandox/metallb-0.14.3/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/bitnami/blacksmith-sandox/metallb-0.14.3/pkg/mod/sigs.k8s.io/controller-runtime@v0.16.3/pkg/internal/controller/controller.go:227"}
I've read and agree with the following
- I've checked all open and closed issues and my request is not there.
- I've checked all open and closed pull requests and my request is not there.
I've read and agree with the following
- I've checked all open and closed issues and my issue is not there.
- This bug is reproducible when deploying MetalLB from the main branch
- I have read the troubleshooting guide and I am still not able to make it work
- I checked the logs and MetalLB is not discarding the configuration as not valid
- I enabled the debug logs, collected the information required from the cluster using the collect script and will attach them to the issue
- I will provide the definition of my service and the related endpoint slices and attach them to this issue