Description
Is there an existing issue for this?
- I have searched the existing issues
What happened?
After upgrading to Cilium v1.13.0, our Cilium agents will not start when using v1.Node annotations with the kubernetes host-scope IPAM mode. This happens on fresh deployments as well.
Context: We want more control over what IPv4 podCIDR range ends up on what host and what router IP Cilium will claim for its host interface and health endpoint within that subnet.
This was possible only (I believe, but please correct me if I'm wrong) by:

Setting in Talos OS:

```yaml
controllerManager:
  extraArgs:
    allocate-node-cidrs: false
```

Setting in Cilium:

```yaml
ipam:
  mode: kubernetes
k8s:
  requireIPv4PodCIDR: true
```
Then during deployment of Cilium run:

```shell
kubectl annotate node mynode io.cilium.network.ipv4-cilium-host=10.20.30.1   # example IP
kubectl annotate node mynode io.cilium.network.ipv4-pod-cidr=10.20.30.0/24   # example CIDR
```
However that does not seem to work anymore.
This change is not mentioned in the release notes, but judging by the changes in the documentation the annotations now appear to be:

```shell
kubectl annotate node mynode network.cilium.io/ipv4-cilium-host=10.20.30.1   # example IP
kubectl annotate node mynode network.cilium.io/ipv4-pod-cidr=10.20.30.0/24   # example CIDR
```
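For what it's worth, I verified the annotations actually landed on the node (node name and values are the same example values as above) with something like:

```shell
# Dump all annotations on the node; both network.cilium.io/ keys
# should appear here after running the annotate commands above.
kubectl get node mynode -o jsonpath='{.metadata.annotations}'
```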
However, that still doesn't seem to work. The Cilium agents just keep logging "Waiting for k8s node information" and crash-looping on "required IPv4 PodCIDR not available".
Switching to another IPAM mode works fine.
Staying in `ipam: kubernetes` mode but setting a PodCIDR in the v1.Node resource field `spec.podCIDR(s)` also works fine. So the issue seems limited to the v1.Node annotations.

Currently the only way of manually assigning subnets to specific hosts, or of manually setting the host interface and cilium-health endpoint IP, is kubernetes host-scope IPAM mode with node annotations. As such, this bug is preventing us from upgrading to Cilium v1.13.0.

Any clue as to why the v1.Node IPAM annotations no longer work while everything else does?
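For completeness, the `spec.podCIDR` workaround that does work can be sketched like this (node name and CIDR are the same example values as above; note that Kubernetes treats `spec.podCIDR` as immutable once set, so this only succeeds on a node where it is still empty):

```shell
# Example only: set the PodCIDR directly on the v1.Node resource
# instead of relying on the host-scope annotations.
kubectl patch node mynode --type merge \
  -p '{"spec":{"podCIDR":"10.20.30.0/24","podCIDRs":["10.20.30.0/24"]}}'
```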
Cilium Version
v1.13.0
Kernel Version
v6.1.12
Kubernetes Version
v1.26.1
Sysdump
No response
Relevant log output
```
level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
level=info msg="Retrieved node information from kubernetes node" nodeName=testnode1 subsys=k8s
level=warning msg="Waiting for k8s node information" error="required IPv4 PodCIDR not available" subsys=k8s
level=info msg="Retrieved node information from kubernetes node" nodeName=testnode1 subsys=k8s
level=warning msg="Waiting for k8s node information" error="required IPv4 PodCIDR not available" subsys=k8s
level=info msg="Retrieved node information from kubernetes node" nodeName=testnode1 subsys=k8s
level=warning msg="Waiting for k8s node information" error="required IPv4 PodCIDR not available" subsys=k8s
level=info msg="Retrieved node information from kubernetes node" nodeName=testnode1 subsys=k8s
level=warning msg="Waiting for k8s node information" error="required IPv4 PodCIDR not available" subsys=k8s
level=info msg="Retrieved node information from kubernetes node" nodeName=testnode1 subsys=k8s
level=warning msg="Waiting for k8s node information" error="required IPv4 PodCIDR not available" subsys=k8s
level=info msg="Retrieved node information from kubernetes node" nodeName=testnode1 subsys=k8s
level=warning msg="Waiting for k8s node information" error="required IPv4 PodCIDR not available" subsys=k8s
level=info msg="Retrieved node information from kubernetes node" nodeName=testnode1 subsys=k8s
level=warning msg="Waiting for k8s node information" error="required IPv4 PodCIDR not available" subsys=k8s
```
Anything else?
No response
Code of Conduct
- I agree to follow this project's Code of Conduct