Description
Is there an existing issue for this?
- I have searched the existing issues
What happened?
Limiting Identity-Relevant Labels by setting the `labels` value in the `cilium-config` ConfigMap breaks CiliumNetworkPolicies which use `cluster` in `fromEntities`. All traffic is dropped because Cilium does not recognize that it originates from in-cluster peers (the `cluster` entity).
I suspect this is caused by the `k8s:io.cilium.k8s.policy.cluster` identity label no longer being set (automatically) as soon as any inclusive label rules are configured.
Given the following `CiliumNetworkPolicy`:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: coredns
  namespace: kube-system
spec:
  endpointSelector:
    matchLabels:
      k8s-app: coredns
  ingress:
    # Allow DNS access from any namespace and pod inside this Kubernetes cluster.
    - fromEntities:
        - cluster
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
        - ports:
            - port: "53"
              protocol: TCP
```
Scenario 1 - Identity-Relevant Labels are not constrained
Using Cilium's default settings, which do not constrain (limit) the set of identity-relevant labels, everything works as expected: all in-cluster peers (Pods) can successfully connect to the coredns Pods protected by the above `CiliumNetworkPolicy`. In that case the `.status.identity.labels` in the coredns Pods' corresponding `CiliumEndpoint` object looks like this:
```yaml
status:
  ...
  identity:
    ...
    labels:
      - k8s:app.kubernetes.io/instance=coredns
      - k8s:app.kubernetes.io/name=coredns
      - >-
        k8s:io.cilium.k8s.namespace.labels.config.linkerd.io/admission-webhooks=disabled
      - >-
        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
      - k8s:io.cilium.k8s.policy.cluster=default
      - k8s:io.cilium.k8s.policy.serviceaccount=coredns
      - k8s:io.kubernetes.pod.namespace=kube-system
      - k8s:k8s-app=coredns
```
Scenario 2 - Identity-Relevant Labels are constrained
Constraining (limiting) the set of identity-relevant labels by setting `labels` in the `cilium-config` ConfigMap like this:

```yaml
...
data:
  labels: >-
    k8s:app k8s:k8s-app k8s:kubernetes.io/metadata.name
...
```
causes traffic from in-cluster peers (Pods) to the coredns Pods to be rejected (dropped). The `.status.identity.labels` in the coredns Pods' corresponding `CiliumEndpoint` object then looks like this:
```yaml
status:
  ...
  identity:
    ...
    labels:
      - k8s:app.kubernetes.io/instance=coredns
      - k8s:app.kubernetes.io/name=coredns
      - >-
        k8s:io.cilium.k8s.namespace.labels.config.linkerd.io/admission-webhooks=disabled
      - >-
        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
      - k8s:io.kubernetes.pod.namespace=kube-system
      - k8s:k8s-app=coredns
```
The notable difference between scenarios 1 and 2 is that in scenario 2 the labels `k8s:io.cilium.k8s.policy.cluster=default` and `k8s:io.cilium.k8s.policy.serviceaccount=coredns` are not present in the `CiliumEndpoint`.
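This behavior is consistent with simple prefix-based label filtering. The following Python sketch is a simplified, hypothetical model (not Cilium's actual filtering code, which also applies built-in default rules) illustrating why the `k8s:io.cilium.k8s.policy.*` labels disappear under the constrained filter and survive once the `k8s:io.cilium.k8s.policy` prefix is added:

```python
# Simplified, hypothetical model of inclusive label filtering -- a sketch
# for illustration only, not Cilium's actual implementation. A label is
# kept if its key starts with one of the configured inclusive prefixes.

def filter_labels(labels, prefixes):
    """Return only the labels whose key matches an inclusive prefix."""
    return [
        label for label in labels
        if any(label.split("=", 1)[0].startswith(p) for p in prefixes)
    ]

# Labels observed on the coredns CiliumEndpoint (abridged).
endpoint_labels = [
    "k8s:app.kubernetes.io/name=coredns",
    "k8s:io.cilium.k8s.policy.cluster=default",
    "k8s:io.cilium.k8s.policy.serviceaccount=coredns",
    "k8s:k8s-app=coredns",
]

# Scenario 2: the constrained filter drops the policy.cluster label,
# so the identity no longer carries cluster membership.
limited = ["k8s:app", "k8s:k8s-app", "k8s:kubernetes.io/metadata.name"]
print(filter_labels(endpoint_labels, limited))
# ['k8s:app.kubernetes.io/name=coredns', 'k8s:k8s-app=coredns']

# Additionally allowing the k8s:io.cilium.k8s.policy prefix keeps all
# four labels, including k8s:io.cilium.k8s.policy.cluster=default.
workaround = limited + ["k8s:io.cilium.k8s.policy"]
print(filter_labels(endpoint_labels, workaround))
```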
Workaround
Explicitly allowing `k8s:io.cilium.k8s.policy`-prefixed labels by adding the following additional inclusive label rule to the `cilium-config` ConfigMap resolved the problem:

```yaml
...
data:
  labels: >-
    k8s:app k8s:k8s-app k8s:kubernetes.io/metadata.name k8s:io.cilium.k8s.policy
...
```
With that setting, both missing identity labels (`k8s:io.cilium.k8s.policy.cluster=default` and `k8s:io.cilium.k8s.policy.serviceaccount=coredns`) are set in the corresponding `CiliumEndpoint`, and in-cluster clients can successfully connect to the coredns Pods.
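For Helm-based installations, the same workaround can presumably be expressed in the chart values instead of editing the ConfigMap directly. The snippet below is a hedged sketch that assumes the Cilium chart's top-level `labels` value populates the `labels` key of the `cilium-config` ConfigMap:

```yaml
# Hypothetical Helm values sketch -- assumes the chart's `labels` value
# maps to the `labels` key in the cilium-config ConfigMap.
labels: "k8s:app k8s:k8s-app k8s:kubernetes.io/metadata.name k8s:io.cilium.k8s.policy"
```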
Expected Behavior
The prefix `k8s:io.cilium.k8s.policy` (or maybe even `k8s:io.cilium.k8s`) should always be automatically added to the set of identity-relevant labels.
Cilium Version
1.11.1
Kernel Version
- 5.4.156
- 5.10.75
Kubernetes Version
1.21 (EKS)
Sysdump
No response
Relevant log output
No response
Anything else?
No response
Code of Conduct
- I agree to follow this project's Code of Conduct