Closed
Labels
area/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.
area/loadbalancing: Impacts load-balancing and Kubernetes service implementations.
kind/bug: This is a bug in the Cilium logic.
kind/community-report: This was reported by a user in the Cilium community, eg via Slack.
sig/policy: Impacts whether traffic is allowed or denied based on user-defined policies.
Description
Is there an existing issue for this?
- I have searched the existing issues
What happened?
When using Cilium with Geneve DSR, requests from outside the cluster are dropped by a CiliumNetworkPolicy even though they are permitted with fromCIDR, as in the policy below.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: ingress-allow
  namespace: test
spec:
  endpointSelector:
    matchLabels:
      app.kubernetes.io/name: dsr-geneve-test
  ingress:
  - fromCIDR:
    - 172.20.0.0/16
Command
# Send a request from outside the cluster via the Service (NodePort)
$ curl 172.20.0.3:32759
# No response; the request times out
Kubernetes Resources
$ kubectl get all -n test -o wide && kubectl get no -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/dsr-geneve-test-67fb7f94bf-zf4nw 1/1 Running 0 2m4s 10.244.2.49 kind-worker2 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/dsr-geneve-test NodePort 10.96.237.227 <none> 80:32759/TCP 2m4s app.kubernetes.io/name=dsr-geneve-test
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/dsr-geneve-test 1/1 1 1 2m4s testhttpd quay.io/cybozu/testhttpd:0 app.kubernetes.io/name=dsr-geneve-test
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/dsr-geneve-test-67fb7f94bf 1 1 1 2m4s testhttpd quay.io/cybozu/testhttpd:0 app.kubernetes.io/name=dsr-geneve-test,pod-template-hash=67fb7f94bf
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-control-plane Ready control-plane 8m24s v1.27.3 172.20.0.5 <none> Debian GNU/Linux 11 (bullseye) 5.15.119-0515119-generic containerd://1.7.1
kind-worker Ready <none> 8m1s v1.27.3 172.20.0.3 <none> Debian GNU/Linux 11 (bullseye) 5.15.119-0515119-generic containerd://1.7.1
kind-worker2 Ready <none> 8m1s v1.27.3 172.20.0.2 <none> Debian GNU/Linux 11 (bullseye) 5.15.119-0515119-generic containerd://1.7.1
kind-worker3 NotReady <none> 8m2s v1.27.3 172.20.0.4 <none> Debian GNU/Linux 11 (bullseye) 5.15.119-0515119-generic containerd://1.7.1
hubble observe output
Nov 13 07:57:28.786: 172.20.0.1:44776 (ID:16777217) <> test/dsr-geneve-test-67fb7f94bf-zf4nw:8000 (ID:12730) from-overlay FORWARDED (TCP Flags: SYN)
Nov 13 07:57:28.786: 172.20.0.1:44776 (world) <> test/dsr-geneve-test-67fb7f94bf-zf4nw:8000 (ID:12730) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Nov 13 07:57:28.786: 172.20.0.1:44776 (world) <> test/dsr-geneve-test-67fb7f94bf-zf4nw:8000 (ID:12730) Policy denied DROPPED (TCP Flags: SYN)
The other related manifests (Namespace, Deployment, and Service) are as follows:
kind: Namespace
apiVersion: v1
metadata:
  name: test
  labels:
    name: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dsr-geneve-test
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: dsr-geneve-test
  template:
    metadata:
      labels:
        app.kubernetes.io/name: dsr-geneve-test
    spec:
      containers:
      - image: quay.io/cybozu/testhttpd:0 # refs: https://github.com/cybozu/neco-containers/tree/main/testhttpd
        name: testhttpd
        lifecycle:
          preStop:
            exec:
              command: ["sleep", "infinity"]
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: dsr-geneve-test
  namespace: test
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8000
  selector:
    app.kubernetes.io/name: dsr-geneve-test
Looking at the hubble observe output, the client's identity is world at the point where the packet is dropped by the policy, even though it is resolved as 16777217 in the preceding from-overlay trace. It seems that the client's identity is not re-derived from its IP address after the from-overlay stage.
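The expected resolution can be illustrated with a minimal longest-prefix-match sketch. This is not Cilium's actual datapath code; the `ipcache` dictionary is a simplified subset of the `cilium bpf ipcache list` output shown below, and `resolve_identity` is a hypothetical helper that mimics how an IP is expected to map to a security identity:

```python
import ipaddress

# Simplified subset of the ipcache (prefix -> numeric identity),
# taken from the `cilium bpf ipcache list` output in this report.
ipcache = {
    "172.20.0.0/16": 16777217,  # cidr:172.20.0.0/16, reserved:world
    "0.0.0.0/0": 2,             # reserved:world
}

def resolve_identity(ip: str) -> int:
    """Hypothetical longest-prefix-match lookup over the ipcache."""
    addr = ipaddress.ip_address(ip)
    best = None
    for prefix, identity in ipcache.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, identity)
    return best[1]

# The client 172.20.0.1 falls inside 172.20.0.0/16, so it should
# resolve to the CIDR identity 16777217, not to world (2).
print(resolve_identity("172.20.0.1"))  # 16777217
```

Under this model the policy-verdict trace should carry identity 16777217 and match the fromCIDR rule; the observed world (2) suggests the lookup result from from-overlay is lost on the Geneve DSR path.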
Cilium Version
Client: 1.14.3 252a99efbc 2023-10-18T18:21:56+03:00 go version go1.20.10 linux/amd64
Daemon: 1.14.3 252a99efbc 2023-10-18T18:21:56+03:00 go version go1.20.10 linux/amd64
Kernel Version
Linux kind-worker2 5.15.119-0515119-generic #202307080749 SMP Sat Jul 8 07:57:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Kubernetes Version
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:58:30Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3", GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean", BuildDate:"2023-06-15T00:36:28Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"linux/amd64"}
Sysdump
No response
Relevant log output
# cilium identity list and cilium bpf ipcache list
$ cilium identity list
ID LABELS
1 reserved:host
2 reserved:world
...
16777217 cidr:172.20.0.0/16
reserved:world
$ cilium bpf ipcache list
IP PREFIX/ADDRESS IDENTITY
172.20.0.3/32 identity=6 encryptkey=0 tunnelendpoint=0.0.0.0
fd00:10:244:3::dd0e/128 identity=2865 encryptkey=0 tunnelendpoint=172.20.0.3
10.244.2.233/32 identity=4 encryptkey=0 tunnelendpoint=0.0.0.0
10.244.3.51/32 identity=4 encryptkey=0 tunnelendpoint=172.20.0.3
fe80::944c:b2ff:fe07:bafc/128 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0
10.244.3.59/32 identity=6 encryptkey=0 tunnelendpoint=172.20.0.3
172.20.0.2/32 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0
fd00:10:244::3617/128 identity=6 encryptkey=0 tunnelendpoint=172.20.0.5
fd00:10:244:2::7a4d/128 identity=12730 encryptkey=0 tunnelendpoint=0.0.0.0
fd00:10:244:3::1204/128 identity=53685 encryptkey=0 tunnelendpoint=172.20.0.3
fd00:10:244:3::929d/128 identity=4 encryptkey=0 tunnelendpoint=172.20.0.3
10.244.0.83/32 identity=4 encryptkey=0 tunnelendpoint=172.20.0.5
10.244.3.210/32 identity=53685 encryptkey=0 tunnelendpoint=172.20.0.3
172.20.0.5/32 identity=7 encryptkey=0 tunnelendpoint=0.0.0.0
fc00:c111::3/128 identity=6 encryptkey=0 tunnelendpoint=0.0.0.0
fd00:10:244::8732/128 identity=4 encryptkey=0 tunnelendpoint=172.20.0.5
10.244.3.54/32 identity=13908 encryptkey=0 tunnelendpoint=172.20.0.3
10.244.3.69/32 identity=53685 encryptkey=0 tunnelendpoint=172.20.0.3
0.0.0.0/0 identity=2 encryptkey=0 tunnelendpoint=0.0.0.0
fd00:10:244:3::9765/128 identity=53685 encryptkey=0 tunnelendpoint=172.20.0.3
fc00:c111::2/128 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0
10.244.0.72/32 identity=6 encryptkey=0 tunnelendpoint=172.20.0.5
10.244.2.49/32 identity=12730 encryptkey=0 tunnelendpoint=0.0.0.0
fc00:c111::5/128 identity=6 encryptkey=0 tunnelendpoint=0.0.0.0
fd00:10:244:2::abe1/128 identity=4 encryptkey=0 tunnelendpoint=0.0.0.0
10.244.2.219/32 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0
172.20.0.0/16 identity=16777217 encryptkey=0 tunnelendpoint=0.0.0.0
fd00:10:244:3::8bdf/128 identity=13908 encryptkey=0 tunnelendpoint=172.20.0.3
fd00:10:244:3::d7d4/128 identity=6 encryptkey=0 tunnelendpoint=172.20.0.3
::/0 identity=2 encryptkey=0 tunnelendpoint=0.0.0.0
10.244.3.21/32 identity=2865 encryptkey=0 tunnelendpoint=172.20.0.3
fd00:10:244:2::e49/128 identity=1 encryptkey=0 tunnelendpoint=0.0.0.0
# cilium config related to DSR mode
$ cilium config view | grep dsr
bpf-lb-dsr-dispatch geneve
bpf-lb-mode dsr
Anything else?
My colleague @terassyi and I have been working on this issue, and we have submitted PR #29155.
Code of Conduct
- I agree to follow this project's Code of Conduct