Description
Is there an existing issue for this?
- I have searched the existing issues
What happened?
I am running Cilium with kube-proxy disabled (installed using RKE2 v1.20.12).
```
root@a172-25-172-204:/home/cilium# cilium status
KVStore:                Ok   Disabled
Kubernetes:             Ok   1.20 (v1.20.12+rke2r1) [linux/amd64]
Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:   Strict   [eth0 (Direct Routing)]
Cilium:                 Ok   OK
NodeMonitor:            Listening for events on 4 CPUs with 64x4096 of shared memory
Cilium health daemon:   Ok
IPAM:                   IPv4: 23/255 allocated from 172.23.128.0/24,
BandwidthManager:       Disabled
Host Routing:           Legacy
Masquerading:           BPF   [eth0]   172.23.128.0/24
Controller Status:      102/102 healthy
Proxy Status:           OK, ip 172.23.128.26, 0 redirects active on ports 10000-20000
Hubble:                 Ok   Current/Max Flows: 4096/4096 (100.00%), Flows/s: 34.77   Metrics: Disabled
Cluster health:         4/4 reachable   (2022-03-02T05:54:47Z)
root@a172-25-172-204:/home/cilium#
```
```
root@172.25.172.204:~# kubectl get pods -A | grep kube-proxy
root@172.25.172.204:~#
```
I have a global clusterwide deny policy that blocks all ingress to the cluster except traffic from cluster entities:
```yaml
---
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "global-deny-all"
spec:
  description: "block everything coming from outside the cluster"
  endpointSelector: {}
  ingress:
  - fromEntities:
    - cluster
```
I deployed an app, created a Service for it, and exposed it with type LoadBalancer:
```
root@172.25.172.204:~# kubectl get svc -n web
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
nginx   LoadBalancer   172.23.60.147   172.25.172.219   4080:32129/TCP   5h52m
root@172.25.172.204:~#
```
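For reference, a Service matching that listing would look roughly like this. This is a sketch reconstructed from the output above: the name, namespace, type, and service port come from the `kubectl get svc` listing; the selector and target port are assumptions.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: web
spec:
  type: LoadBalancer
  selector:
    app: nginx        # assumed: matches the nginx Deployment's pod labels
  ports:
  - port: 4080        # service port from the listing above
    targetPort: 80    # assumed container port
    protocol: TCP
```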
I then allowed access to the app from only one IP address by adding this CiliumNetworkPolicy (CNP):
```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: "allow-nginx"
  namespace: web
spec:
  endpointSelector:
    matchLabels:
      app: nginx
  ingress:
  - fromCIDR:
    - 198.18.135.71/32
```
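As an aside, the same allow rule can also be scoped to the application port. This is a hedged sketch using the standard CNP `toPorts` field; it is not required to reproduce the behavior described here:

```yaml
  ingress:
  - fromCIDR:
    - 198.18.135.71/32
    toPorts:
    - ports:
      - port: "80"      # assumed container port of the nginx pod
        protocol: TCP
```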
I expect that access to the app from any machine outside the cluster other than the allowed one is blocked. However, I am seeing it allowed. My Hubble logs are:
```
root@172.25.172.204:~# kubectl -n kube-system exec cilium-2rp4t -- hubble observe --since 3m --pod web/nginx-695fc7b69-jv4q7
TIMESTAMP             SOURCE                          DESTINATION                     TYPE          VERDICT     SUMMARY
Mar  2 05:49:22.550   172.23.129.66:55166             web/nginx-695fc7b69-jv4q7:80    L3-Only       FORWARDED   TCP Flags: SYN
Mar  2 05:49:22.550   172.23.129.66:55166             web/nginx-695fc7b69-jv4q7:80    to-endpoint   FORWARDED   TCP Flags: SYN
Mar  2 05:49:22.550   web/nginx-695fc7b69-jv4q7:80    172.23.129.66:55166             to-overlay    FORWARDED   TCP Flags: SYN, ACK
Mar  2 05:49:22.553   172.23.129.66:55166             web/nginx-695fc7b69-jv4q7:80    to-endpoint   FORWARDED   TCP Flags: ACK
Mar  2 05:49:22.553   172.23.129.66:55166             web/nginx-695fc7b69-jv4q7:80    to-endpoint   FORWARDED   TCP Flags: ACK, PSH
Mar  2 05:49:22.553   web/nginx-695fc7b69-jv4q7:80    172.23.129.66:55166             to-overlay    FORWARDED   TCP Flags: ACK, PSH
Mar  2 05:49:22.556   172.23.129.66:55166             web/nginx-695fc7b69-jv4q7:80    to-endpoint   FORWARDED   TCP Flags: ACK, FIN
Mar  2 05:49:22.556   web/nginx-695fc7b69-jv4q7:80    172.23.129.66:55166             to-overlay    FORWARDED   TCP Flags: ACK, FIN
Mar  2 05:49:22.558   172.23.129.66:55166             web/nginx-695fc7b69-jv4q7:80    to-endpoint   FORWARDED   TCP Flags: ACK
```
The policy is sometimes applied correctly, but sometimes it is not. The unexpected behavior often starts after I modify the CNP or pods are rescheduled onto other nodes, and I am not sure what I might be missing here.
This worked fine with kube-proxy enabled: the global deny-all always behaved as expected. With kube-proxy, I had to set externalTrafficPolicy: Local on the Service to avoid SNAT and preserve the client IP for CNP evaluation. I am now testing with kube-proxy disabled to see whether the CNP works without that externalTrafficPolicy change. However, the global deny-all does not behave as expected either: even without any CNP allowing the traffic, I still see it reaching the app, and I am confused as to why the global CCNP is not dropping it with kube-proxy disabled.
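For reference, the kube-proxy setup used a Service spec along these lines. Only the externalTrafficPolicy field is the relevant difference; the rest is the same sketch as above:

```yaml
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP (no SNAT)
```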
Cilium Version
```
Client: 1.9.6 38dd27a 2021-04-20T14:57:11-07:00 go version go1.15.11 linux/amd64
Daemon: 1.9.6 38dd27a 2021-04-20T14:57:11-07:00 go version go1.15.11 linux/amd64
```
Kernel Version
5.4.158-5.4.7-amd64
Kubernetes Version
```
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.12+rke2r1", GitCommit:"4bf2e32bb2b9fdeea19ff7cdc1fb51fb295ec407", GitTreeState:"clean", BuildDate:"2021-10-28T16:49:46Z", GoVersion:"go1.15.15b5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.12+rke2r1", GitCommit:"4bf2e32bb2b9fdeea19ff7cdc1fb51fb295ec407", GitTreeState:"clean", BuildDate:"2021-10-28T16:49:46Z", GoVersion:"go1.15.15b5", Compiler:"gc", Platform:"linux/amd64"}
```
Sysdump
No response
Relevant log output
No response
Anything else?
No response
Code of Conduct
- I agree to follow this project's Code of Conduct