Is there an existing issue for this?
- [x] I have searched the existing issues
What happened?
When applying a `NetworkPolicy` or a `CiliumNetworkPolicy` which only allows cluster-external traffic to `0.0.0.0/0`, Cilium continues to allow traffic to cluster-external IPv6 targets. This behaviour is not present for subnets of `0.0.0.0/0`, such as `0.0.0.0/1`, or when an `except` rule is present. Cilium seems to treat `0.0.0.0/0` differently from other prefixes.
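A quick way to see the difference from a pod selected by the policy (a minimal sketch; the pod name and target host are placeholders, and the pod is assumed to have curl and a reachable dual-stack target):

```sh
# testpod and example.com are placeholders; run against a pod covered by the "ipv4-only" policy below.
kubectl exec testpod -- curl -4 --max-time 5 https://example.com   # IPv4: allowed via 0.0.0.0/0, as expected
kubectl exec testpod -- curl -6 --max-time 5 https://example.com   # IPv6: also succeeds, although no IPv6 CIDR is allowed
```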
`NetworkPolicy` allowing IPv6 traffic:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ipv4-only
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
    - to:
        - podSelector: {}
    - to:
        - namespaceSelector: {}
```
Same `NetworkPolicy` with an `except` rule, blocking IPv6 traffic:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ipv4-except
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 0.0.0.0/8
    - to:
        - podSelector: {}
    - to:
        - namespaceSelector: {}
```
Another `NetworkPolicy` not using `0.0.0.0/0`, blocking IPv6 traffic:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ipv4-only-subnet
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/1
    - to:
        - podSelector: {}
    - to:
        - namespaceSelector: {}
```
`CiliumNetworkPolicy` allowing IPv6 traffic:
```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: ipv4-only
spec:
  endpointSelector: {}
  egress:
    - toCIDRSet:
        - cidr: 0.0.0.0/0
    - toEndpoints:
        - {}
    - toEntities:
        - cluster
```
Same `CiliumNetworkPolicy` with an `except` rule, blocking IPv6 traffic:
```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: ipv4-except
spec:
  endpointSelector: {}
  egress:
    - toCIDRSet:
        - cidr: 0.0.0.0/0
          except:
            - 0.0.0.0/8
    - toEndpoints:
        - {}
    - toEntities:
        - cluster
```
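Since Hubble is enabled in this cluster (see the status output below), the unexpectedly forwarded IPv6 flows should also be observable there; a rough sketch, with the pod name again a placeholder:

```sh
# Show egress flows from the test pod and their verdicts; the IPv6 flows show up as FORWARDED.
hubble observe --from-pod default/testpod --verdict FORWARDED
```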
Expected behaviour
I would expect Cilium to treat all IPv4 prefixes the same and not to special-case `0.0.0.0/0` by allowing IPv6 communication as well.
Cilium Version
```
Client: 1.12.4 6eaecaf 2022-11-16T05:45:01+00:00 go version go1.18.8 linux/amd64
Daemon: 1.12.4 6eaecaf 2022-11-16T05:45:01+00:00 go version go1.18.8 linux/amd64
```
Kernel Version
```
Linux k8s0-controleplane0 5.19.0-2-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 5.19.11-1 (2022-09-24) x86_64 x86_64 x86_64 GNU/Linux
```
Kubernetes Version
```
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:28:30Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:49:09Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
```
Sysdump
No response
Relevant log output
No response
Anything else?
Output of `cilium status`:

```
KVStore: Ok Disabled
Kubernetes: Ok 1.25 (v1.25.3) [linux/amd64]
Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Strict [internet 192.0.2.1 2001:db8::1 (Direct Routing)]
Host firewall: Enabled [internet]
CNI Chaining: none
Cilium: Ok 1.12.4 (v1.12.4-6eaecaf)
NodeMonitor: Listening for events on 4 CPUs with 64x4096 of shared memory
Cilium health daemon: Ok
IPAM: IPv4: 4/254 allocated from 10.0.2.0/24, IPv6: 4/65534 allocated from 2001:db8:1f::2:0/112
BandwidthManager: EDT with BPF [BBR] [internet]
Host Routing: BPF
Masquerading: BPF [internet] 10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
Controller Status: 30/30 healthy
Proxy Status: OK, ip 10.0.2.231, 0 redirects active on ports 10000-20000
Global Identity Range: min 256, max 65535
Hubble: Ok Current/Max Flows: 4095/4095 (100.00%), Flows/s: 45.90 Metrics: Ok
Encryption: Disabled
Cluster health: 3/3 reachable (2022-12-08T00:45:43Z)
```
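A possible workaround, following from the observation above that subnets of `0.0.0.0/0` are handled correctly, might be to split the default route into its two halves. This is an untested sketch, not a verified fix:

```yaml
# Hypothetical egress rule covering all of IPv4 without using 0.0.0.0/0 itself.
egress:
  - to:
      - ipBlock:
          cidr: 0.0.0.0/1    # 0.0.0.0 - 127.255.255.255
      - ipBlock:
          cidr: 128.0.0.0/1  # 128.0.0.0 - 255.255.255.255
```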
Code of Conduct
- [x] I agree to follow this project's Code of Conduct