Closed
Labels
area/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.
area/encryption: Impacts encryption support such as IPSec, WireGuard, or kTLS.
feature/ipv6: Relates to IPv6 protocol support.
kind/bug: This is a bug in the Cilium logic.
Description
Hit an issue where cilium-health was reporting failing IPv6 host netns -> remote pod checks. A closer look revealed that a second reply from the remote health check pod got masqueraded by Cilium's rule:
[302:21728] -A CILIUM_POST_nat ! -s fd02::/120 ! -d fd02::/120 -o cilium_host -m comment --comment "cilium host->cluster masquerade" -j SNAT --to-source fc00:f853:ccd:e793::5
According to @pchaigno, the reply should not hit the host networking stack; instead, it should have been redirected to cilium_vxlan, bypassing the rule.
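One quick way to confirm that replies are taking this rule is to watch its packet counters in `ip6tables-save -c` output. A minimal sketch (the `masquerade_hits` helper is hypothetical, not part of Cilium; the rule text is the one quoted above):

```python
import re

# Matches counter-prefixed rules as printed by `ip6tables-save -c`, e.g.
# "[302:21728] -A CILIUM_POST_nat ... -j SNAT --to-source ..."
RULE_RE = re.compile(r"^\[(\d+):(\d+)\]\s+-A\s+(\S+)\s+(.*)$")

def masquerade_hits(save_output, comment="cilium host->cluster masquerade"):
    """Return (packets, bytes) counters of the rule carrying `comment`,
    or None if no such rule is present."""
    for line in save_output.splitlines():
        m = RULE_RE.match(line.strip())
        if m and comment in m.group(4):
            return int(m.group(1)), int(m.group(2))
    return None

# The rule from this report; non-zero counters mean replies are being SNATed:
rule = ('[302:21728] -A CILIUM_POST_nat ! -s fd02::/120 ! -d fd02::/120 '
        '-o cilium_host -m comment --comment "cilium host->cluster masquerade" '
        '-j SNAT --to-source fc00:f853:ccd:e793::5')
print(masquerade_hits(rule))  # -> (302, 21728)
```

Running this against two consecutive `ip6tables-save -c` snapshots while the probe fails shows whether the counters advance in step with the health checks.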
To reproduce (taken from https://github.com/cilium/cilium/actions/runs/4024902232/jobs/6917456832):
cd cilium/
./contrib/scripts/kind.sh "" 2 "" "" "iptables" "dual"
helm template --validate ./install/kubernetes/cilium --namespace=kube-system \
  --set image.repository=quay.io/cilium/cilium-ci \
  --set image.tag=28a0cd6f202b70ab81b8cabd5b5fb3cae6f63210 \
  --set image.useDigest=false \
  --set operator.image.repository=quay.io/cilium/operator \
  --set operator.image.tag=28a0cd6f202b70ab81b8cabd5b5fb3cae6f63210 \
  --set operator.image.suffix=-ci \
  --set operator.image.useDigest=false \
  --set preflight.image.repository=quay.io/cilium/cilium-ci \
  --set preflight.image.tag=28a0cd6f202b70ab81b8cabd5b5fb3cae6f63210 \
  --set preflight.image.useDigest=false \
  --set hubble.enabled=true \
  --set hubble.listenAddress=:4244 \
  --set hubble.eventBufferCapacity=65535 \
  --set hubble.relay.image.repository=quay.io/cilium/hubble-relay-ci \
  --set hubble.relay.image.tag=28a0cd6f202b70ab81b8cabd5b5fb3cae6f63210 \
  --set hubble.relay.image.useDigest=false \
  --set ipv4.enabled=true \
  --set ipv6.enabled=true \
  --set ipv4NativeRoutingCIDR=10.0.0.0/8 \
  --set ipv6NativeRoutingCIDR=fd02::/112 \
  --set ipam.operator.clusterPoolIPv6PodCIDR=fd02::/112 \
  --set k8s.requireIPv4PodCIDR=true \
  --set encryption.enabled=true \
  --set bpf.masquerade=false \
  --set bpf.preallocateMaps=false \
  --set kubeProxyReplacement=disabled \
  --set tunnel=vxlan \
  --set bandwidthManager.enabled=false \
  --set sessionAffinity=false \
  --set enableCnpStatusUpdates=true \
  --set enableCiliumEndpointSlice=true \
  --set etcd.leaseTTL=30s \
  --set logSystemLoad=true \
  --set debug.enabled=true \
  --set debug.verbose=flow \
  --set pprof.enabled=true \
  > cilium-173e2a16d47c0211.yaml
k apply -f test/k8s/manifests/ipsec_secret.yaml -n kube-system
k apply -f cilium-173e2a16d47c0211.yaml
Then exec into any cilium-agent pod, and run:
cilium-health status -o json --probe