Description
Is there an existing issue for this?
- I have searched the existing issues
What happened?
The pods for my various services fail to start because their health checks fail: Cilium drops the traffic with the reason "Missed tail call". I would expect that, by default, Cilium would not drop health checks coming from the host to the pods.
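For reference, the drops can be observed from the Cilium agent pod. This is a sketch assuming the agent runs as the standard `cilium` DaemonSet in the `kube-system` namespace:

```shell
# Stream only drop events from the agent on one node; drops with
# reason "Missed tail call" should show up here while a pod's
# health check is being probed.
kubectl -n kube-system exec ds/cilium -- cilium monitor --type drop

# Cumulative drop counters by reason, for a rough frequency check.
kubectl -n kube-system exec ds/cilium -- cilium metrics list | grep drop
```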
Cilium Version
cilium-cli: v0.15.5 compiled with go1.20.7 on darwin/arm64
cilium image (default): v1.14.0
cilium image (stable): v1.14.1
cilium image (running): unknown. Unable to obtain cilium version, no cilium pods found in namespace "kube-system"
Kernel Version
Linux minikube 6.4.11-200.fc38.aarch64 #1 SMP PREEMPT_DYNAMIC Wed Aug 16 18:01:59 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
Kubernetes Version
Client Version: v1.28.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.27.4
via
minikube start --memory 8000 --cpus 4 --network-plugin=cni --driver=podman
minikube v1.31.2 on Darwin 13.4.1 (arm64)
Sysdump
cilium-sysdump-20230826-081252.zip
Relevant log output
No response
Anything else?
This is a semi-frequent occurrence on my end. When I run Flux and have it spin up a set of operators (such as cert-manager, the Flink operator, and External Secrets), the chance of this happening is (subjectively) around 75%. The best workaround I have is to tear down the Minikube VM and rebuild everything.
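The teardown-and-rebuild workaround described above, sketched as commands (the `minikube start` flags are the ones from this report; reinstalling Cilium via cilium-cli is an assumption about how the cluster was originally set up):

```shell
# Tear down the existing Minikube VM entirely.
minikube delete

# Recreate it with the same flags as before.
minikube start --memory 8000 --cpus 4 --network-plugin=cni --driver=podman

# Reinstall Cilium into the fresh cluster.
cilium install
```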
Code of Conduct
- I agree to follow this project's Code of Conduct