Closed as not planned
Labels
area/CI: Continuous Integration testing issue or flake
area/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
Description
Test Name
Suite-k8s-1.26.K8sDatapathConfig High-scale IPcache Test ingress policy enforcement
Failure Output
Error: bpf obj get (/sys/fs/bpf/tc/globals/cilium_world_cidrs4): No such file or directory
command terminated with exit code 255
Stack Trace
/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Expected command: kubectl exec -n kube-system cilium-bq5vd -- bpftool map update pinned /sys/fs/bpf/tc/globals/cilium_world_cidrs4 key 0 0 0 0 0 0 0 0 value 1
To succeed, but it failed:
Exitcode: 255
Err: exit status 255
Stdout:
Stderr:
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Error: bpf obj get (/sys/fs/bpf/tc/globals/cilium_world_cidrs4): No such file or directory
command terminated with exit code 255
/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/helpers/kubectl.go:2907
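For anyone triaging, a quick way to confirm whether the agent ever created the pin is to inspect the pin directory inside the pod. This is a hedged diagnostic sketch, not part of the failing test; the pod name cilium-bq5vd is taken from this run and will differ elsewhere:

# List the tc globals pin directory to see which maps the agent created.
kubectl exec -n kube-system cilium-bq5vd -c cilium-agent -- \
  ls -l /sys/fs/bpf/tc/globals/

# If the pin exists, bpftool can show the map's metadata directly.
kubectl exec -n kube-system cilium-bq5vd -c cilium-agent -- \
  bpftool map show pinned /sys/fs/bpf/tc/globals/cilium_world_cidrs4

Passing -c cilium-agent explicitly also avoids the "Defaulted container" notice seen in the stderr above.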
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 1
⚠️ Number of "level=warning" in logs: 12
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 4 errors/warnings:
Attempt to remove non-existing IP from ipcache layer
Auto-disabling \
UpdateIdentities: Skipping Delete of a non-existing identity
removing identity not added to the identity manager!
Cilium pods: [cilium-bq5vd cilium-qjx2d]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod Ingress Egress
coredns-6d97d5ddb-tbhwt false false
Cilium agent 'cilium-bq5vd': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-qjx2d': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 22 Failed 0
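The 1 error and 12 warnings counted above come from the agent logs of this run. If the sysdump ZIP is unavailable, something like the following (pod name from this run) should pull the matching lines for comparison against the "Top 4 errors/warnings" list:

# Extract error/warning lines from the agent log of the failing pod.
kubectl -n kube-system logs cilium-bq5vd -c cilium-agent | grep -E 'level=(error|warning)'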
Standard Error
20:28:35 STEP: Installing Cilium
20:28:37 STEP: Waiting for Cilium to become ready
20:28:49 STEP: Validating if Kubernetes DNS is deployed
20:28:49 STEP: Checking if deployment is ready
20:28:49 STEP: Checking if kube-dns service is plumbed correctly
20:28:49 STEP: Checking if pods have identity
20:28:49 STEP: Checking if DNS can resolve
20:28:54 STEP: Kubernetes DNS is not ready: %!s(<nil>)
20:28:54 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
20:28:54 STEP: Waiting for Kubernetes DNS to become operational
20:28:54 STEP: Checking if deployment is ready
20:28:54 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
20:28:55 STEP: Checking if deployment is ready
20:28:55 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
20:28:56 STEP: Checking if deployment is ready
20:28:56 STEP: Checking if kube-dns service is plumbed correctly
20:28:56 STEP: Checking if pods have identity
20:28:56 STEP: Checking if DNS can resolve
20:29:00 STEP: Validating Cilium Installation
20:29:00 STEP: Performing Cilium controllers preflight check
20:29:00 STEP: Performing Cilium health check
20:29:00 STEP: Performing Cilium status preflight check
20:29:00 STEP: Checking whether host EP regenerated
20:29:08 STEP: Performing Cilium service preflight check
20:29:08 STEP: Performing K8s service preflight check
20:29:14 STEP: Waiting for cilium-operator to be ready
20:29:14 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
20:29:14 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
20:29:14 STEP: Making sure all endpoints are in ready state
FAIL: Expected command: kubectl exec -n kube-system cilium-bq5vd -- bpftool map update pinned /sys/fs/bpf/tc/globals/cilium_world_cidrs4 key 0 0 0 0 0 0 0 0 value 1
To succeed, but it failed:
Exitcode: 255
Err: exit status 255
Stdout:
Stderr:
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Error: bpf obj get (/sys/fs/bpf/tc/globals/cilium_world_cidrs4): No such file or directory
command terminated with exit code 255
=== Test Finished at 2023-05-22T20:29:18Z====
20:29:18 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
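Since the error is "No such file or directory" on the pin rather than a failed update, this looks like a race: the test's bpftool call runs before the agent has created cilium_world_cidrs4. A minimal mitigation sketch for the test helper, assuming the pin does eventually appear once the agent finishes datapath setup, would be to poll for it before updating:

# Poll up to ~30s for the pin to appear, then run the same update the test does.
for i in $(seq 1 30); do
  kubectl exec -n kube-system cilium-bq5vd -c cilium-agent -- \
    test -e /sys/fs/bpf/tc/globals/cilium_world_cidrs4 && break
  sleep 1
done
kubectl exec -n kube-system cilium-bq5vd -c cilium-agent -- \
  bpftool map update pinned /sys/fs/bpf/tc/globals/cilium_world_cidrs4 \
  key 0 0 0 0 0 0 0 0 value 1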
Resources
- Jenkins URL: https://jenkins.cilium.rocks/job/Cilium-PR-K8s-1.26-kernel-net-next/47/
- ZIP file(s):
Anything else?
No response