CI: K8sDatapathConfig High-scale IPcache Test ingress policy enforcement #25652

@maintainer-s-little-helper

Description

Test Name

K8sDatapathConfig High-scale IPcache Test ingress policy enforcement

Failure Output

FAIL: Expected command: kubectl exec -n kube-system cilium-qhvwq -- bpftool map update pinned /sys/fs/bpf/tc/globals/cilium_world_cidrs4 key 0 0 0 0 0 0 0 0 value 1 

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Expected command: kubectl exec -n kube-system cilium-qhvwq -- bpftool map update pinned /sys/fs/bpf/tc/globals/cilium_world_cidrs4 key 0 0 0 0 0 0 0 0 value 1 
To succeed, but it failed:
Exitcode: 255 
Err: exit status 255
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
	 Error: bpf obj get (/sys/fs/bpf/tc/globals/cilium_world_cidrs4): No such file or directory
	 command terminated with exit code 255
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/helpers/kubectl.go:2907
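
For hand triage, a minimal sketch of how to check the map state, assuming the pod name from this run (cilium-qhvwq; substitute the pod from your own cluster):

    # Check whether the world-CIDRs map is pinned where the test expects it.
    kubectl exec -n kube-system cilium-qhvwq -c cilium-agent -- \
        bpftool map show pinned /sys/fs/bpf/tc/globals/cilium_world_cidrs4

    # If that fails with "No such file or directory", list what the agent
    # has actually pinned under the tc globals directory:
    kubectl exec -n kube-system cilium-qhvwq -c cilium-agent -- \
        ls /sys/fs/bpf/tc/globals/

Passing -c cilium-agent explicitly (as the AfterFailed commands below do) also avoids the "Defaulted container" notice seen in the stderr above.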

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 2
⚠️  Number of "level=warning" in logs: 12
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 4 errors/warnings:
Attempt to remove non-existing IP from ipcache layer
removing identity not added to the identity manager!
Auto-disabling \
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-qhvwq cilium-xpnjj]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                       Ingress   Egress
coredns-6d97d5ddb-k4z4h   false     false
Cilium agent 'cilium-qhvwq': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-xpnjj': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 22 Failed 0


Standard Error

11:48:05 STEP: Installing Cilium
11:48:07 STEP: Waiting for Cilium to become ready
11:48:19 STEP: Validating if Kubernetes DNS is deployed
11:48:19 STEP: Checking if deployment is ready
11:48:19 STEP: Checking if kube-dns service is plumbed correctly
11:48:19 STEP: Checking if pods have identity
11:48:19 STEP: Checking if DNS can resolve
11:48:24 STEP: Kubernetes DNS is not ready: %!s(<nil>)
11:48:24 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
11:48:25 STEP: Waiting for Kubernetes DNS to become operational
11:48:25 STEP: Checking if deployment is ready
11:48:25 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:48:26 STEP: Checking if deployment is ready
11:48:26 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:48:27 STEP: Checking if deployment is ready
11:48:27 STEP: Checking if kube-dns service is plumbed correctly
11:48:27 STEP: Checking if pods have identity
11:48:27 STEP: Checking if DNS can resolve
11:48:30 STEP: Validating Cilium Installation
11:48:30 STEP: Performing Cilium status preflight check
11:48:30 STEP: Performing Cilium health check
11:48:30 STEP: Performing Cilium controllers preflight check
11:48:30 STEP: Checking whether host EP regenerated
11:48:45 STEP: Performing Cilium service preflight check
11:48:45 STEP: Performing K8s service preflight check
11:48:45 STEP: Waiting for cilium-operator to be ready
11:48:45 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
11:48:45 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
11:48:45 STEP: Making sure all endpoints are in ready state
FAIL: Expected command: kubectl exec -n kube-system cilium-qhvwq -- bpftool map update pinned /sys/fs/bpf/tc/globals/cilium_world_cidrs4 key 0 0 0 0 0 0 0 0 value 1 
To succeed, but it failed:
Exitcode: 255 
Err: exit status 255
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
	 Error: bpf obj get (/sys/fs/bpf/tc/globals/cilium_world_cidrs4): No such file or directory
	 command terminated with exit code 255
	 

=== Test Finished at 2023-05-24T11:48:49Z====
11:48:49 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
11:48:50 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-67ff49cd99-2drnp           0/1     Running   0          53m   10.0.0.3        k8s1   <none>           <none>
	 cilium-monitoring   prometheus-8c7df94b4-pjp8g         1/1     Running   0          53m   10.0.0.162      k8s1   <none>           <none>
	 kube-system         cilium-operator-68c854584b-84w2v   1/1     Running   0          48s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-68c854584b-8vxs6   1/1     Running   0          48s   192.168.56.13   k8s3   <none>           <none>
	 kube-system         cilium-qhvwq                       1/1     Running   0          48s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-xpnjj                       1/1     Running   0          48s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         coredns-6d97d5ddb-k4z4h            1/1     Running   0          30s   10.0.0.72       k8s2   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0          61m   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0          61m   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0          61m   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0          61m   192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-78jbf                 1/1     Running   0          53m   192.168.56.13   k8s3   <none>           <none>
	 kube-system         log-gatherer-94wfw                 1/1     Running   0          53m   192.168.56.12   k8s2   <none>           <none>
	 kube-system         log-gatherer-9jrwg                 1/1     Running   0          53m   192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-hptht               1/1     Running   0          54m   192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-lh895               1/1     Running   0          54m   192.168.56.13   k8s3   <none>           <none>
	 kube-system         registry-adder-r9t6q               1/1     Running   0          54m   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-qhvwq cilium-xpnjj]
cmd: kubectl exec -n kube-system cilium-qhvwq -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.26 (v1.26.3) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict   [enp0s10 192.168.58.12, enp0s16 192.168.59.15, enp0s3 10.0.2.15, enp0s8 192.168.56.12 (Direct Routing), enp0s9 192.168.57.12]
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 Cilium:                  Ok   1.14.0-dev (v1.14.0-dev-0b7c29d1)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 3/254 allocated from 10.0.0.0/24, 
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            BPF
	 Masquerading:            BPF   [enp0s10, enp0s16, enp0s3, enp0s8, enp0s9]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       26/26 healthy
	 Proxy Status:            OK, ip 10.0.0.138, 0 redirects active on ports 10000-20000, Envoy: embedded
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok              Current/Max Flows: 12589/65535 (19.21%), Flows/s: 359.34   Metrics: Disabled
	 Encryption:              Disabled        
	 Cluster health:          2/2 reachable   (2023-05-24T11:48:37Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-qhvwq -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6   IPv4        STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                  
	 408        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                      ready   
	                                                            reserved:host                                                                                           
	 883        Disabled           Disabled          112        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system          10.0.0.72   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                         
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                             
	                                                            k8s:k8s-app=kube-dns                                                                                    
	 1354       Disabled           Disabled          4          reserved:health                                                                     10.0.0.16   ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-xpnjj -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.26 (v1.26.3) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict   [enp0s10 192.168.58.11, enp0s16 192.168.59.15, enp0s3 10.0.2.15, enp0s8 192.168.56.11 (Direct Routing), enp0s9 192.168.57.11]
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 Cilium:                  Ok   1.14.0-dev (v1.14.0-dev-0b7c29d1)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 2/254 allocated from 10.0.1.0/24, 
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            BPF
	 Masquerading:            BPF   [enp0s10, enp0s16, enp0s3, enp0s8, enp0s9]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       22/22 healthy
	 Proxy Status:            OK, ip 10.0.1.162, 0 redirects active on ports 10000-20000, Envoy: embedded
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok              Current/Max Flows: 44646/65535 (68.13%), Flows/s: 1275.80   Metrics: Disabled
	 Encryption:              Disabled        
	 Cluster health:          2/2 reachable   (2023-05-24T11:48:44Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-xpnjj -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                   IPv6   IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                    
	 972        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                        ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                 
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                               
	                                                            reserved:host                                                                             
	 3529       Disabled           Disabled          4          reserved:health                                                      10.0.1.105   ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
11:49:30 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
11:49:30 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|823c75f0_K8sDatapathConfig_High-scale_IPcache_Test_ingress_policy_enforcement.zip]]
11:49:34 STEP: Running AfterAll block for EntireTestsuite K8sDatapathConfig High-scale IPcache


ZIP Links:

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//133/artifact/823c75f0_K8sDatapathConfig_High-scale_IPcache_Test_ingress_policy_enforcement.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//133/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//133/artifact/test_results_Cilium-PR-K8s-1.26-kernel-net-next_133_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next/133/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

Metadata

Labels

ci/flake: This is a known failure that occurs in the tree. Please investigate me!
