CI: K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications #30802

@maintainer-s-little-helper

Description

Test Name

K8sDatapathConfig MonitorAggregation Checks that monitor aggregation restricts notifications

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-tqqd5 policy revision: cannot get the revision 
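The "cannot get the revision" error means the harness could not parse a policy revision number out of the agent's `cilium policy get` output while applying `l3-policy-demo.yaml`. As a rough sketch of that step (the sample JSON below is illustrative, not actual output from this run, and the `sed` extraction is a stand-in for the harness's own parser):

```shell
# In the real test this JSON comes from:
#   kubectl exec -n kube-system cilium-tqqd5 -c cilium-agent -- cilium policy get -o json
# Here we use a hypothetical sample to show what field the harness reads.
sample='{"policy": [], "revision": 42}'

# Extract the numeric "revision" field; an empty result here corresponds
# to the "cannot get the revision" failure in the log above.
revision=$(printf '%s' "$sample" | sed -n 's/.*"revision": \([0-9]*\).*/\1/p')
echo "$revision"
```

The harness waits for this revision to advance on every Cilium pod after applying a policy, so any hiccup reaching the agent or any unparsable output at that moment fails the test before the policy is actually exercised.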

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-tqqd5 policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc00012b3e0>: {
        s: "Cannot retrieve cilium pod cilium-tqqd5 policy revision: cannot get the revision ",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:718

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Waiting for k8s node information
Unable to get node resource
Cilium pods: [cilium-4xgn6 cilium-tqqd5]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testds-vgmvq                  false     false
grafana-bd774d7bd-kjkvf       false     false
prometheus-598dddcc7c-6bzpk   false     false
coredns-86d4d67667-sj828      false     false
test-k8s2-7b4f7b4586-9w9mx    false     false
testclient-2h6gk              false     false
Cilium agent 'cilium-4xgn6': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-tqqd5': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

01:42:40 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig
01:42:40 STEP: Ensuring the namespace kube-system exists
01:42:40 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
01:42:40 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
01:42:40 STEP: Installing Cilium
01:42:41 STEP: Waiting for Cilium to become ready
01:43:22 STEP: Restarting unmanaged pods coredns-86d4d67667-mgh2j in namespace kube-system
01:43:22 STEP: Validating if Kubernetes DNS is deployed
01:43:22 STEP: Checking if deployment is ready
01:43:22 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
01:43:22 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
01:43:22 STEP: Waiting for Kubernetes DNS to become operational
01:43:22 STEP: Checking if deployment is ready
01:43:22 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:43:23 STEP: Checking if deployment is ready
01:43:23 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:43:24 STEP: Checking if deployment is ready
01:43:24 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:43:25 STEP: Checking if deployment is ready
01:43:25 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:43:26 STEP: Checking if deployment is ready
01:43:26 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
01:43:27 STEP: Checking if deployment is ready
01:43:27 STEP: Checking if kube-dns service is plumbed correctly
01:43:27 STEP: Checking if pods have identity
01:43:27 STEP: Checking if DNS can resolve
01:43:28 STEP: Validating Cilium Installation
01:43:28 STEP: Performing Cilium controllers preflight check
01:43:28 STEP: Performing Cilium health check
01:43:28 STEP: Performing Cilium status preflight check
01:43:28 STEP: Checking whether host EP regenerated
01:43:29 STEP: Performing Cilium service preflight check
01:43:29 STEP: Performing K8s service preflight check
01:43:31 STEP: Waiting for cilium-operator to be ready
01:43:32 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
01:43:33 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
01:43:35 STEP: Making sure all endpoints are in ready state
01:43:37 STEP: Launching cilium monitor on "cilium-tqqd5"
01:43:37 STEP: Creating namespace 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito
01:43:37 STEP: Deploying demo_ds.yaml in namespace 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito
01:43:38 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.23-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-tqqd5 policy revision: cannot get the revision 
Expected
    <*errors.errorString | 0xc00012b3e0>: {
        s: "Cannot retrieve cilium pod cilium-tqqd5 policy revision: cannot get the revision ",
    }
to be nil
=== Test Finished at 2024-02-16T01:43:48Z====
01:43:48 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
01:43:48 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   test-k8s2-7b4f7b4586-9w9mx         2/2     Running             0          12s     10.0.1.131      k8s2   <none>           <none>
	 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-2h6gk                   1/1     Running             0          12s     10.0.1.67       k8s2   <none>           <none>
	 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   testclient-94g29                   0/1     ContainerCreating   0          13s     <none>          k8s1   <none>           <none>
	 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-f4497                       0/2     ContainerCreating   0          13s     <none>          k8s1   <none>           <none>
	 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   testds-vgmvq                       2/2     Running             0          13s     10.0.1.78       k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-bd774d7bd-kjkvf            0/1     Running             0          70s     10.0.0.126      k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-598dddcc7c-6bzpk        1/1     Running             0          70s     10.0.0.79       k8s1   <none>           <none>
	 kube-system                                                       cilium-4xgn6                       1/1     Running             0          69s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-554c5fd95c-5hbj9   1/1     Running             0          69s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-554c5fd95c-t8xgc   1/1     Running             0          69s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-tqqd5                       1/1     Running             0          69s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-86d4d67667-sj828           1/1     Running             0          28s     10.0.0.206      k8s1   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running             0          5m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running             0          5m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running             0          5m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-4w2lm                   1/1     Running             0          4m40s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-v6jm5                   1/1     Running             0          2m16s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running             0          5m5s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-xn8sf                 1/1     Running             0          88s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-xxv2m                 1/1     Running             0          88s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-fqbqs               1/1     Running             0          2m7s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-jjlrp               1/1     Running             0          2m7s    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-4xgn6 cilium-tqqd5]
cmd: kubectl exec -n kube-system cilium-4xgn6 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.23 (v1.23.17) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-1a4105b4)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.1.4, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 203/65535 (0.31%), Flows/s: 4.71   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-16T01:43:30Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-4xgn6 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 266        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1af   10.0.1.206   ready   
	 1263       Disabled           Disabled          4275       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::172   10.0.1.67    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 2049       Disabled           Disabled          10241      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::195   10.0.1.131   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                                                              
	 2202       Disabled           Disabled          12176      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::1d2   10.0.1.78    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 3174       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                                ready   
	                                                            reserved:host                                                                                                                                                     
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-tqqd5 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.23 (v1.23.17) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Disabled   
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.12 (v1.13.12-1a4105b4)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.134, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 267/65535 (0.41%), Flows/s: 6.41   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2024-02-16T01:43:31Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-tqqd5 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 389        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::e9   10.0.0.61    ready   
	 574        Disabled           Disabled          5642       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::28   10.0.0.206   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 870        Disabled           Disabled          12176      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::29   10.0.0.60    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 1120       Disabled           Disabled          3167       k8s:app=grafana                                                                                                                  fd02::57   10.0.0.126   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 1215       Disabled           Disabled          4275       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito   fd02::40   10.0.0.131   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 1736       Disabled           Disabled          17255      k8s:app=prometheus                                                                                                               fd02::e8   10.0.0.79    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 2079       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                        
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                               
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                      
	                                                            reserved:host                                                                                                                                                    
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
01:44:31 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
01:44:31 STEP: Deleting deployment demo_ds.yaml
01:44:32 STEP: Deleting namespace 202402160143k8sdatapathconfigmonitoraggregationchecksthatmonito
01:44:47 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|d7a3eae6_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip]]


ZIP Links:

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19//660/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19//660/artifact/d7a3eae6_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_restricts_notifications.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19//660/artifact/test_results_Cilium-PR-K8s-1.23-kernel-4.19_660_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19/660/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

Metadata

Assignees

No one assigned

    Labels

    ci/flake — This is a known failure that occurs in the tree. Please investigate me!
    stale — The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests