CI: K8sServicesTest Checks graceful termination of service endpoints Checks client terminates gracefully on service endpoint deletion #18318

Description

@maintainer-s-little-helper

Test Name

K8sServicesTest Checks graceful termination of service endpoints Checks client terminates gracefully on service endpoint deletion

Failure Output

FAIL: Timed out after 60.000s.

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Timed out after 60.000s.
[exiting on graceful termination] is not in the output after timeout

Expected
    <bool>: false
to be true
/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/k8sT/Services.go:1688
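
For reference, the assertion that times out follows the usual Gomega "eventually contains substring" pattern: the test watches the graceful-term-client pod's output for the marker string for up to 60s. A minimal sketch of that pattern, assuming a hypothetical getPodLogs helper (the real test in test/k8sT/Services.go uses Cilium's own test-framework helpers, which differ in detail):

```go
// Illustrative sketch only -- not the actual Services.go code.
package example

import (
	"os/exec"
	"time"

	. "github.com/onsi/gomega"
)

// getPodLogs is a hypothetical helper that shells out to kubectl.
func getPodLogs(pod string) string {
	out, _ := exec.Command("kubectl", "logs", pod).CombinedOutput()
	return string(out)
}

func checkClientTerminatedGracefully() {
	// Poll the client pod's log until the graceful-exit marker shows
	// up, failing as seen above if 60s pass without it.
	Eventually(func() string {
		return getPodLogs("graceful-term-client")
	}, 60*time.Second).Should(
		ContainSubstring("exiting on graceful termination"))
}
```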

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
BPF masquerade requires NodePort (--enable-node-port=\
Cilium pods: [cilium-528xt cilium-l2bkf]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod   Ingress   Egress
Cilium agent 'cilium-528xt': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0
Cilium agent 'cilium-l2bkf': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 24 Failed 0
Cilium pods: [cilium-528xt cilium-l2bkf]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod   Ingress   Egress
Cilium agent 'cilium-528xt': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0
Cilium agent 'cilium-l2bkf': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 24 Failed 0


Standard Error

18:08:29 STEP: Running BeforeAll block for EntireTestsuite K8sServicesTest Checks graceful termination of service endpoints
18:08:29 STEP: Installing Cilium
18:08:31 STEP: Waiting for Cilium to become ready
18:08:42 STEP: Validating if Kubernetes DNS is deployed
18:08:42 STEP: Checking if deployment is ready
18:08:42 STEP: Checking if kube-dns service is plumbed correctly
18:08:42 STEP: Checking if pods have identity
18:08:42 STEP: Checking if DNS can resolve
18:08:43 STEP: Kubernetes DNS is not ready: %!s(<nil>)
18:08:43 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
18:08:48 STEP: Waiting for Kubernetes DNS to become operational
18:08:48 STEP: Checking if deployment is ready
18:08:48 STEP: Checking if kube-dns service is plumbed correctly
18:08:48 STEP: Checking if pods have identity
18:08:48 STEP: Checking if DNS can resolve
18:08:49 STEP: Validating Cilium Installation
18:08:49 STEP: Performing Cilium controllers preflight check
18:08:49 STEP: Performing Cilium status preflight check
18:08:49 STEP: Performing Cilium health check
18:08:50 STEP: Performing Cilium service preflight check
18:08:50 STEP: Performing K8s service preflight check
18:08:51 STEP: Waiting for cilium-operator to be ready
18:08:51 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
18:08:51 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
18:08:52 STEP: Running BeforeEach block for EntireTestsuite K8sServicesTest Checks graceful termination of service endpoints
18:08:52 STEP: WaitforPods(namespace="default", filter="-l app=graceful-term-server")
18:08:55 STEP: WaitforPods(namespace="default", filter="-l app=graceful-term-server") => <nil>
18:08:56 STEP: WaitforPods(namespace="default", filter="-l app=graceful-term-client")
18:08:56 STEP: WaitforPods(namespace="default", filter="-l app=graceful-term-client") => <nil>
18:08:57 STEP: Deleting service endpoint pod app=graceful-term-server
18:08:57 STEP: Waiting until server is terminating
18:08:58 STEP: Checking if client pod terminated gracefully
FAIL: Timed out after 60.000s.
[exiting on graceful termination] is not in the output after timeout

Expected
    <bool>: false
to be true
=== Test Finished at 2021-12-20T18:09:58Z====
18:09:58 STEP: Running JustAfterEach block for EntireTestsuite K8sServicesTest
===================== TEST FAILED =====================
18:09:58 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest Checks graceful termination of service endpoints
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS      AGE   IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-5747bcc8f9-gfcvs           1/1     Running   0             60m   10.0.0.20       k8s2   <none>           <none>
	 cilium-monitoring   prometheus-655fb888d7-g6sff        1/1     Running   0             60m   10.0.0.163      k8s2   <none>           <none>
	 default             graceful-term-client               1/1     Running   1 (62s ago)   67s   10.0.0.199      k8s2   <none>           <none>
	 default             testclient-blqr5                   1/1     Running   0             67s   10.0.0.124      k8s2   <none>           <none>
	 default             testclient-m2cq2                   1/1     Running   0             67s   10.0.1.159      k8s1   <none>           <none>
	 kube-system         cilium-528xt                       1/1     Running   0             88s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-l2bkf                       1/1     Running   0             88s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-operator-5dc846fff8-njncz   1/1     Running   0             88s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-5dc846fff8-pnrsr   1/1     Running   0             88s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         coredns-69b675786c-tbpw8           1/1     Running   0             76s   10.0.0.231      k8s2   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0             63m   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0             63m   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0             63m   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-rdj7m                   1/1     Running   0             61m   192.168.56.12   k8s2   <none>           <none>
	 kube-system         kube-proxy-xfbkx                   1/1     Running   0             62m   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0             63m   192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-d7mpq                 1/1     Running   0             60m   192.168.56.12   k8s2   <none>           <none>
	 kube-system         log-gatherer-rkd5x                 1/1     Running   0             60m   192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-55nsw               1/1     Running   0             61m   192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-h2lwx               1/1     Running   0             61m   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-528xt cilium-l2bkf]
cmd: kubectl exec -n kube-system cilium-528xt -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                   
	 1    10.96.0.1:443        ClusterIP      1 => 192.168.56.11:6443   
	 2    10.96.0.10:53        ClusterIP      1 => 10.0.0.231:53        
	 3    10.96.0.10:9153      ClusterIP      1 => 10.0.0.231:9153      
	 4    10.98.245.208:3000   ClusterIP      1 => 10.0.0.20:3000       
	 5    10.96.191.188:9090   ClusterIP      1 => 10.0.0.163:9090      
	 6    10.108.6.146:8081    ClusterIP                                
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-528xt -c cilium-agent -- cilium bpf lb list
Exitcode: 0 
Stdout:
 	 SERVICE ADDRESS      BACKEND ADDRESS
	 10.96.0.10:53        10.0.0.231:53 (2)                         
	                      0.0.0.0:0 (2) [ClusterIP, non-routable]   
	 10.98.245.208:3000   10.0.0.20:3000 (4)                        
	                      0.0.0.0:0 (4) [ClusterIP, non-routable]   
	 10.108.6.146:8081    0.0.0.0:0 (6) [ClusterIP, non-routable]   
	 10.96.0.1:443        192.168.56.11:6443 (1)                    
	                      0.0.0.0:0 (1) [ClusterIP, non-routable]   
	 10.96.0.10:9153      0.0.0.0:0 (3) [ClusterIP, non-routable]   
	                      10.0.0.231:9153 (3)                       
	 10.96.191.188:9090   0.0.0.0:0 (5) [ClusterIP, non-routable]   
	                      10.0.0.163:9090 (5)                       
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-l2bkf -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                   
	 1    10.96.0.1:443        ClusterIP      1 => 192.168.56.11:6443   
	 2    10.96.0.10:53        ClusterIP      1 => 10.0.0.231:53        
	 3    10.96.0.10:9153      ClusterIP      1 => 10.0.0.231:9153      
	 4    10.98.245.208:3000   ClusterIP      1 => 10.0.0.20:3000       
	 5    10.96.191.188:9090   ClusterIP      1 => 10.0.0.163:9090      
	 6    10.108.6.146:8081    ClusterIP                                
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-l2bkf -c cilium-agent -- cilium bpf lb list
Exitcode: 0 
Stdout:
 	 SERVICE ADDRESS      BACKEND ADDRESS
	 10.96.0.10:53        10.0.0.231:53 (2)                         
	                      0.0.0.0:0 (2) [ClusterIP, non-routable]   
	 10.96.0.10:9153      0.0.0.0:0 (3) [ClusterIP, non-routable]   
	                      10.0.0.231:9153 (3)                       
	 10.108.6.146:8081    0.0.0.0:0 (6) [ClusterIP, non-routable]   
	 10.98.245.208:3000   10.0.0.20:3000 (4)                        
	                      0.0.0.0:0 (4) [ClusterIP, non-routable]   
	 10.96.191.188:9090   0.0.0.0:0 (5) [ClusterIP, non-routable]   
	                      10.0.0.163:9090 (5)                       
	 10.96.0.1:443        0.0.0.0:0 (1) [ClusterIP, non-routable]   
	                      192.168.56.11:6443 (1)                    
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
===================== TEST FAILED =====================
18:10:29 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS      AGE    IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-5747bcc8f9-gfcvs           1/1     Running   0             61m    10.0.0.20       k8s2   <none>           <none>
	 cilium-monitoring   prometheus-655fb888d7-g6sff        1/1     Running   0             61m    10.0.0.163      k8s2   <none>           <none>
	 default             graceful-term-client               1/1     Running   1 (93s ago)   98s    10.0.0.199      k8s2   <none>           <none>
	 default             testclient-blqr5                   1/1     Running   0             98s    10.0.0.124      k8s2   <none>           <none>
	 default             testclient-m2cq2                   1/1     Running   0             98s    10.0.1.159      k8s1   <none>           <none>
	 kube-system         cilium-528xt                       1/1     Running   0             119s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-l2bkf                       1/1     Running   0             119s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-operator-5dc846fff8-njncz   1/1     Running   0             119s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-5dc846fff8-pnrsr   1/1     Running   0             119s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         coredns-69b675786c-tbpw8           1/1     Running   0             107s   10.0.0.231      k8s2   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0             63m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0             63m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0             63m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-rdj7m                   1/1     Running   0             61m    192.168.56.12   k8s2   <none>           <none>
	 kube-system         kube-proxy-xfbkx                   1/1     Running   0             63m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0             63m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-d7mpq                 1/1     Running   0             61m    192.168.56.12   k8s2   <none>           <none>
	 kube-system         log-gatherer-rkd5x                 1/1     Running   0             61m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-55nsw               1/1     Running   0             61m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-h2lwx               1/1     Running   0             61m    192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-528xt cilium-l2bkf]
cmd: kubectl exec -n kube-system cilium-528xt -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                   
	 1    10.96.0.1:443        ClusterIP      1 => 192.168.56.11:6443   
	 2    10.96.0.10:53        ClusterIP      1 => 10.0.0.231:53        
	 3    10.96.0.10:9153      ClusterIP      1 => 10.0.0.231:9153      
	 4    10.98.245.208:3000   ClusterIP      1 => 10.0.0.20:3000       
	 5    10.96.191.188:9090   ClusterIP      1 => 10.0.0.163:9090      
	 6    10.108.6.146:8081    ClusterIP                                
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-528xt -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                        IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                             
	 2          Disabled           Disabled          37799      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::50   10.0.0.124   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:zgroup=testDSClient                                                                                            
	 81         Disabled           Disabled          4          reserved:health                                                                    fd02::cc   10.0.0.12    ready   
	 142        Disabled           Disabled          31074      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system         fd02::d7   10.0.0.231   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                        
	                                                            k8s:k8s-app=kube-dns                                                                                               
	 880        Disabled           Disabled          15051      k8s:app=grafana                                                                    fd02::b    10.0.0.20    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                  
	 1065       Disabled           Disabled          36280      k8s:app=graceful-term-client                                                       fd02::67   10.0.0.199   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                             
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	 1473       Disabled           Disabled          32400      k8s:app=prometheus                                                                 fd02::d5   10.0.0.163   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                             
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                  
	 3625       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                 ready   
	                                                            reserved:host                                                                                                      
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-l2bkf -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                   
	 1    10.96.0.1:443        ClusterIP      1 => 192.168.56.11:6443   
	 2    10.96.0.10:53        ClusterIP      1 => 10.0.0.231:53        
	 3    10.96.0.10:9153      ClusterIP      1 => 10.0.0.231:9153      
	 4    10.98.245.208:3000   ClusterIP      1 => 10.0.0.20:3000       
	 5    10.96.191.188:9090   ClusterIP      1 => 10.0.0.163:9090      
	 6    10.108.6.146:8081    ClusterIP                                
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-l2bkf -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                              IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                    
	 217        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                        ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                 
	                                                            k8s:node-role.kubernetes.io/master                                                                        
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                               
	                                                            reserved:host                                                                                             
	 622        Disabled           Disabled          37799      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::194   10.0.1.159   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                   
	 3720       Disabled           Disabled          4          reserved:health                                                          fd02::15f   10.0.1.61    ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
18:10:41 STEP: Running AfterEach for block EntireTestsuite K8sServicesTest
18:10:41 STEP: Running AfterEach for block EntireTestsuite
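
A note on what the test is waiting for: the client pod holds a TCP connection to the graceful-term service while its only backend pod is deleted. With graceful termination, the terminating backend keeps serving that established connection, so the client should observe a clean close (EOF) and print the marker the assertion greps for. Below is a minimal sketch of a client with that behavior, purely to illustrate the expected semantics; the service name is hypothetical, port 8081 matches the ClusterIP service visible in the dumps above, and the real graceful-term-client image may behave differently:

```go
// Minimal sketch of a graceful-termination client (assumed behavior,
// not the actual test image): hold a connection to the service and
// report whether it ended with a clean close (EOF) or an error/reset.
package main

import (
	"fmt"
	"io"
	"net"
	"os"
)

func main() {
	// Hypothetical service name; 8081 is the ClusterIP port in the dumps.
	conn, err := net.Dial("tcp", "graceful-term-svc.default.svc:8081")
	if err != nil {
		fmt.Fprintln(os.Stderr, "connect failed:", err)
		os.Exit(1)
	}
	defer conn.Close()

	// Read until the server side closes the connection. With graceful
	// termination the terminating backend keeps serving this established
	// connection, so we expect a clean EOF rather than a reset.
	if _, err := io.Copy(io.Discard, conn); err != nil {
		fmt.Fprintln(os.Stderr, "connection ended abnormally:", err)
		os.Exit(1)
	}
	fmt.Println("exiting on graceful termination")
}
```

If the connection is instead reset when the backend goes away, a client like this exits abnormally without printing the marker, which would be consistent with the restart recorded for graceful-term-client in the pod listing above.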

[[ATTACHMENT|5aba7ff7_K8sServicesTest_Checks_graceful_termination_of_service_endpoints_Checks_client_terminates_gracefully_on_service_endpoint_deletion.zip]]


ZIP Links:

0a10778f_K8sServicesTest_Checks_graceful_termination_of_service_endpoints_Checks_client_terminates_gracefully_on_service_endpoint_deletion.zip

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//153/artifact/5aba7ff7_K8sServicesTest_Checks_graceful_termination_of_service_endpoints_Checks_client_terminates_gracefully_on_service_endpoint_deletion.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//153/artifact/b3e02c5e_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//153/artifact/test_results_Cilium-PR-K8s-1.22-kernel-4.19_153_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19/153/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

Metadata

Labels

    area/CI: Continuous Integration testing issue or flake
    area/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.
    ci/flake: This is a known failure that occurs in the tree. Please investigate me!
    stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
