
CI: gke-stable (test-gke): Pods are not ready in time: connection refused #17307

@xinyuanzzz

Description


CI failure

Hit in #17147:
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE/6332/

Summary: the testds-b9xk9 pod never became ready. Its web container's HTTP readiness probe failed repeatedly with "dial tcp 10.24.1.1:80: connect: connection refused" (x19 over 4m), so WaitforNPods timed out after the 4m0s limit and the test failed.

Artifacts:
3d7d5bb0_K8sDatapathConfig_DirectRouting_Check_connectivity_with_direct_routing.zip
test_results_Cilium-PR-K8s-GKE_6332_BDD-Test-PR.zip

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 1
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Unable to discover API groups and resources
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
⚠️  Number of "level=warning" in logs: 8
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Deleting no longer present service
Cilium pods: [cilium-2hrrj cilium-bzqlc]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                                                         Ingress   Egress
testclient-pr5pd                                                      
testclient-txdp2                                                      
kube-dns-6c7b8dc9f9-5n4ch                                             
kube-dns-6c7b8dc9f9-bh28z                                             
kube-dns-autoscaler-58cbd4f75c-9tqj6                                  
test-k8s2-79ff876c9d-krpp6                                            
testds-b9xk9                                                          
testds-bwdsr                                                          
event-exporter-gke-67986489c8-h7vll                                   
l7-default-backend-66579f5d7-znk5h                                    
metrics-server-v0.3.6-6c47ffd7d7-58nnw                                
stackdriver-metadata-agent-cluster-level-5f766f4d8b-nvnml             
Cilium agent 'cilium-2hrrj': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 53 Failed 0
Cilium agent 'cilium-bzqlc': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 44 Failed 0

Standard Error

19:42:41 STEP: Installing Cilium
19:42:50 STEP: Waiting for Cilium to become ready
19:43:30 STEP: Validating if Kubernetes DNS is deployed
19:43:30 STEP: Checking if deployment is ready
19:43:30 STEP: Checking if kube-dns service is plumbed correctly
19:43:30 STEP: Checking if pods have identity
19:43:30 STEP: Checking if DNS can resolve
19:43:33 STEP: Kubernetes DNS is not ready: %!s(<nil>)
19:43:33 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
19:44:13 STEP: Waiting for Kubernetes DNS to become operational
19:44:13 STEP: Checking if deployment is ready
19:44:13 STEP: Checking if kube-dns service is plumbed correctly
19:44:13 STEP: Checking if pods have identity
19:44:13 STEP: Checking if DNS can resolve
19:44:16 STEP: Validating Cilium Installation
19:44:16 STEP: Performing Cilium controllers preflight check
19:44:16 STEP: Performing Cilium status preflight check
19:44:16 STEP: Performing Cilium health check
19:44:20 STEP: Performing Cilium service preflight check
19:44:20 STEP: Performing K8s service preflight check
19:44:20 STEP: Waiting for cilium-operator to be ready
19:44:20 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
19:44:20 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
19:44:20 STEP: WaitforPods(namespace="kube-system", filter="")
19:44:21 STEP: WaitforPods(namespace="kube-system", filter="") => <nil>
19:44:21 STEP: Making sure all endpoints are in ready state
19:44:23 STEP: Creating namespace 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith
19:44:23 STEP: Deploying demo_ds.yaml in namespace 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith
19:44:27 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
19:44:34 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
19:44:34 STEP: WaitforNPods(namespace="202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith", filter="")
19:48:34 STEP: WaitforNPods(namespace="202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith", filter="") => timed out waiting for pods with filter  to be ready: 4m0s timeout expired
19:48:35 STEP: cmd: kubectl describe pods -n 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith 
Exitcode: 0 
Stdout:
 	 Name:         test-k8s2-79ff876c9d-krpp6
	 Namespace:    202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith
	 Priority:     0
	 Node:         gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1/10.128.0.22
	 Start Time:   Fri, 03 Sep 2021 19:44:25 +0000
	 Labels:       pod-template-hash=79ff876c9d
	               zgroup=test-k8s2
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.24.2.4
	 IPs:
	   IP:           10.24.2.4
	 Controlled By:  ReplicaSet/test-k8s2-79ff876c9d
	 Containers:
	   web:
	     Container ID:   containerd://7952015b02d4ef3970f6577e57d80bef28f5adf47ee8dfc89a6551219f244689
	     Image:          docker.io/cilium/echoserver:1.10.1
	     Image ID:       docker.io/cilium/echoserver@sha256:daa2b422465b8714195fda33bc14865a8672c14963d0955c9d95632d6b804b63
	     Port:           80/TCP
	     Host Port:      8080/TCP
	     State:          Running
	       Started:      Fri, 03 Sep 2021 19:44:28 +0000
	     Ready:          True
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-552sr (ro)
	   udp:
	     Container ID:   containerd://4934d8771e941b3a6498a5945d20dd327ca4d1d8ca43de9610fd54fa0914667d
	     Image:          docker.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       docker.io/cilium/echoserver-udp@sha256:66680075424a25cae98c4a437fec2b7e33b91fbe94e53defeea0305aee3866dd
	     Port:           69/UDP
	     Host Port:      6969/UDP
	     State:          Running
	       Started:      Fri, 03 Sep 2021 19:44:28 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-552sr (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-552sr:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-552sr
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  cilium.io/ci-node=k8s2
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age   From                                                Message
	   ----    ------     ----  ----                                                -------
	   Normal  Scheduled  4m9s  default-scheduler                                   Successfully assigned 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith/test-k8s2-79ff876c9d-krpp6 to gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1
	   Normal  Pulled     4m6s  kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Container image "docker.io/cilium/echoserver:1.10.1" already present on machine
	   Normal  Created    4m6s  kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Created container web
	   Normal  Started    4m6s  kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Started container web
	   Normal  Pulled     4m6s  kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Container image "docker.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal  Created    4m6s  kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Created container udp
	   Normal  Started    4m6s  kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Started container udp
	 
	 
	 Name:         testclient-pr5pd
	 Namespace:    202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith
	 Priority:     0
	 Node:         gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1/10.128.0.22
	 Start Time:   Fri, 03 Sep 2021 19:44:25 +0000
	 Labels:       controller-revision-hash=7bd9c4fdbd
	               pod-template-generation=1
	               zgroup=testDSClient
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.24.2.74
	 IPs:
	   IP:           10.24.2.74
	 Controlled By:  DaemonSet/testclient
	 Containers:
	   web:
	     Container ID:  containerd://06ea63ce019c985b5117342c0186928ef01d26578b312724acc916a6e7d3c6e2
	     Image:         docker.io/cilium/demo-client:1.0
	     Image ID:      docker.io/cilium/demo-client@sha256:e38e2e222a6f1abe624b3495effbd00a69d9edcf7e623fe6d385b48bb2a52e6a
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       sleep
	     Args:
	       1000h
	     State:          Running
	       Started:      Fri, 03 Sep 2021 19:44:27 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-552sr (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-552sr:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-552sr
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
	                  node.kubernetes.io/memory-pressure:NoSchedule op=Exists
	                  node.kubernetes.io/not-ready:NoExecute op=Exists
	                  node.kubernetes.io/pid-pressure:NoSchedule op=Exists
	                  node.kubernetes.io/unreachable:NoExecute op=Exists
	                  node.kubernetes.io/unschedulable:NoSchedule op=Exists
	 Events:
	   Type    Reason     Age   From                                                Message
	   ----    ------     ----  ----                                                -------
	   Normal  Scheduled  4m9s  default-scheduler                                   Successfully assigned 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith/testclient-pr5pd to gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1
	   Normal  Pulled     4m7s  kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Container image "docker.io/cilium/demo-client:1.0" already present on machine
	   Normal  Created    4m7s  kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Created container web
	   Normal  Started    4m7s  kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Started container web
	 
	 
	 Name:         testclient-txdp2
	 Namespace:    202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith
	 Priority:     0
	 Node:         gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j/10.128.0.21
	 Start Time:   Fri, 03 Sep 2021 19:44:25 +0000
	 Labels:       controller-revision-hash=7bd9c4fdbd
	               pod-template-generation=1
	               zgroup=testDSClient
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.24.1.204
	 IPs:
	   IP:           10.24.1.204
	 Controlled By:  DaemonSet/testclient
	 Containers:
	   web:
	     Container ID:  containerd://52ef9393f52c1cfcdd34eb2834679f6909b37378cb1607fca02640ff248ced69
	     Image:         docker.io/cilium/demo-client:1.0
	     Image ID:      docker.io/cilium/demo-client@sha256:e38e2e222a6f1abe624b3495effbd00a69d9edcf7e623fe6d385b48bb2a52e6a
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       sleep
	     Args:
	       1000h
	     State:          Running
	       Started:      Fri, 03 Sep 2021 19:44:28 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-552sr (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-552sr:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-552sr
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
	                  node.kubernetes.io/memory-pressure:NoSchedule op=Exists
	                  node.kubernetes.io/not-ready:NoExecute op=Exists
	                  node.kubernetes.io/pid-pressure:NoSchedule op=Exists
	                  node.kubernetes.io/unreachable:NoExecute op=Exists
	                  node.kubernetes.io/unschedulable:NoSchedule op=Exists
	 Events:
	   Type    Reason     Age   From                                                Message
	   ----    ------     ----  ----                                                -------
	   Normal  Scheduled  4m9s  default-scheduler                                   Successfully assigned 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith/testclient-txdp2 to gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j
	   Normal  Pulled     4m7s  kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j  Container image "docker.io/cilium/demo-client:1.0" already present on machine
	   Normal  Created    4m7s  kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j  Created container web
	   Normal  Started    4m6s  kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j  Started container web
	 
	 
	 Name:         testds-b9xk9
	 Namespace:    202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith
	 Priority:     0
	 Node:         gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j/10.128.0.21
	 Start Time:   Fri, 03 Sep 2021 19:44:25 +0000
	 Labels:       controller-revision-hash=77f4c499cc
	               pod-template-generation=1
	               zgroup=testDS
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.24.1.1
	 IPs:
	   IP:           10.24.1.1
	 Controlled By:  DaemonSet/testds
	 Containers:
	   web:
	     Container ID:   containerd://27066701967a23965ed29c2233249c4a754515d65e26ca8aa54df4c21b34f994
	     Image:          docker.io/cilium/echoserver:1.10.1
	     Image ID:       docker.io/cilium/echoserver@sha256:daa2b422465b8714195fda33bc14865a8672c14963d0955c9d95632d6b804b63
	     Port:           80/TCP
	     Host Port:      0/TCP
	     State:          Running
	       Started:      Fri, 03 Sep 2021 19:44:27 +0000
	     Ready:          False
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-552sr (ro)
	   udp:
	     Container ID:   containerd://25d931d18ab6e288804e48e3ac782043c9a0b05b47f0cdf4320f2901a765d57a
	     Image:          docker.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       docker.io/cilium/echoserver-udp@sha256:66680075424a25cae98c4a437fec2b7e33b91fbe94e53defeea0305aee3866dd
	     Port:           69/UDP
	     Host Port:      0/UDP
	     State:          Running
	       Started:      Fri, 03 Sep 2021 19:44:27 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-552sr (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             False 
	   ContainersReady   False 
	   PodScheduled      True 
	 Volumes:
	   default-token-552sr:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-552sr
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node-role.kubernetes.io/master:NoSchedule
	                  node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
	                  node.kubernetes.io/disk-pressure:NoSchedule op=Exists
	                  node.kubernetes.io/memory-pressure:NoSchedule op=Exists
	                  node.kubernetes.io/not-ready:NoExecute op=Exists
	                  node.kubernetes.io/pid-pressure:NoSchedule op=Exists
	                  node.kubernetes.io/unreachable:NoExecute op=Exists
	                  node.kubernetes.io/unschedulable:NoSchedule op=Exists
	 Events:
	   Type     Reason     Age                From                                                Message
	   ----     ------     ----               ----                                                -------
	   Normal   Scheduled  4m9s               default-scheduler                                   Successfully assigned 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith/testds-b9xk9 to gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j
	   Normal   Pulled     4m7s               kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j  Container image "docker.io/cilium/echoserver:1.10.1" already present on machine
	   Normal   Created    4m7s               kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j  Created container web
	   Normal   Started    4m7s               kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j  Started container web
	   Normal   Pulled     4m7s               kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j  Container image "docker.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal   Created    4m7s               kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j  Created container udp
	   Normal   Started    4m7s               kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j  Started container udp
	   Warning  Unhealthy  60s (x19 over 4m)  kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j  Readiness probe failed: Get "http://10.24.1.1:80/": dial tcp 10.24.1.1:80: connect: connection refused
	 
	 
	 Name:         testds-bwdsr
	 Namespace:    202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith
	 Priority:     0
	 Node:         gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1/10.128.0.22
	 Start Time:   Fri, 03 Sep 2021 19:44:25 +0000
	 Labels:       controller-revision-hash=77f4c499cc
	               pod-template-generation=1
	               zgroup=testDS
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.24.2.199
	 IPs:
	   IP:           10.24.2.199
	 Controlled By:  DaemonSet/testds
	 Containers:
	   web:
	     Container ID:   containerd://a53bf1072b6f883c291dcd752057d1215b5a4e06445daad1b3c1aacc924d9b8a
	     Image:          docker.io/cilium/echoserver:1.10.1
	     Image ID:       docker.io/cilium/echoserver@sha256:daa2b422465b8714195fda33bc14865a8672c14963d0955c9d95632d6b804b63
	     Port:           80/TCP
	     Host Port:      0/TCP
	     State:          Running
	       Started:      Fri, 03 Sep 2021 19:44:27 +0000
	     Ready:          True
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-552sr (ro)
	   udp:
	     Container ID:   containerd://f50a1ad712f44068ce0c204ae386a2153dabac52066c69dd08218953d6c924b6
	     Image:          docker.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       docker.io/cilium/echoserver-udp@sha256:66680075424a25cae98c4a437fec2b7e33b91fbe94e53defeea0305aee3866dd
	     Port:           69/UDP
	     Host Port:      0/UDP
	     State:          Running
	       Started:      Fri, 03 Sep 2021 19:44:28 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-552sr (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-552sr:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-552sr
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  <none>
	 Tolerations:     node-role.kubernetes.io/master:NoSchedule
	                  node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
	                  node.kubernetes.io/disk-pressure:NoSchedule op=Exists
	                  node.kubernetes.io/memory-pressure:NoSchedule op=Exists
	                  node.kubernetes.io/not-ready:NoExecute op=Exists
	                  node.kubernetes.io/pid-pressure:NoSchedule op=Exists
	                  node.kubernetes.io/unreachable:NoExecute op=Exists
	                  node.kubernetes.io/unschedulable:NoSchedule op=Exists
	 Events:
	   Type    Reason     Age    From                                                Message
	   ----    ------     ----   ----                                                -------
	   Normal  Scheduled  4m10s  default-scheduler                                   Successfully assigned 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith/testds-bwdsr to gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1
	   Normal  Pulled     4m8s   kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Container image "docker.io/cilium/echoserver:1.10.1" already present on machine
	   Normal  Created    4m8s   kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Created container web
	   Normal  Started    4m8s   kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Started container web
	   Normal  Pulled     4m8s   kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Container image "docker.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal  Created    4m8s   kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Created container udp
	   Normal  Started    4m7s   kubelet, gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1  Started container udp
	 
Stderr:
 	 

FAIL: Pods are not ready in time: timed out waiting for pods with filter  to be ready: 4m0s timeout expired
=== Test Finished at 2021-09-03T19:48:35Z====
19:48:35 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
19:48:36 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                                                        READY   STATUS    RESTARTS   AGE     IP            NODE                                        NOMINATED NODE   READINESS GATES
	 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith   test-k8s2-79ff876c9d-krpp6                                  2/2     Running   0          4m15s   10.24.2.4     gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith   testclient-pr5pd                                            1/1     Running   0          4m15s   10.24.2.74    gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith   testclient-txdp2                                            1/1     Running   0          4m15s   10.24.1.204   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith   testds-b9xk9                                                1/2     Running   0          4m15s   10.24.1.1     gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith   testds-bwdsr                                                2/2     Running   0          4m15s   10.24.2.199   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 cilium-monitoring                                                 grafana-d69c97b9b-ss5b4                                     0/1     Running   0          63m     10.24.1.7     gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 cilium-monitoring                                                 prometheus-655fb888d7-bnbrw                                 1/1     Running   0          63m     10.24.1.8     gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 kube-system                                                       cilium-2hrrj                                                1/1     Running   0          5m50s   10.128.0.22   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 kube-system                                                       cilium-bzqlc                                                1/1     Running   0          5m50s   10.128.0.21   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 kube-system                                                       cilium-node-init-nvcmz                                      1/1     Running   0          5m50s   10.128.0.21   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 kube-system                                                       cilium-node-init-q2ffg                                      1/1     Running   0          5m50s   10.128.0.22   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 kube-system                                                       cilium-operator-bc66bc9c4-4hk5h                             1/1     Running   0          5m50s   10.128.0.22   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 kube-system                                                       cilium-operator-bc66bc9c4-vfckv                             1/1     Running   0          5m50s   10.128.0.21   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 kube-system                                                       event-exporter-gke-67986489c8-h7vll                         2/2     Running   0          62m     10.24.2.249   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 kube-system                                                       fluentbit-gke-cbpvz                                         2/2     Running   0          65m     10.128.0.22   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 kube-system                                                       fluentbit-gke-d6dtd                                         2/2     Running   0          65m     10.128.0.21   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 kube-system                                                       gke-metrics-agent-lq9qx                                     1/1     Running   0          65m     10.128.0.22   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 kube-system                                                       gke-metrics-agent-q6grb                                     1/1     Running   0          65m     10.128.0.21   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 kube-system                                                       kube-dns-6c7b8dc9f9-5n4ch                                   4/4     Running   0          5m7s    10.24.1.197   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 kube-system                                                       kube-dns-6c7b8dc9f9-bh28z                                   4/4     Running   0          5m7s    10.24.2.164   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 kube-system                                                       kube-dns-autoscaler-58cbd4f75c-9tqj6                        1/1     Running   0          62m     10.24.2.206   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 kube-system                                                       kube-proxy-gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j        1/1     Running   0          65m     10.128.0.21   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 kube-system                                                       kube-proxy-gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1        1/1     Running   0          65m     10.128.0.22   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 kube-system                                                       l7-default-backend-66579f5d7-znk5h                          1/1     Running   0          62m     10.24.1.10    gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 kube-system                                                       log-gatherer-f6pcr                                          1/1     Running   0          64m     10.128.0.22   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 kube-system                                                       log-gatherer-n98pn                                          1/1     Running   0          64m     10.128.0.21   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 kube-system                                                       metrics-server-v0.3.6-6c47ffd7d7-58nnw                      2/2     Running   0          62m     10.24.1.38    gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 kube-system                                                       pdcsi-node-9trm8                                            2/2     Running   0          65m     10.128.0.22   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 kube-system                                                       pdcsi-node-rkbkn                                            2/2     Running   0          65m     10.128.0.21   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-kn7j   <none>           <none>
	 kube-system                                                       stackdriver-metadata-agent-cluster-level-5f766f4d8b-nvnml   2/2     Running   0          62m     10.24.2.118   gke-cilium-ci-9-cilium-ci-9-0b6d1a72-m9c1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-2hrrj cilium-bzqlc]
cmd: kubectl exec -n kube-system cilium-2hrrj -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19+ (v1.19.12-gke.2100) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [eth0 10.128.0.22 (Direct Routing)]
	 Cilium:                 Ok   1.10.90 (v1.10.90-c50bca3)
	 NodeMonitor:            Listening for events on 4 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 9/254 allocated from 10.24.2.0/24, 
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [eth0]   10.24.0.0/14 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      53/53 healthy
	 Proxy Status:           OK, ip 10.24.2.145, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 2699/4095 (65.91%), Flows/s: 7.47   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-09-03T19:48:21Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-2hrrj -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6   IPv4          STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                         
	 18         Disabled           Disabled          45831      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.24.2.74    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith                                
	                                                            k8s:zgroup=testDSClient                                                                                                        
	 346        Disabled           Disabled          4          reserved:health                                                                                          10.24.2.101   ready   
	 357        Enabled            Disabled          17462      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.24.2.199   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith                                
	                                                            k8s:zgroup=testDS                                                                                                              
	 625        Disabled           Disabled          27159      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.24.2.4     ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith                                
	                                                            k8s:zgroup=test-k8s2                                                                                                           
	 987        Disabled           Disabled          1970       k8s:io.cilium.k8s.policy.cluster=default                                                                 10.24.2.206   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=kube-dns-autoscaler                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=kube-dns-autoscaler                                                                                                
	 1500       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                             ready   
	                                                            k8s:cloud.google.com/gke-boot-disk=pd-standard                                                                                 
	                                                            k8s:cloud.google.com/gke-container-runtime=containerd                                                                          
	                                                            k8s:cloud.google.com/gke-nodepool=cilium-ci-9                                                                                  
	                                                            k8s:cloud.google.com/gke-os-distribution=cos                                                                                   
	                                                            k8s:cloud.google.com/machine-family=n1                                                                                         
	                                                            k8s:node.kubernetes.io/instance-type=n1-standard-4                                                                             
	                                                            k8s:topology.gke.io/zone=us-west1-a                                                                                            
	                                                            k8s:topology.kubernetes.io/region=us-west1                                                                                     
	                                                            k8s:topology.kubernetes.io/zone=us-west1-a                                                                                     
	                                                            reserved:host                                                                                                                  
	 1733       Disabled           Disabled          23037      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.24.2.249   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=event-exporter-sa                                                                      
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=event-exporter                                                                                                     
	                                                            k8s:version=v0.3.4                                                                                                             
	 2497       Disabled           Disabled          34733      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.24.2.164   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=kube-dns                                                                               
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=kube-dns                                                                                                           
	 3583       Disabled           Disabled          2790       k8s:app=stackdriver-metadata-agent                                                                       10.24.2.118   ready   
	                                                            k8s:cluster-level=true                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                       
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=metadata-agent                                                                         
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-bzqlc -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19+ (v1.19.12-gke.2100) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [eth0 10.128.0.21 (Direct Routing)]
	 Cilium:                 Ok   1.10.90 (v1.10.90-c50bca3)
	 NodeMonitor:            Listening for events on 4 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 7/254 allocated from 10.24.1.0/24, 
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [eth0]   10.24.0.0/14 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      44/44 healthy
	 Proxy Status:           OK, ip 10.24.1.115, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 2125/4095 (51.89%), Flows/s: 5.85   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-09-03T19:47:15Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-bzqlc -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6   IPv4          STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                         
	 17         Disabled           Disabled          61230      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.24.1.10    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=glbc                                                                                                               
	                                                            k8s:name=glbc                                                                                                                  
	 118        Enabled            Disabled          17462      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.24.1.1     ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith                                
	                                                            k8s:zgroup=testDS                                                                                                              
	 299        Disabled           Disabled          19859      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.24.1.38    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=metrics-server                                                                         
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=metrics-server                                                                                                     
	                                                            k8s:version=v0.3.6                                                                                                             
	 419        Disabled           Disabled          4          reserved:health                                                                                          10.24.1.253   ready   
	 826        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                             ready   
	                                                            k8s:cloud.google.com/gke-boot-disk=pd-standard                                                                                 
	                                                            k8s:cloud.google.com/gke-container-runtime=containerd                                                                          
	                                                            k8s:cloud.google.com/gke-nodepool=cilium-ci-9                                                                                  
	                                                            k8s:cloud.google.com/gke-os-distribution=cos                                                                                   
	                                                            k8s:cloud.google.com/machine-family=n1                                                                                         
	                                                            k8s:node.kubernetes.io/instance-type=n1-standard-4                                                                             
	                                                            k8s:topology.gke.io/zone=us-west1-a                                                                                            
	                                                            k8s:topology.kubernetes.io/region=us-west1                                                                                     
	                                                            k8s:topology.kubernetes.io/zone=us-west1-a                                                                                     
	                                                            reserved:host                                                                                                                  
	 837        Disabled           Disabled          45831      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.24.1.204   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith                                
	                                                            k8s:zgroup=testDSClient                                                                                                        
	 3478       Disabled           Disabled          34733      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.24.1.197   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=kube-dns                                                                               
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=kube-dns                                                                                                           
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
19:49:17 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
19:49:17 STEP: Deleting deployment demo_ds.yaml
19:49:27 STEP: Deleting namespace 202109031944k8sdatapathconfigdirectroutingcheckconnectivitywith
19:49:48 STEP: Running AfterEach for block EntireTestsuite
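
For local triage, the per-pod status collection shown above can be reproduced with a small loop. This is a sketch: it assumes `kubectl` access to the affected cluster and uses the standard `k8s-app=cilium` label selector; the pod names from this run (cilium-2hrrj, cilium-bzqlc) will differ between runs.

```shell
# Collect `cilium status` and `cilium endpoint list` from every Cilium
# agent pod, mirroring what the CI harness does in the output above.
for pod in $(kubectl -n kube-system get pods -l k8s-app=cilium \
               -o jsonpath='{.items[*].metadata.name}'); do
  echo "=== ${pod} ==="
  kubectl -n kube-system exec "${pod}" -c cilium-agent -- cilium status
  kubectl -n kube-system exec "${pod}" -c cilium-agent -- cilium endpoint list
done
```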

[[ATTACHMENT|3d7d5bb0_K8sDatapathConfig_DirectRouting_Check_connectivity_with_direct_routing.zip]]


Labels

- area/CI — Continuous Integration testing issue or flake
- ci/flake — This is a known failure that occurs in the tree. Please investigate me!
- stale — The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
