CI: K8sDatapathConfig Host firewall With native routing #25042

@maintainer-s-little-helper

Description

Test Name

K8sDatapathConfig Host firewall With native routing

Failure Output

FAIL: Pods are not ready in time: timed out waiting for pods with filter  to be ready: 4m0s timeout expired

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Pods are not ready in time: timed out waiting for pods with filter  to be ready: 4m0s timeout expired
/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:652
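
To investigate locally, the failing spec can be re-run in isolation with Ginkgo's focus flag from the Cilium test harness. A minimal sketch, assuming a checkout of cilium/cilium and an already provisioned two-node test cluster; the K8S_VERSION value is an assumption mirroring the Jenkins job name above:

# Run from the test/ directory of the cilium/cilium checkout.
# --focus is a standard Ginkgo flag; cluster provisioning is assumed to happen separately.
cd test
K8S_VERSION=1.25 ginkgo --focus="K8sDatapathConfig Host firewall With native routing" -v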

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 1
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Failed to update lock: Operation cannot be fulfilled on leases.coordination.k8s.io \
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
⚠️  Number of "level=warning" in logs: 6
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Disabling socket-LB tracing as it requires kernel 5.7 or newer
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-bp62x cilium-ttb4v]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                       Ingress   Egress
testclient-mhxw4          false     false
testclient-stfn5          false     false
testserver-khvj7          false     false
testserver-ztw2n          false     false
coredns-567b6dd84-4kczc   false     false
Cilium agent 'cilium-bp62x': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-ttb4v': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0


Standard Error

08:18:51 STEP: Installing Cilium
08:18:53 STEP: Waiting for Cilium to become ready
08:19:05 STEP: Validating if Kubernetes DNS is deployed
08:19:05 STEP: Checking if deployment is ready
08:19:05 STEP: Checking if kube-dns service is plumbed correctly
08:19:05 STEP: Checking if DNS can resolve
08:19:05 STEP: Checking if pods have identity
08:19:10 STEP: Kubernetes DNS is not ready: %!s(<nil>)
08:19:10 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
08:19:10 STEP: Waiting for Kubernetes DNS to become operational
08:19:10 STEP: Checking if deployment is ready
08:19:10 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:19:11 STEP: Checking if deployment is ready
08:19:11 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:19:12 STEP: Checking if deployment is ready
08:19:12 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:19:13 STEP: Checking if deployment is ready
08:19:13 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:19:14 STEP: Checking if deployment is ready
08:19:14 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
08:19:15 STEP: Checking if deployment is ready
08:19:15 STEP: Checking if kube-dns service is plumbed correctly
08:19:15 STEP: Checking if pods have identity
08:19:15 STEP: Checking if DNS can resolve
08:19:20 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
08:19:20 STEP: Checking if deployment is ready
08:19:20 STEP: Checking if pods have identity
08:19:20 STEP: Checking if kube-dns service is plumbed correctly
08:19:20 STEP: Checking if DNS can resolve
08:19:24 STEP: Validating Cilium Installation
08:19:24 STEP: Performing Cilium controllers preflight check
08:19:24 STEP: Performing Cilium health check
08:19:24 STEP: Performing Cilium status preflight check
08:19:24 STEP: Checking whether host EP regenerated
08:19:31 STEP: Performing Cilium service preflight check
08:19:31 STEP: Performing K8s service preflight check
08:19:37 STEP: Waiting for cilium-operator to be ready
08:19:37 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
08:19:37 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
08:19:37 STEP: Making sure all endpoints are in ready state
08:19:40 STEP: Creating namespace 202304210819k8sdatapathconfighostfirewallwithnativerouting
08:19:40 STEP: Deploying demo_hostfw.yaml in namespace 202304210819k8sdatapathconfighostfirewallwithnativerouting
08:19:40 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready
08:19:40 STEP: WaitforNPods(namespace="202304210819k8sdatapathconfighostfirewallwithnativerouting", filter="")
08:23:40 STEP: WaitforNPods(namespace="202304210819k8sdatapathconfighostfirewallwithnativerouting", filter="") => timed out waiting for pods with filter  to be ready: 4m0s timeout expired
08:23:40 STEP: cmd: kubectl describe pods -n 202304210819k8sdatapathconfighostfirewallwithnativerouting 
Exitcode: 0 
Stdout:
 	 Name:             testclient-host-6ncqd
	 Namespace:        202304210819k8sdatapathconfighostfirewallwithnativerouting
	 Priority:         0
	 Service Account:  default
	 Node:             k8s2/192.168.56.12
	 Start Time:       Fri, 21 Apr 2023 08:19:40 +0000
	 Labels:           controller-revision-hash=789846b94b
	                   pod-template-generation=1
	                   test=hostfw
	                   zgroup=testClientHost
	 Annotations:      <none>
	 Status:           Running
	 IP:               192.168.56.12
	 IPs:
	   IP:           192.168.56.12
	 Controlled By:  DaemonSet/testclient-host
	 Containers:
	   web:
	     Container ID:  containerd://51a82c81bf54f8f7124dbcbe26af9d70e66cd8abd7202210796f35048cd5538d
	     Image:         quay.io/cilium/demo-client:1.0
	     Image ID:      quay.io/cilium/demo-client@sha256:e38e2e222a6f1abe624b3495effbd00a69d9edcf7e623fe6d385b48bb2a52e6a
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       sleep
	     Args:
	       1000h
	     State:          Running
	       Started:      Fri, 21 Apr 2023 08:19:41 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bd2dw (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-bd2dw:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/memory-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/network-unavailable:NoSchedule op=Exists
	                              node.kubernetes.io/not-ready:NoExecute op=Exists
	                              node.kubernetes.io/pid-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/unreachable:NoExecute op=Exists
	                              node.kubernetes.io/unschedulable:NoSchedule op=Exists
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned 202304210819k8sdatapathconfighostfirewallwithnativerouting/testclient-host-6ncqd to k8s2
	   Normal  Pulled     3m59s  kubelet            Container image "quay.io/cilium/demo-client:1.0" already present on machine
	   Normal  Created    3m59s  kubelet            Created container web
	   Normal  Started    3m59s  kubelet            Started container web
	 
	 
	 Name:             testclient-host-blrpc
	 Namespace:        202304210819k8sdatapathconfighostfirewallwithnativerouting
	 Priority:         0
	 Service Account:  default
	 Node:             k8s1/192.168.56.11
	 Start Time:       Fri, 21 Apr 2023 08:19:40 +0000
	 Labels:           controller-revision-hash=789846b94b
	                   pod-template-generation=1
	                   test=hostfw
	                   zgroup=testClientHost
	 Annotations:      <none>
	 Status:           Running
	 IP:               192.168.56.11
	 IPs:
	   IP:           192.168.56.11
	 Controlled By:  DaemonSet/testclient-host
	 Containers:
	   web:
	     Container ID:  containerd://b5e77da7009bafc6a7c4c53f58b32f138653cde297c651ddcb276dc019dd1f8e
	     Image:         quay.io/cilium/demo-client:1.0
	     Image ID:      quay.io/cilium/demo-client@sha256:e38e2e222a6f1abe624b3495effbd00a69d9edcf7e623fe6d385b48bb2a52e6a
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       sleep
	     Args:
	       1000h
	     State:          Running
	       Started:      Fri, 21 Apr 2023 08:19:41 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jrxfw (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-jrxfw:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/memory-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/network-unavailable:NoSchedule op=Exists
	                              node.kubernetes.io/not-ready:NoExecute op=Exists
	                              node.kubernetes.io/pid-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/unreachable:NoExecute op=Exists
	                              node.kubernetes.io/unschedulable:NoSchedule op=Exists
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned 202304210819k8sdatapathconfighostfirewallwithnativerouting/testclient-host-blrpc to k8s1
	   Normal  Pulled     3m59s  kubelet            Container image "quay.io/cilium/demo-client:1.0" already present on machine
	   Normal  Created    3m59s  kubelet            Created container web
	   Normal  Started    3m59s  kubelet            Started container web
	 
	 
	 Name:             testclient-mhxw4
	 Namespace:        202304210819k8sdatapathconfighostfirewallwithnativerouting
	 Priority:         0
	 Service Account:  default
	 Node:             k8s1/192.168.56.11
	 Start Time:       Fri, 21 Apr 2023 08:19:40 +0000
	 Labels:           controller-revision-hash=7d99f86bd9
	                   pod-template-generation=1
	                   test=hostfw
	                   zgroup=testClient
	 Annotations:      <none>
	 Status:           Running
	 IP:               10.0.1.160
	 IPs:
	   IP:           10.0.1.160
	   IP:           fd02::133
	 Controlled By:  DaemonSet/testclient
	 Containers:
	   web:
	     Container ID:  containerd://53fd9f3e7229556a86a8376932d6d46af55608e6d86c61f080fa461bbc17dd27
	     Image:         quay.io/cilium/demo-client:1.0
	     Image ID:      quay.io/cilium/demo-client@sha256:e38e2e222a6f1abe624b3495effbd00a69d9edcf7e623fe6d385b48bb2a52e6a
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       sleep
	     Args:
	       1000h
	     State:          Running
	       Started:      Fri, 21 Apr 2023 08:19:41 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rg5jt (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-rg5jt:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/memory-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/not-ready:NoExecute op=Exists
	                              node.kubernetes.io/pid-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/unreachable:NoExecute op=Exists
	                              node.kubernetes.io/unschedulable:NoSchedule op=Exists
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned 202304210819k8sdatapathconfighostfirewallwithnativerouting/testclient-mhxw4 to k8s1
	   Normal  Pulled     3m59s  kubelet            Container image "quay.io/cilium/demo-client:1.0" already present on machine
	   Normal  Created    3m59s  kubelet            Created container web
	   Normal  Started    3m59s  kubelet            Started container web
	 
	 
	 Name:             testclient-stfn5
	 Namespace:        202304210819k8sdatapathconfighostfirewallwithnativerouting
	 Priority:         0
	 Service Account:  default
	 Node:             k8s2/192.168.56.12
	 Start Time:       Fri, 21 Apr 2023 08:19:40 +0000
	 Labels:           controller-revision-hash=7d99f86bd9
	                   pod-template-generation=1
	                   test=hostfw
	                   zgroup=testClient
	 Annotations:      <none>
	 Status:           Running
	 IP:               10.0.0.239
	 IPs:
	   IP:           10.0.0.239
	   IP:           fd02::c
	 Controlled By:  DaemonSet/testclient
	 Containers:
	   web:
	     Container ID:  containerd://ade61afacc5f8e360f02b903610188b7727e7f3a0be4fc283f7d38b859311161
	     Image:         quay.io/cilium/demo-client:1.0
	     Image ID:      quay.io/cilium/demo-client@sha256:e38e2e222a6f1abe624b3495effbd00a69d9edcf7e623fe6d385b48bb2a52e6a
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       sleep
	     Args:
	       1000h
	     State:          Running
	       Started:      Fri, 21 Apr 2023 08:19:41 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6mv9s (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-6mv9s:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/memory-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/not-ready:NoExecute op=Exists
	                              node.kubernetes.io/pid-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/unreachable:NoExecute op=Exists
	                              node.kubernetes.io/unschedulable:NoSchedule op=Exists
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned 202304210819k8sdatapathconfighostfirewallwithnativerouting/testclient-stfn5 to k8s2
	   Normal  Pulled     3m59s  kubelet            Container image "quay.io/cilium/demo-client:1.0" already present on machine
	   Normal  Created    3m59s  kubelet            Created container web
	   Normal  Started    3m59s  kubelet            Started container web
	 
	 
	 Name:             testserver-host-8kxq8
	 Namespace:        202304210819k8sdatapathconfighostfirewallwithnativerouting
	 Priority:         0
	 Service Account:  default
	 Node:             k8s2/192.168.56.12
	 Start Time:       Fri, 21 Apr 2023 08:19:40 +0000
	 Labels:           controller-revision-hash=867cf64c8f
	                   pod-template-generation=1
	                   test=hostfw
	                   zgroup=testServerHost
	 Annotations:      <none>
	 Status:           Running
	 IP:               192.168.56.12
	 IPs:
	   IP:           192.168.56.12
	 Controlled By:  DaemonSet/testserver-host
	 Containers:
	   web:
	     Container ID:   containerd://98ee97ee307884ee72f68da703e18f268b757b98c911bce25cf8acdb477602d6
	     Image:          quay.io/cilium/echoserver:1.10.1
	     Image ID:       quay.io/cilium/echoserver@sha256:daa2b422465b8714195fda33bc14865a8672c14963d0955c9d95632d6b804b63
	     Port:           80/TCP
	     Host Port:      80/TCP
	     State:          Running
	       Started:      Fri, 21 Apr 2023 08:19:41 +0000
	     Ready:          True
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c8ksp (ro)
	   udp:
	     Container ID:   containerd://4d3d9de9f0196a81000be2d7ab45306df2d44ce1331bf7793f5f25d023ebc407
	     Image:          quay.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       quay.io/cilium/echoserver-udp@sha256:66680075424a25cae98c4a437fec2b7e33b91fbe94e53defeea0305aee3866dd
	     Port:           69/UDP
	     Host Port:      69/UDP
	     State:          Running
	       Started:      Fri, 21 Apr 2023 08:19:41 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c8ksp (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-c8ksp:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
	                              node-role.kubernetes.io/master:NoSchedule
	                              node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
	                              node.kubernetes.io/disk-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/memory-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/network-unavailable:NoSchedule op=Exists
	                              node.kubernetes.io/not-ready:NoExecute op=Exists
	                              node.kubernetes.io/pid-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/unreachable:NoExecute op=Exists
	                              node.kubernetes.io/unschedulable:NoSchedule op=Exists
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned 202304210819k8sdatapathconfighostfirewallwithnativerouting/testserver-host-8kxq8 to k8s2
	   Normal  Pulled     3m59s  kubelet            Container image "quay.io/cilium/echoserver:1.10.1" already present on machine
	   Normal  Created    3m59s  kubelet            Created container web
	   Normal  Started    3m59s  kubelet            Started container web
	   Normal  Pulled     3m59s  kubelet            Container image "quay.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal  Created    3m59s  kubelet            Created container udp
	   Normal  Started    3m59s  kubelet            Started container udp
	 
	 
	 Name:             testserver-host-kdtt8
	 Namespace:        202304210819k8sdatapathconfighostfirewallwithnativerouting
	 Priority:         0
	 Service Account:  default
	 Node:             k8s1/192.168.56.11
	 Start Time:       Fri, 21 Apr 2023 08:19:40 +0000
	 Labels:           controller-revision-hash=867cf64c8f
	                   pod-template-generation=1
	                   test=hostfw
	                   zgroup=testServerHost
	 Annotations:      <none>
	 Status:           Running
	 IP:               192.168.56.11
	 IPs:
	   IP:           192.168.56.11
	 Controlled By:  DaemonSet/testserver-host
	 Containers:
	   web:
	     Container ID:   containerd://e3bfb77957c5f03739ee1111116153df0001b211050b94d14f4d05c896438435
	     Image:          quay.io/cilium/echoserver:1.10.1
	     Image ID:       quay.io/cilium/echoserver@sha256:daa2b422465b8714195fda33bc14865a8672c14963d0955c9d95632d6b804b63
	     Port:           80/TCP
	     Host Port:      80/TCP
	     State:          Running
	       Started:      Fri, 21 Apr 2023 08:19:41 +0000
	     Ready:          True
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4h2cm (ro)
	   udp:
	     Container ID:   containerd://a607bd6b5d86c20aa72d5a0bebe84be9af5e78d1e0a56925a807918146ac8de8
	     Image:          quay.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       quay.io/cilium/echoserver-udp@sha256:66680075424a25cae98c4a437fec2b7e33b91fbe94e53defeea0305aee3866dd
	     Port:           69/UDP
	     Host Port:      69/UDP
	     State:          Running
	       Started:      Fri, 21 Apr 2023 08:19:41 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4h2cm (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-4h2cm:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
	                              node-role.kubernetes.io/master:NoSchedule
	                              node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
	                              node.kubernetes.io/disk-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/memory-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/network-unavailable:NoSchedule op=Exists
	                              node.kubernetes.io/not-ready:NoExecute op=Exists
	                              node.kubernetes.io/pid-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/unreachable:NoExecute op=Exists
	                              node.kubernetes.io/unschedulable:NoSchedule op=Exists
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned 202304210819k8sdatapathconfighostfirewallwithnativerouting/testserver-host-kdtt8 to k8s1
	   Normal  Pulled     3m59s  kubelet            Container image "quay.io/cilium/echoserver:1.10.1" already present on machine
	   Normal  Created    3m59s  kubelet            Created container web
	   Normal  Started    3m59s  kubelet            Started container web
	   Normal  Pulled     3m59s  kubelet            Container image "quay.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal  Created    3m59s  kubelet            Created container udp
	   Normal  Started    3m59s  kubelet            Started container udp
	 
	 
	 Name:             testserver-khvj7
	 Namespace:        202304210819k8sdatapathconfighostfirewallwithnativerouting
	 Priority:         0
	 Service Account:  default
	 Node:             k8s1/192.168.56.11
	 Start Time:       Fri, 21 Apr 2023 08:19:40 +0000
	 Labels:           controller-revision-hash=6c84585c97
	                   pod-template-generation=1
	                   test=hostfw
	                   zgroup=testServer
	 Annotations:      <none>
	 Status:           Running
	 IP:               10.0.1.98
	 IPs:
	   IP:           10.0.1.98
	   IP:           fd02::198
	 Controlled By:  DaemonSet/testserver
	 Containers:
	   web:
	     Container ID:   containerd://b5c90ce0faf5f98551e01a0c5aab2965df051c6031c37a9ad9b2c30485e84b24
	     Image:          quay.io/cilium/echoserver:1.10.1
	     Image ID:       quay.io/cilium/echoserver@sha256:daa2b422465b8714195fda33bc14865a8672c14963d0955c9d95632d6b804b63
	     Port:           80/TCP
	     Host Port:      0/TCP
	     State:          Running
	       Started:      Fri, 21 Apr 2023 08:19:41 +0000
	     Ready:          True
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q7wt4 (ro)
	   udp:
	     Container ID:   containerd://fe4a5cf4dc2922bfcac1a19ae262beb5c756cafff3d6073ea134827959069cc0
	     Image:          quay.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       quay.io/cilium/echoserver-udp@sha256:66680075424a25cae98c4a437fec2b7e33b91fbe94e53defeea0305aee3866dd
	     Port:           69/UDP
	     Host Port:      0/UDP
	     State:          Running
	       Started:      Fri, 21 Apr 2023 08:19:41 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q7wt4 (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-q7wt4:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
	                              node-role.kubernetes.io/master:NoSchedule
	                              node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
	                              node.kubernetes.io/disk-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/memory-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/not-ready:NoExecute op=Exists
	                              node.kubernetes.io/pid-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/unreachable:NoExecute op=Exists
	                              node.kubernetes.io/unschedulable:NoSchedule op=Exists
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned 202304210819k8sdatapathconfighostfirewallwithnativerouting/testserver-khvj7 to k8s1
	   Normal  Pulled     3m59s  kubelet            Container image "quay.io/cilium/echoserver:1.10.1" already present on machine
	   Normal  Created    3m59s  kubelet            Created container web
	   Normal  Started    3m59s  kubelet            Started container web
	   Normal  Pulled     3m59s  kubelet            Container image "quay.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal  Created    3m59s  kubelet            Created container udp
	   Normal  Started    3m59s  kubelet            Started container udp
	 
	 
	 Name:             testserver-ztw2n
	 Namespace:        202304210819k8sdatapathconfighostfirewallwithnativerouting
	 Priority:         0
	 Service Account:  default
	 Node:             k8s2/192.168.56.12
	 Start Time:       Fri, 21 Apr 2023 08:19:40 +0000
	 Labels:           controller-revision-hash=6c84585c97
	                   pod-template-generation=1
	                   test=hostfw
	                   zgroup=testServer
	 Annotations:      <none>
	 Status:           Pending
	 IP:               
	 IPs:              <none>
	 Controlled By:    DaemonSet/testserver
	 Containers:
	   web:
	     Container ID:   
	     Image:          quay.io/cilium/echoserver:1.10.1
	     Image ID:       
	     Port:           80/TCP
	     Host Port:      0/TCP
	     State:          Waiting
	       Reason:       ContainerCreating
	     Ready:          False
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8j7n (ro)
	   udp:
	     Container ID:   
	     Image:          quay.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       
	     Port:           69/UDP
	     Host Port:      0/UDP
	     State:          Waiting
	       Reason:       ContainerCreating
	     Ready:          False
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8j7n (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             False 
	   ContainersReady   False 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-x8j7n:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node-role.kubernetes.io/control-plane:NoSchedule
	                              node-role.kubernetes.io/master:NoSchedule
	                              node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
	                              node.kubernetes.io/disk-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/memory-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/not-ready:NoExecute op=Exists
	                              node.kubernetes.io/pid-pressure:NoSchedule op=Exists
	                              node.kubernetes.io/unreachable:NoExecute op=Exists
	                              node.kubernetes.io/unschedulable:NoSchedule op=Exists
	 Events:
	   Type    Reason     Age   From               Message
	   ----    ------     ----  ----               -------
	   Normal  Scheduled  4m    default-scheduler  Successfully assigned 202304210819k8sdatapathconfighostfirewallwithnativerouting/testserver-ztw2n to k8s2
	 
Stderr:
 	 

FAIL: Pods are not ready in time: timed out waiting for pods with filter  to be ready: 4m0s timeout expired
=== Test Finished at 2023-04-21T08:23:40Z====
08:23:40 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
08:23:40 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                    NAME                              READY   STATUS              RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202304210819k8sdatapathconfighostfirewallwithnativerouting   testclient-host-6ncqd             1/1     Running             0          4m5s    192.168.56.12   k8s2   <none>           <none>
	 202304210819k8sdatapathconfighostfirewallwithnativerouting   testclient-host-blrpc             1/1     Running             0          4m5s    192.168.56.11   k8s1   <none>           <none>
	 202304210819k8sdatapathconfighostfirewallwithnativerouting   testclient-mhxw4                  1/1     Running             0          4m5s    10.0.1.160      k8s1   <none>           <none>
	 202304210819k8sdatapathconfighostfirewallwithnativerouting   testclient-stfn5                  1/1     Running             0          4m5s    10.0.0.239      k8s2   <none>           <none>
	 202304210819k8sdatapathconfighostfirewallwithnativerouting   testserver-host-8kxq8             2/2     Running             0          4m5s    192.168.56.12   k8s2   <none>           <none>
	 202304210819k8sdatapathconfighostfirewallwithnativerouting   testserver-host-kdtt8             2/2     Running             0          4m5s    192.168.56.11   k8s1   <none>           <none>
	 202304210819k8sdatapathconfighostfirewallwithnativerouting   testserver-khvj7                  2/2     Running             0          4m5s    10.0.1.98       k8s1   <none>           <none>
	 202304210819k8sdatapathconfighostfirewallwithnativerouting   testserver-ztw2n                  0/2     ContainerCreating   0          4m5s    <none>          k8s2   <none>           <none>
	 cilium-monitoring                                            grafana-98b4b9789-8qdp7           0/1     Running             0          29m     10.0.0.190      k8s1   <none>           <none>
	 cilium-monitoring                                            prometheus-6f66c554f4-vhpln       1/1     Running             0          29m     10.0.0.199      k8s1   <none>           <none>
	 kube-system                                                  cilium-bp62x                      1/1     Running             0          4m52s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                  cilium-operator-f5bcfb649-7bpcz   1/1     Running             0          4m52s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                  cilium-operator-f5bcfb649-wk2zv   1/1     Running             0          4m52s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                  cilium-ttb4v                      1/1     Running             0          4m52s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                  coredns-567b6dd84-4kczc           1/1     Running             0          4m35s   10.0.0.163      k8s2   <none>           <none>
	 kube-system                                                  etcd-k8s1                         1/1     Running             0          35m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                  kube-apiserver-k8s1               1/1     Running             0          35m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                  kube-controller-manager-k8s1      1/1     Running             0          35m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                  kube-proxy-lnl7k                  1/1     Running             0          34m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                  kube-proxy-sc8md                  1/1     Running             0          30m     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                  kube-scheduler-k8s1               1/1     Running             0          35m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                  log-gatherer-flvb9                1/1     Running             0          30m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                  log-gatherer-r85w4                1/1     Running             0          30m     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                  registry-adder-4wgtr              1/1     Running             0          30m     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                  registry-adder-k2w6d              1/1     Running             0          30m     192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

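Only testserver-ztw2n on k8s2 is stuck in ContainerCreating, which is what trips the 4m0s readiness timeout above. A hedged triage sketch (pod, namespace, and agent names are taken from the output above; the grep filter is only illustrative) for surfacing the sandbox/CNI events behind the stuck pod:

# Kubelet events for the stuck pod (e.g. FailedCreatePodSandBox or CNI errors)
kubectl -n 202304210819k8sdatapathconfighostfirewallwithnativerouting describe pod testserver-ztw2n
# The Cilium agent on k8s2 handles CNI ADD for this pod; look for endpoint creation errors
kubectl -n kube-system logs cilium-bp62x -c cilium-agent | grep -i testserver-ztw2n
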
Fetching command output from pods [cilium-bp62x cilium-ttb4v]
cmd: kubectl exec -n kube-system cilium-bp62x -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.25 (v1.25.0) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict    [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fe14:dcac, enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.56.12 fd04::12 (Direct Routing)]
	 Host firewall:           Enabled   [enp0s16, enp0s3, enp0s8]
	 CNI Chaining:            none
	 Cilium:                  Ok   1.13.90 (v1.13.90-23d9b0de)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            BPF   [enp0s16, enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.0.36, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok              Current/Max Flows: 1212/65535 (1.85%), Flows/s: 3.98   Metrics: Disabled
	 Encryption:              Disabled        
	 Cluster health:          2/2 reachable   (2023-04-21T08:22:20Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-bp62x -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                 IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                      
	 57         Disabled           Disabled          23927      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                  fd02::44   10.0.0.163   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                    
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                             
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                 
	                                                            k8s:k8s-app=kube-dns                                                                                                                                        
	 1020       Disabled           Disabled          52960      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202304210819k8sdatapathconfighostfirewallwithnativerouting   fd02::c    10.0.0.239   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                    
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                             
	                                                            k8s:io.kubernetes.pod.namespace=202304210819k8sdatapathconfighostfirewallwithnativerouting                                                                  
	                                                            k8s:test=hostfw                                                                                                                                             
	                                                            k8s:zgroup=testClient                                                                                                                                       
	 2510       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                          ready   
	                                                            k8s:status=lockdown                                                                                                                                         
	                                                            reserved:host                                                                                                                                               
	 2666       Disabled           Disabled          4          reserved:health                                                                                                             fd02::da   10.0.0.173   ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-ttb4v -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.25 (v1.25.0) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict    [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fee8:b05b, enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.56.11 fd04::11 (Direct Routing)]
	 Host firewall:           Enabled   [enp0s16, enp0s3, enp0s8]
	 CNI Chaining:            none
	 Cilium:                  Ok   1.13.90 (v1.13.90-23d9b0de)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            BPF   [enp0s16, enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.1.172, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok              Current/Max Flows: 895/65535 (1.37%), Flows/s: 2.86   Metrics: Disabled
	 Encryption:              Disabled        
	 Cluster health:          2/2 reachable   (2023-04-21T08:22:20Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-ttb4v -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                 IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                       
	 284        Disabled           Disabled          52960      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202304210819k8sdatapathconfighostfirewallwithnativerouting   fd02::133   10.0.1.160   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                              
	                                                            k8s:io.kubernetes.pod.namespace=202304210819k8sdatapathconfighostfirewallwithnativerouting                                                                   
	                                                            k8s:test=hostfw                                                                                                                                              
	                                                            k8s:zgroup=testClient                                                                                                                                        
	 614        Disabled           Disabled          4          reserved:health                                                                                                             fd02::132   10.0.1.108   ready   
	 3841       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                           ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                    
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                  
	                                                            k8s:status=lockdown                                                                                                                                          
	                                                            reserved:host                                                                                                                                                
	 3892       Disabled           Disabled          29954      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202304210819k8sdatapathconfighostfirewallwithnativerouting   fd02::198   10.0.1.98    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                              
	                                                            k8s:io.kubernetes.pod.namespace=202304210819k8sdatapathconfighostfirewallwithnativerouting                                                                   
	                                                            k8s:test=hostfw                                                                                                                                              
	                                                            k8s:zgroup=testServer                                                                                                                                        
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
08:23:53 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
08:23:53 STEP: Deleting deployment demo_hostfw.yaml
08:23:53 STEP: Deleting namespace 202304210819k8sdatapathconfighostfirewallwithnativerouting
08:24:08 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|b44d7cd1_K8sDatapathConfig_Host_firewall_With_native_routing.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-5.4//66/artifact/b44d7cd1_K8sDatapathConfig_Host_firewall_With_native_routing.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-5.4//66/artifact/test_results_Cilium-PR-K8s-1.25-kernel-5.4_66_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-5.4/66/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

Metadata

Assignees: No one assigned

Labels: ci/flake (This is a known failure that occurs in the tree. Please investigate me!), stale (The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.)
