
CI: K8sDatapathConfig Host firewall With VXLAN and endpoint routes #25342

@maintainer-s-little-helper

Description


Test Name

K8sDatapathConfig Host firewall With VXLAN and endpoint routes

Failure Output

FAIL: Error deleting resource /home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml: Cannot retrieve "cilium-fgprl"'s policy revision: cannot get policy revision: ""

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error deleting resource /home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml: Cannot retrieve "cilium-fgprl"'s policy revision: cannot get policy revision: ""
Expected
    <*errors.errorString | 0xc000612510>: {
        s: "Cannot retrieve \"cilium-fgprl\"'s policy revision: cannot get policy revision: \"\"",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:567
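For anyone triaging: the assertion at test/k8s/datapath_configuration.go:567 fails during cleanup, after the harness deletes host-policies.yaml and then asks each Cilium pod for its current policy revision; the agent in cilium-fgprl answered with an empty string. Below is a minimal sketch of that kind of revision query. The helper is illustrative, not the harness's actual API, and it assumes `cilium policy get -o json` returns a JSON object with a numeric "revision" field.

```go
// Hypothetical sketch of the failing revision query; getPolicyRevision is
// an illustrative helper, not the test framework's real code.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func getPolicyRevision(pod string) (int64, error) {
	// Ask the agent for its policy state via kubectl exec, as the
	// harness does against each pod in the cilium DaemonSet.
	out, err := exec.Command(
		"kubectl", "exec", "-n", "kube-system", pod, "-c", "cilium-agent",
		"--", "cilium", "policy", "get", "-o", "json",
	).Output()
	if err != nil {
		return 0, err
	}
	var resp struct {
		Revision int64 `json:"revision"` // assumed field name
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		// An empty or non-JSON reply yields the error seen in this
		// flake: `cannot get policy revision: ""`.
		return 0, fmt.Errorf("cannot get policy revision: %q", string(out))
	}
	return resp.Revision, nil
}

func main() {
	rev, err := getPolicyRevision("cilium-fgprl")
	fmt.Println(rev, err)
}
```

Given that both agents report Status: Ok in the output below, the empty reply most likely points at a transient exec/API hiccup rather than a wedged agent, which is consistent with this being filed as a flake.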

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 3
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
UpdateIdentities: Skipping Delete of a non-existing identity
Key allocation attempt failed
Cilium pods: [cilium-7vl98 cilium-fgprl]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                       Ingress   Egress
testclient-pfx8k          false     false
testclient-rkhvq          false     false
testserver-klmcm          false     false
testserver-zl79w          false     false
coredns-6d97d5ddb-jfksf   false     false
Cilium agent 'cilium-7vl98': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-fgprl': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 34 Failed 0


Standard Error

11:42:42 STEP: Installing Cilium
11:42:44 STEP: Waiting for Cilium to become ready
11:43:00 STEP: Validating if Kubernetes DNS is deployed
11:43:00 STEP: Checking if deployment is ready
11:43:00 STEP: Checking if kube-dns service is plumbed correctly
11:43:00 STEP: Checking if pods have identity
11:43:00 STEP: Checking if DNS can resolve
11:43:05 STEP: Kubernetes DNS is not ready: %!s(<nil>)
11:43:05 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
11:43:05 STEP: Waiting for Kubernetes DNS to become operational
11:43:05 STEP: Checking if deployment is ready
11:43:06 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:43:06 STEP: Checking if deployment is ready
11:43:07 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:43:07 STEP: Checking if deployment is ready
11:43:14 STEP: Checking if kube-dns service is plumbed correctly
11:43:14 STEP: Checking if pods have identity
11:43:14 STEP: Checking if DNS can resolve
11:43:17 STEP: Validating Cilium Installation
11:43:17 STEP: Performing Cilium controllers preflight check
11:43:17 STEP: Performing Cilium status preflight check
11:43:17 STEP: Performing Cilium health check
11:43:17 STEP: Checking whether host EP regenerated
11:43:25 STEP: Performing Cilium service preflight check
11:43:25 STEP: Performing K8s service preflight check
11:43:31 STEP: Waiting for cilium-operator to be ready
11:43:31 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
11:43:31 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
11:43:31 STEP: Making sure all endpoints are in ready state
11:43:34 STEP: Creating namespace 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro
11:43:34 STEP: Deploying demo_hostfw.yaml in namespace 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro
11:43:35 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready
11:43:35 STEP: WaitforNPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="")
11:43:39 STEP: WaitforNPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="") => <nil>
11:43:39 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml
11:43:56 STEP: Checking host policies on ingress from local pod
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient")
11:43:56 STEP: Checking host policies on egress to local pod
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
11:43:56 STEP: Checking host policies on ingress from remote pod
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient")
11:43:56 STEP: Checking host policies on ingress from remote node
11:43:56 STEP: Checking host policies on egress to remote node
11:43:56 STEP: Checking host policies on egress to remote pod
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClient") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost")
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServerHost") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testServer") => <nil>
11:43:56 STEP: WaitforPods(namespace="202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro", filter="-l zgroup=testClientHost") => <nil>
FAIL: Error deleting resource /home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml: Cannot retrieve "cilium-fgprl"'s policy revision: cannot get policy revision: ""
Expected
    <*errors.errorString | 0xc000612510>: {
        s: "Cannot retrieve \"cilium-fgprl\"'s policy revision: cannot get policy revision: \"\"",
    }
to be nil
=== Test Finished at 2023-05-09T11:44:14Z====
11:44:14 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
11:44:15 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE    IP              NODE   NOMINATED NODE   READINESS GATES
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-host-crdjn              1/1     Running   0          54s    192.168.56.12   k8s2   <none>           <none>
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-host-xwc2b              1/1     Running   0          54s    192.168.56.11   k8s1   <none>           <none>
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-pfx8k                   1/1     Running   0          55s    10.0.0.19       k8s2   <none>           <none>
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testclient-rkhvq                   1/1     Running   0          55s    10.0.1.168      k8s1   <none>           <none>
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-host-6gszv              2/2     Running   0          55s    192.168.56.11   k8s1   <none>           <none>
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-host-89269              2/2     Running   0          55s    192.168.56.12   k8s2   <none>           <none>
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-klmcm                   2/2     Running   0          55s    10.0.0.95       k8s2   <none>           <none>
	 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   testserver-zl79w                   2/2     Running   0          55s    10.0.1.117      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-67ff49cd99-6vjgl           0/1     Running   0          45m    10.0.0.108      k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-8c7df94b4-t2df2         1/1     Running   0          45m    10.0.0.113      k8s1   <none>           <none>
	 kube-system                                                       cilium-7vl98                       1/1     Running   0          105s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-fgprl                       1/1     Running   0          105s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-7bc6974595-px8cr   1/1     Running   0          105s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-7bc6974595-stspt   1/1     Running   0          105s   192.168.56.13   k8s3   <none>           <none>
	 kube-system                                                       coredns-6d97d5ddb-jfksf            1/1     Running   0          84s    10.0.0.62       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          53m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          53m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   0          53m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   0          53m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-gxlpz                 1/1     Running   0          45m    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-n2rkv                 1/1     Running   0          45m    192.168.56.13   k8s3   <none>           <none>
	 kube-system                                                       log-gatherer-xxzgr                 1/1     Running   0          45m    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-7fgns               1/1     Running   0          46m    192.168.56.13   k8s3   <none>           <none>
	 kube-system                                                       registry-adder-tnnnx               1/1     Running   0          46m    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-trlbd               1/1     Running   0          46m    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-7vl98 cilium-fgprl]
cmd: kubectl exec -n kube-system cilium-7vl98 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.26 (v1.26.3) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict    [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fead:5b39, enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.56.11 fd04::11]
	 Host firewall:           Enabled   [enp0s16, enp0s3, enp0s8]
	 CNI Chaining:            none
	 Cilium:                  Ok   1.14.0-dev (v1.14.0-dev-361e634c)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            BPF   [enp0s16, enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       30/30 healthy
	 Proxy Status:            OK, ip 10.0.1.152, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok              Current/Max Flows: 1160/65535 (1.77%), Flows/s: 12.21   Metrics: Disabled
	 Encryption:              Disabled        
	 Cluster health:          2/2 reachable   (2023-05-09T11:43:24Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-7vl98 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 707        Disabled           Disabled          64110      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::1cb   10.0.1.168   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                   
	                                                            k8s:test=hostfw                                                                                                                                                   
	                                                            k8s:zgroup=testClient                                                                                                                                             
	 783        Enabled            Enabled           1          k8s:cilium.io/ci-node=k8s1                                                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                         
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                       
	                                                            k8s:status=lockdown                                                                                                                                               
	                                                            reserved:host                                                                                                                                                     
	 977        Disabled           Disabled          6549       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::1ab   10.0.1.117   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                   
	                                                            k8s:test=hostfw                                                                                                                                                   
	                                                            k8s:zgroup=testServer                                                                                                                                             
	 3650       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::146   10.0.1.110   ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-fgprl -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.26 (v1.26.3) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict    [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fe8d:46df, enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.56.12 fd04::12]
	 Host firewall:           Enabled   [enp0s16, enp0s3, enp0s8]
	 CNI Chaining:            none
	 Cilium:                  Ok   1.14.0-dev (v1.14.0-dev-361e634c)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            BPF   [enp0s16, enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       34/34 healthy
	 Proxy Status:            OK, ip 10.0.0.124, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok              Current/Max Flows: 1087/65535 (1.66%), Flows/s: 11.47   Metrics: Disabled
	 Encryption:              Disabled        
	 Cluster health:          1/2 reachable   (2023-05-09T11:44:12Z)
	   Name                   IP              Node      Endpoints
	   k8s1                   192.168.56.11   unknown   unreachable
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-fgprl -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4        STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                          
	 133        Disabled           Disabled          6549       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::3b   10.0.0.95   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                        
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                 
	                                                            k8s:io.kubernetes.pod.namespace=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                 
	                                                            k8s:test=hostfw                                                                                                                                                 
	                                                            k8s:zgroup=testServer                                                                                                                                           
	 434        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::99   10.0.0.60   ready   
	 593        Enabled            Enabled           1          k8s:cilium.io/ci-node=k8s2                                                                                                                              ready   
	                                                            k8s:status=lockdown                                                                                                                                             
	                                                            reserved:host                                                                                                                                                   
	 1906       Disabled           Disabled          3831       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::b    10.0.0.62   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                        
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                 
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                     
	                                                            k8s:k8s-app=kube-dns                                                                                                                                            
	 2479       Disabled           Disabled          64110      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro   fd02::11   10.0.0.19   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                        
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                 
	                                                            k8s:io.kubernetes.pod.namespace=202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro                                                                 
	                                                            k8s:test=hostfw                                                                                                                                                 
	                                                            k8s:zgroup=testClient                                                                                                                                           
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
11:45:01 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
11:45:01 STEP: Deleting deployment demo_hostfw.yaml
11:45:01 STEP: Deleting namespace 202305091143k8sdatapathconfighostfirewallwithvxlanandendpointro
11:45:17 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|024b6c05_K8sDatapathConfig_Host_firewall_With_VXLAN_and_endpoint_routes.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2154/artifact/024b6c05_K8sDatapathConfig_Host_firewall_With_VXLAN_and_endpoint_routes.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2154/artifact/913c5175_K8sDatapathServicesTest_Checks_N-S_loadbalancing_With_host_policy_Tests_NodePort.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2154/artifact/bf448f31_K8sDatapathConfig_Host_firewall_With_native_routing_and_endpoint_routes.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2154/artifact/c1880546_K8sDatapathConfig_Host_firewall_With_native_routing.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2154/artifact/test_results_Cilium-PR-K8s-1.26-kernel-net-next_2154_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next/2154/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

Metadata
Labels

ci/flake: This is a known failure that occurs in the tree. Please investigate me!
