
CI: K8sDatapathConfig Host firewall With VXLAN #24697

@maintainer-s-little-helper

Description


Test Name

K8sDatapathConfig Host firewall With VXLAN

Failure Output

FAIL: Error deleting resource /home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-5.4/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml: Cannot retrieve "cilium-7vmmc"'s policy revision: cannot get policy revision: ""

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Error deleting resource /home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-5.4/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml: Cannot retrieve "cilium-7vmmc"'s policy revision: cannot get policy revision: ""
Expected
    <*errors.errorString | 0xc001d92bb0>: {
        s: "Cannot retrieve \"cilium-7vmmc\"'s policy revision: cannot get policy revision: \"\"",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-5.4/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:576
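For context, when deleting host-policies.yaml the test helper first reads each Cilium agent's current policy revision, and the failure above is that read coming back empty for cilium-7vmmc. A rough manual equivalent of that check (a hypothetical invocation, assuming the standard cilium-agent CLI and a local jq binary; not the exact code path used by the test framework) would be:

# Query the agent's current policy revision; an empty result would match the "" in the failure above.
kubectl -n kube-system exec cilium-7vmmc -c cilium-agent -- cilium policy get -o json | jq '.revision'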

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
⚠️  Number of "level=warning" in logs: 11
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 4 errors/warnings:
Unable to restore endpoint, ignoring
Disabling socket-LB tracing as it requires kernel 5.7 or newer
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-7vmmc cilium-88l9h]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                        Ingress   Egress
testclient-mmhm2           false     false
testclient-qhnx6           false     false
testserver-5b46z           false     false
testserver-k6d54           false     false
coredns-6b775575b5-flxz9   false     false
Cilium agent 'cilium-7vmmc': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 29 Failed 0
Cilium agent 'cilium-88l9h': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 34 Failed 0


Standard Error

08:46:47 STEP: Installing Cilium
08:46:49 STEP: Waiting for Cilium to become ready
08:47:11 STEP: Validating if Kubernetes DNS is deployed
08:47:11 STEP: Checking if deployment is ready
08:47:11 STEP: Checking if kube-dns service is plumbed correctly
08:47:11 STEP: Checking if DNS can resolve
08:47:11 STEP: Checking if pods have identity
08:47:15 STEP: Kubernetes DNS is up and operational
08:47:15 STEP: Validating Cilium Installation
08:47:15 STEP: Performing Cilium controllers preflight check
08:47:15 STEP: Performing Cilium health check
08:47:15 STEP: Performing Cilium status preflight check
08:47:15 STEP: Checking whether host EP regenerated
08:47:23 STEP: Performing Cilium service preflight check
08:47:23 STEP: Performing K8s service preflight check
08:47:23 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-7vmmc': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

08:47:23 STEP: Performing Cilium controllers preflight check
08:47:23 STEP: Checking whether host EP regenerated
08:47:23 STEP: Performing Cilium health check
08:47:23 STEP: Performing Cilium status preflight check
08:47:30 STEP: Performing Cilium service preflight check
08:47:30 STEP: Performing K8s service preflight check
08:47:30 STEP: Performing Cilium status preflight check
08:47:30 STEP: Performing Cilium health check
08:47:30 STEP: Performing Cilium controllers preflight check
08:47:30 STEP: Checking whether host EP regenerated
08:47:37 STEP: Performing Cilium service preflight check
08:47:37 STEP: Performing K8s service preflight check
08:47:43 STEP: Waiting for cilium-operator to be ready
08:47:43 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
08:47:43 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
08:47:43 STEP: Making sure all endpoints are in ready state
08:47:46 STEP: Creating namespace 202304030847k8sdatapathconfighostfirewallwithvxlan
08:47:46 STEP: Deploying demo_hostfw.yaml in namespace 202304030847k8sdatapathconfighostfirewallwithvxlan
08:47:46 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready
08:47:46 STEP: WaitforNPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="")
08:47:49 STEP: WaitforNPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="") => <nil>
08:47:49 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-5.4/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml
08:47:59 STEP: Checking host policies on egress to remote pod
08:47:59 STEP: Checking host policies on ingress from remote pod
08:47:59 STEP: Checking host policies on ingress from remote node
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient")
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost")
08:47:59 STEP: Checking host policies on egress to remote node
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost")
08:47:59 STEP: Checking host policies on ingress from local pod
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient")
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost")
08:47:59 STEP: Checking host policies on egress to local pod
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost")
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => <nil>
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => <nil>
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => <nil>
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => <nil>
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => <nil>
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => <nil>
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer")
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost")
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost")
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer")
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost")
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost")
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => <nil>
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => <nil>
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => <nil>
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => <nil>
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => <nil>
08:47:59 STEP: WaitforPods(namespace="202304030847k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => <nil>
FAIL: Error deleting resource /home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-5.4/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml: Cannot retrieve "cilium-7vmmc"'s policy revision: cannot get policy revision: ""
Expected
    <*errors.errorString | 0xc001d92bb0>: {
        s: "Cannot retrieve \"cilium-7vmmc\"'s policy revision: cannot get policy revision: \"\"",
    }
to be nil
=== Test Finished at 2023-04-03T08:48:15Z====
08:48:15 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
08:48:15 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                            NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202304030847k8sdatapathconfighostfirewallwithvxlan   testclient-host-dfqrx              1/1     Running   0          53s     192.168.56.11   k8s1   <none>           <none>
	 202304030847k8sdatapathconfighostfirewallwithvxlan   testclient-host-nv6dp              1/1     Running   0          53s     192.168.56.12   k8s2   <none>           <none>
	 202304030847k8sdatapathconfighostfirewallwithvxlan   testclient-mmhm2                   1/1     Running   0          53s     10.0.0.114      k8s1   <none>           <none>
	 202304030847k8sdatapathconfighostfirewallwithvxlan   testclient-qhnx6                   1/1     Running   0          53s     10.0.1.3        k8s2   <none>           <none>
	 202304030847k8sdatapathconfighostfirewallwithvxlan   testserver-5b46z                   2/2     Running   0          53s     10.0.0.88       k8s1   <none>           <none>
	 202304030847k8sdatapathconfighostfirewallwithvxlan   testserver-host-bjwhg              2/2     Running   0          53s     192.168.56.12   k8s2   <none>           <none>
	 202304030847k8sdatapathconfighostfirewallwithvxlan   testserver-host-mntc2              2/2     Running   0          53s     192.168.56.11   k8s1   <none>           <none>
	 202304030847k8sdatapathconfighostfirewallwithvxlan   testserver-k6d54                   2/2     Running   0          53s     10.0.1.40       k8s2   <none>           <none>
	 cilium-monitoring                                    grafana-84476dcf4b-fz4bg           0/1     Running   0          27m     10.0.0.198      k8s1   <none>           <none>
	 cilium-monitoring                                    prometheus-7dbb447479-d27j8        1/1     Running   0          27m     10.0.0.184      k8s1   <none>           <none>
	 kube-system                                          cilium-7vmmc                       1/1     Running   0          110s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                          cilium-88l9h                       1/1     Running   0          110s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                          cilium-operator-6574d4888c-jz5t8   1/1     Running   0          110s    192.168.56.11   k8s1   <none>           <none>
	 kube-system                                          cilium-operator-6574d4888c-rvfnz   1/1     Running   0          110s    192.168.56.12   k8s2   <none>           <none>
	 kube-system                                          coredns-6b775575b5-flxz9           1/1     Running   0          4m14s   10.0.0.115      k8s1   <none>           <none>
	 kube-system                                          etcd-k8s1                          1/1     Running   0          32m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                          kube-apiserver-k8s1                1/1     Running   0          32m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                          kube-controller-manager-k8s1       1/1     Running   0          32m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                          kube-proxy-vmlzl                   1/1     Running   0          28m     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                          kube-proxy-xvhrb                   1/1     Running   0          32m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                          kube-scheduler-k8s1                1/1     Running   0          32m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                          log-gatherer-mdjx9                 1/1     Running   0          27m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                          log-gatherer-sztmv                 1/1     Running   0          27m     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                          registry-adder-njmd9               1/1     Running   0          28m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                          registry-adder-t667s               1/1     Running   0          28m     192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-7vmmc cilium-88l9h]
cmd: kubectl exec -n kube-system cilium-7vmmc -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.24 (v1.24.4) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict    [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fe9b:f1e4, enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.56.12 fd04::12]
	 Host firewall:           Enabled   [enp0s16, enp0s3, enp0s8]
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.90 (v1.13.90-4e83f5a8)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            BPF   [enp0s16, enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       29/29 healthy
	 Proxy Status:            OK, ip 10.0.1.187, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 581/65535 (0.89%), Flows/s: 5.97   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2023-04-03T08:48:26Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-7vmmc -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                         IPv6        IPv4        STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                              
	 47         Disabled           Disabled          16605      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202304030847k8sdatapathconfighostfirewallwithvxlan   fd02::109   10.0.1.3    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=202304030847k8sdatapathconfighostfirewallwithvxlan                                                                  
	                                                            k8s:test=hostfw                                                                                                                                     
	                                                            k8s:zgroup=testClient                                                                                                                               
	 1777       Enabled            Enabled           1          k8s:cilium.io/ci-node=k8s2                                                                                                                  ready   
	                                                            k8s:status=lockdown                                                                                                                                 
	                                                            reserved:host                                                                                                                                       
	 2577       Disabled           Disabled          44924      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202304030847k8sdatapathconfighostfirewallwithvxlan   fd02::14c   10.0.1.40   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=202304030847k8sdatapathconfighostfirewallwithvxlan                                                                  
	                                                            k8s:test=hostfw                                                                                                                                     
	                                                            k8s:zgroup=testServer                                                                                                                               
	 3003       Disabled           Disabled          4          reserved:health                                                                                                     fd02::122   10.0.1.7    ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-88l9h -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.24 (v1.24.4) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict    [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fed7:e94, enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.56.11 fd04::11]
	 Host firewall:           Enabled   [enp0s16, enp0s3, enp0s8]
	 CNI Chaining:            none
	 CNI Config file:         CNI configuration file management disabled
	 Cilium:                  Ok   1.13.90 (v1.13.90-4e83f5a8)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            Legacy
	 Masquerading:            BPF   [enp0s16, enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       34/34 healthy
	 Proxy Status:            OK, ip 10.0.0.17, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 1636/65535 (2.50%), Flows/s: 17.65   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2023-04-03T08:48:26Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-88l9h -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                         IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                              
	 408        Disabled           Disabled          44924      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202304030847k8sdatapathconfighostfirewallwithvxlan   fd02::73   10.0.0.88    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=202304030847k8sdatapathconfighostfirewallwithvxlan                                                                  
	                                                            k8s:test=hostfw                                                                                                                                     
	                                                            k8s:zgroup=testServer                                                                                                                               
	 849        Disabled           Disabled          18097      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                     10.0.0.115   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                         
	                                                            k8s:k8s-app=kube-dns                                                                                                                                
	 1577       Enabled            Enabled           1          k8s:cilium.io/ci-node=k8s1                                                                                                                  ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                           
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                         
	                                                            k8s:status=lockdown                                                                                                                                 
	                                                            reserved:host                                                                                                                                       
	 3805       Disabled           Disabled          4          reserved:health                                                                                                     fd02::90   10.0.0.178   ready   
	 3870       Disabled           Disabled          16605      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202304030847k8sdatapathconfighostfirewallwithvxlan   fd02::c6   10.0.0.114   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=202304030847k8sdatapathconfighostfirewallwithvxlan                                                                  
	                                                            k8s:test=hostfw                                                                                                                                     
	                                                            k8s:zgroup=testClient                                                                                                                               
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
08:48:46 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
08:48:46 STEP: Deleting deployment demo_hostfw.yaml
08:48:46 STEP: Deleting namespace 202304030847k8sdatapathconfighostfirewallwithvxlan
08:49:02 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|d5e33947_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4//1553/artifact/0eaf8521_K8sDatapathConfig_Host_firewall_With_native_routing_and_endpoint_routes.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4//1553/artifact/8a16d34d_K8sDatapathConfig_Host_firewall_With_native_routing.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4//1553/artifact/aec20a4d_K8sDatapathConfig_Host_firewall_With_VXLAN_and_endpoint_routes.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4//1553/artifact/d5e33947_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4//1553/artifact/test_results_Cilium-PR-K8s-1.24-kernel-5.4_1553_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-5.4/1553/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

Metadata

Assignees: No one assigned

Labels: ci/flake (This is a known failure that occurs in the tree. Please investigate me!), stale (The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.)

Type: No type

Projects: No projects

Milestone: No milestone

Relationships: None yet

Development: No branches or pull requests