CI: K8sDatapathConfig Host firewall With VXLAN: Failed to reach #16122

@pchaigno

Description

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4/74/testReport/junit/Suite-k8s-1/19/K8sDatapathConfig_Host_firewall_With_VXLAN/
K8sDatapathConfig_Host_firewall_With_VXLAN.7z.zip (I had to recompress it to be able to upload it.)

The exit status 7 seems interesting here; we don't usually get that code.
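For reference, curl exits with 7 (CURLE_COULDNT_CONNECT) when the TCP connection itself cannot be established, e.g. the SYN is answered with a reset or an ICMP error, and with 28 (CURLE_OPERATION_TIMEDOUT) when --connect-timeout or --max-time expires, which usually means packets were silently dropped. That matches the timings in the failures below: the two exit-7 probes give up after ~2.2s with time_connect at 0, while the exit-28 probe runs into the full 5s --connect-timeout. Here is a minimal sketch for re-running the failing probe and decoding the exit code, written in Go rather than using the actual ginkgo helper, with the namespace, pod, and target IP copied from this report:

```go
// Re-run the failing probe from the report and decode curl's exit code.
// This is an illustrative sketch, not the test framework's helper.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Meanings of the two libcurl exit codes seen in this flake, per curl(1).
var curlExit = map[int]string{
	7:  "CURLE_COULDNT_CONNECT: TCP connect failed outright (e.g. RST or ICMP unreachable)",
	28: "CURLE_OPERATION_TIMEDOUT: --connect-timeout/--max-time expired (packets likely dropped)",
}

func main() {
	// Same command the test runs, minus the -w timing format string.
	cmd := exec.Command("kubectl", "exec",
		"-n", "202105121057k8sdatapathconfighostfirewallwithvxlan",
		"testclient-host-5rpvw", "--",
		"curl", "--path-as-is", "-s", "--fail",
		"--connect-timeout", "5", "--max-time", "20",
		"http://10.0.0.100:80/")
	err := cmd.Run()
	if err == nil {
		fmt.Println("probe succeeded")
		return
	}
	// kubectl exec propagates the container command's exit code.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code := exitErr.ExitCode()
		fmt.Printf("curl exited %d: %s\n", code, curlExit[code])
		return
	}
	fmt.Println("kubectl itself failed:", err)
}
```

Either code points at the connection never being established (the host firewall dropping or rejecting the SYN) rather than the HTTP exchange failing.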

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Failed to reach 10.0.0.100:80 from testclient-host-5rpvw
Expected command: kubectl exec -n 202105121057k8sdatapathconfighostfirewallwithvxlan testclient-host-5rpvw -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://10.0.0.100:80/ -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.000022()', Connect: '0.000000',Transfer '0.000000', total '2.202762'
Stderr:
 	 command terminated with exit code 7
	 

/usr/local/go/src/runtime/asm_amd64.s:1371

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 2
Number of "level=error" in logs: 0
⚠️  Number of "level=warning" in logs: 13
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 4 errors/warnings:
Unable to update ipcache map entry on pod add
Unable to restore endpoint, ignoring
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Found incomplete restore directory /var/run/cilium/state/3828_next_fail. Removing it...
Cilium pods: [cilium-ms4q6 cilium-qsx99]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testclient-ms94k                        
testserver-2x74p                        
testserver-ktqx6                        
grafana-d69c97b9b-qt7qh                 
prometheus-655fb888d7-qbk4d             
coredns-867bf6789f-qlkhh                
testclient-757ww                        
Cilium agent 'cilium-ms4q6': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 39 Failed 0
Cilium agent 'cilium-qsx99': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 35 Failed 0

Standard Error

10:54:41 STEP: Installing Cilium
10:54:42 STEP: Waiting for Cilium to become ready
10:57:01 STEP: Validating if Kubernetes DNS is deployed
10:57:01 STEP: Checking if deployment is ready
10:57:01 STEP: Checking if kube-dns service is plumbed correctly
10:57:01 STEP: Checking if pods have identity
10:57:01 STEP: Checking if DNS can resolve
10:57:02 STEP: Kubernetes DNS is up and operational
10:57:02 STEP: Validating Cilium Installation
10:57:02 STEP: Performing Cilium status preflight check
10:57:02 STEP: Performing Cilium controllers preflight check
10:57:02 STEP: Performing Cilium health check
10:57:03 STEP: Performing Cilium service preflight check
10:57:03 STEP: Performing K8s service preflight check
10:57:04 STEP: Waiting for cilium-operator to be ready
10:57:04 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
10:57:04 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
10:57:05 STEP: Making sure all endpoints are in ready state
10:57:06 STEP: Creating namespace 202105121057k8sdatapathconfighostfirewallwithvxlan
10:57:06 STEP: Deploying demo_hostfw.yaml in namespace 202105121057k8sdatapathconfighostfirewallwithvxlan
10:57:06 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready
10:57:06 STEP: WaitforNPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="")
10:57:16 STEP: WaitforNPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="") => <nil>
10:57:16 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/host-policies.yaml
10:57:20 STEP: Checking host policies on egress to remote node
10:57:20 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost")
10:57:20 STEP: Checking host policies on ingress from remote pod
10:57:20 STEP: Checking host policies on egress to local pod
10:57:20 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost")
10:57:20 STEP: Checking host policies on ingress from local pod
10:57:20 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient")
10:57:20 STEP: Checking host policies on egress to remote pod
10:57:20 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost")
10:57:20 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient")
10:57:20 STEP: Checking host policies on ingress from remote node
10:57:20 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost")
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => <nil>
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => <nil>
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => <nil>
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => <nil>
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => <nil>
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => <nil>
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer")
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost")
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer")
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost")
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost")
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost")
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => <nil>
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => <nil>
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => <nil>
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => <nil>
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => <nil>
10:57:21 STEP: WaitforPods(namespace="202105121057k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => <nil>
FAIL: Failed to reach 10.0.0.100:80 from testclient-host-5rpvw
Expected command: kubectl exec -n 202105121057k8sdatapathconfighostfirewallwithvxlan testclient-host-5rpvw -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://10.0.0.100:80/ -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.000022()', Connect: '0.000000',Transfer '0.000000', total '2.202762'
Stderr:
 	 command terminated with exit code 7
	 

FAIL: Failed to reach 10.0.1.44:80 from testclient-host-5rpvw
Expected command: kubectl exec -n 202105121057k8sdatapathconfighostfirewallwithvxlan testclient-host-5rpvw -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://10.0.1.44:80/ -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.000020()', Connect: '0.000000',Transfer '0.000000', total '2.202495'
Stderr:
 	 command terminated with exit code 7
	 

FAIL: Failed to reach 192.168.36.11:80 from testclient-757ww
Expected command: kubectl exec -n 202105121057k8sdatapathconfighostfirewallwithvxlan testclient-757ww -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.36.11:80/ -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 28 
Err: exit status 28
Stdout:
 	 time-> DNS: '0.000021()', Connect: '0.000000',Transfer '0.000000', total '5.000860'
Stderr:
 	 command terminated with exit code 28
	 

=== Test Finished at 2021-05-12T10:57:26Z====
10:57:26 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
10:57:27 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                            NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202105121057k8sdatapathconfighostfirewallwithvxlan   testclient-757ww                   1/1     Running   0          24s     10.0.0.110      k8s1   <none>           <none>
	 202105121057k8sdatapathconfighostfirewallwithvxlan   testclient-host-5rpvw              1/1     Running   0          24s     192.168.36.11   k8s1   <none>           <none>
	 202105121057k8sdatapathconfighostfirewallwithvxlan   testclient-host-tqcr9              1/1     Running   0          24s     192.168.36.12   k8s2   <none>           <none>
	 202105121057k8sdatapathconfighostfirewallwithvxlan   testclient-ms94k                   1/1     Running   0          24s     10.0.1.123      k8s2   <none>           <none>
	 202105121057k8sdatapathconfighostfirewallwithvxlan   testserver-2x74p                   2/2     Running   0          24s     10.0.1.44       k8s2   <none>           <none>
	 202105121057k8sdatapathconfighostfirewallwithvxlan   testserver-host-czbxg              2/2     Running   0          24s     192.168.36.12   k8s2   <none>           <none>
	 202105121057k8sdatapathconfighostfirewallwithvxlan   testserver-host-sfm6m              2/2     Running   0          24s     192.168.36.11   k8s1   <none>           <none>
	 202105121057k8sdatapathconfighostfirewallwithvxlan   testserver-ktqx6                   2/2     Running   0          24s     10.0.0.100      k8s1   <none>           <none>
	 cilium-monitoring                                    grafana-d69c97b9b-qt7qh            1/1     Running   0          22m     10.0.1.81       k8s2   <none>           <none>
	 cilium-monitoring                                    prometheus-655fb888d7-qbk4d        1/1     Running   0          22m     10.0.1.9        k8s2   <none>           <none>
	 kube-system                                          cilium-ms4q6                       1/1     Running   0          2m48s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                          cilium-operator-7c585dc4d4-rsv4w   1/1     Running   0          2m48s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                          cilium-operator-7c585dc4d4-xw78w   1/1     Running   0          2m48s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                          cilium-qsx99                       1/1     Running   0          2m48s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                          coredns-867bf6789f-qlkhh           1/1     Running   0          22m     10.0.0.46       k8s1   <none>           <none>
	 kube-system                                          etcd-k8s1                          1/1     Running   0          25m     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                          kube-apiserver-k8s1                1/1     Running   0          25m     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                          kube-controller-manager-k8s1       1/1     Running   0          25m     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                          kube-proxy-5v4qr                   1/1     Running   0          23m     192.168.36.12   k8s2   <none>           <none>
	 kube-system                                          kube-proxy-rbfkr                   1/1     Running   0          25m     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                          kube-scheduler-k8s1                1/1     Running   0          25m     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                          log-gatherer-6qbdl                 1/1     Running   0          23m     192.168.36.12   k8s2   <none>           <none>
	 kube-system                                          log-gatherer-vrx7w                 1/1     Running   0          23m     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                          registry-adder-2dx4w               1/1     Running   0          23m     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                          registry-adder-55b7v               1/1     Running   0          23m     192.168.36.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-ms4q6 cilium-qsx99]
cmd: kubectl exec -n kube-system cilium-ms4q6 -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19 (v1.19.10) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.36.12 fd04::12 (Direct Routing)]
	 Cilium:                 Ok   1.10.90 (v1.10.90-552ec87)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 6/254 allocated from 10.0.1.0/24, IPv6: 6/254 allocated from fd02::100/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      39/39 healthy
	 Proxy Status:           OK, ip 10.0.1.237, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 554/4095 (13.53%), Flows/s: 9.34   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-05-12T10:57:03Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-ms4q6 -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                          IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                
	 169        Disabled           Disabled          13818      k8s:io.cilium.k8s.policy.cluster=default                                             fd02::146   10.0.1.123   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                       
	                                                            k8s:io.kubernetes.pod.namespace=202105121057k8sdatapathconfighostfirewallwithvxlan                                    
	                                                            k8s:zgroup=testClient                                                                                                 
	 345        Disabled           Disabled          8112       k8s:app=grafana                                                                      fd02::192   10.0.1.81    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                              
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                       
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                     
	 585        Disabled           Disabled          35832      k8s:io.cilium.k8s.policy.cluster=default                                             fd02::1f1   10.0.1.44    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                       
	                                                            k8s:io.kubernetes.pod.namespace=202105121057k8sdatapathconfighostfirewallwithvxlan                                    
	                                                            k8s:zgroup=testServer                                                                                                 
	 635        Disabled           Disabled          54502      k8s:app=prometheus                                                                   fd02::11f   10.0.1.9     ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                              
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                     
	 1598       Disabled           Disabled          4          reserved:health                                                                      fd02::116   10.0.1.8     ready   
	 1829       Enabled            Enabled           1          k8s:cilium.io/ci-node=k8s2                                                                                    ready   
	                                                            k8s:status=lockdown                                                                                                   
	                                                            reserved:host                                                                                                         
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-qsx99 -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19 (v1.19.10) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 192.168.36.11 fd04::11 (Direct Routing), enp0s3 10.0.2.15 fd04::11]
	 Cilium:                 Ok   1.10.90 (v1.10.90-552ec87)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8 [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      35/35 healthy
	 Proxy Status:           OK, ip 10.0.0.192, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 1527/4095 (37.29%), Flows/s: 10.74   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-05-12T10:57:05Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-qsx99 -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                          IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                               
	 86         Disabled           Disabled          4          reserved:health                                                                      fd02::b7   10.0.0.188   ready   
	 345        Disabled           Disabled          35832      k8s:io.cilium.k8s.policy.cluster=default                                             fd02::23   10.0.0.100   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                      
	                                                            k8s:io.kubernetes.pod.namespace=202105121057k8sdatapathconfighostfirewallwithvxlan                                   
	                                                            k8s:zgroup=testServer                                                                                                
	 1775       Disabled           Disabled          13818      k8s:io.cilium.k8s.policy.cluster=default                                             fd02::da   10.0.0.110   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                      
	                                                            k8s:io.kubernetes.pod.namespace=202105121057k8sdatapathconfighostfirewallwithvxlan                                   
	                                                            k8s:zgroup=testClient                                                                                                
	 2654       Disabled           Disabled          5026       k8s:io.cilium.k8s.policy.cluster=default                                             fd02::f5   10.0.0.46    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                      
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                          
	                                                            k8s:k8s-app=kube-dns                                                                                                 
	 3828       Enabled            Enabled           1          k8s:cilium.io/ci-node=k8s1                                                                                   ready   
	                                                            k8s:node-role.kubernetes.io/master                                                                                   
	                                                            reserved:host                                                                                                        
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
10:57:44 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall
10:57:44 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
10:57:44 STEP: Deleting deployment demo_hostfw.yaml
10:57:44 STEP: Deleting namespace 202105121057k8sdatapathconfighostfirewallwithvxlan
10:57:59 STEP: Running AfterEach for block EntireTestsuite

Metadata


Labels

area/CI: Continuous Integration testing issue or flake.
area/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.
area/host-firewall: Impacts the host firewall or the host endpoint.
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
