CI: K8sDatapathConfig Check BPF masquerading with ip-masq-agent VXLAN #21120

@maintainer-s-little-helper

Description

Test Name

K8sDatapathConfig Check BPF masquerading with ip-masq-agent VXLAN

Failure Output

FAIL: Failed to add ip route

See the stacktrace below: the test code somehow constructs a bad 'ip route add ... via <gateway>' command that is missing the destination prefix, so iproute2 rejects it with exit code 255 and prints its usage text.
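
The double space between 'add' and 'via' in the failing command points at an empty string being interpolated where the destination subnet belongs. Below is a minimal Go sketch of that failure mode plus an up-front guard; the helper addIPRoute and its parameters are hypothetical stand-ins, not the actual code at test/k8s/datapath_configuration.go:1031.

```go
package main

import (
	"fmt"
	"strings"
)

// addIPRoute is a hypothetical stand-in for the test helper that renders
// "kubectl exec ... -- ip route add <subnet> via <gw>". With an empty subnet,
// the rendered command becomes "ip route add  via <gw>", which iproute2
// rejects with exit code 255 and the usage text seen in the stacktrace.
func addIPRoute(pod, subnet, gw string) (string, error) {
	if strings.TrimSpace(subnet) == "" {
		// Fail fast at the source of the bad input instead of letting the
		// node run a malformed command and report a confusing usage dump.
		return "", fmt.Errorf("refusing to add route via %s: empty destination subnet", gw)
	}
	return fmt.Sprintf("kubectl exec -n kube-system %s -- ip route add %s via %s",
		pod, subnet, gw), nil
}

func main() {
	// An unset subnet variable reproduces the malformed command shape.
	if _, err := addIPRoute("log-gatherer-9r72s", "", "192.168.56.12"); err != nil {
		fmt.Println("caught before shelling out:", err)
	}
}
```

Whatever populates that subnet in the real test (presumably a node or pod CIDR looked up at runtime) evidently came back empty, so the fix is either to make that lookup failure fatal or to guard the route helper as sketched above.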

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Failed to add ip route
Expected command: kubectl exec -n kube-system log-gatherer-9r72s -- ip route add  via 192.168.56.12 
To succeed, but it failed:
Exitcode: 255 
Err: exit status 255
Stdout:
 	 
Stderr:
 	 Usage: ip route { list | flush } SELECTOR
	        ip route save SELECTOR
	        ip route restore
	        ip route showdump
	        ip route get [ ROUTE_GET_FLAGS ] ADDRESS
	                             [ from ADDRESS iif STRING ]
	                             [ oif STRING ] [ tos TOS ]
	                             [ mark NUMBER ] [ vrf NAME ]
	                             [ uid NUMBER ] [ ipproto PROTOCOL ]
	                             [ sport NUMBER ] [ dport NUMBER ]
	        ip route { add | del | change | append | replace } ROUTE
	 SELECTOR := [ root PREFIX ] [ match PREFIX ] [ exact PREFIX ]
	             [ table TABLE_ID ] [ vrf NAME ] [ proto RTPROTO ]
	             [ type TYPE ] [ scope SCOPE ]
	 ROUTE := NODE_SPEC [ INFO_SPEC ]
	 NODE_SPEC := [ TYPE ] PREFIX [ tos TOS ]
	              [ table TABLE_ID ] [ proto RTPROTO ]
	              [ scope SCOPE ] [ metric METRIC ]
	              [ ttl-propagate { enabled | disabled } ]
	 INFO_SPEC := { NH | nhid ID } OPTIONS FLAGS [ nexthop NH ]...
	 NH := [ encap ENCAPTYPE ENCAPHDR ] [ via [ FAMILY ] ADDRESS ]
	 	    [ dev STRING ] [ weight NUMBER ] NHFLAGS
	 FAMILY := [ inet | inet6 | mpls | bridge | link ]
	 OPTIONS := FLAGS [ mtu NUMBER ] [ advmss NUMBER ] [ as [ to ] ADDRESS ]
	            [ rtt TIME ] [ rttvar TIME ] [ reordering NUMBER ]
	            [ window NUMBER ] [ cwnd NUMBER ] [ initcwnd NUMBER ]
	            [ ssthresh NUMBER ] [ realms REALM ] [ src ADDRESS ]
	            [ rto_min TIME ] [ hoplimit NUMBER ] [ initrwnd NUMBER ]
	            [ features FEATURES ] [ quickack BOOL ] [ congctl NAME ]
	            [ pref PREF ] [ expires TIME ] [ fastopen_no_cookie BOOL ]
	 TYPE := { unicast | local | broadcast | multicast | throw |
	           unreachable | prohibit | blackhole | nat }
	 TABLE_ID := [ local | main | default | all | NUMBER ]
	 SCOPE := [ host | link | global | NUMBER ]
	 NHFLAGS := [ onlink | pervasive ]
	 RTPROTO := [ kernel | boot | static | NUMBER ]
	 PREF := [ low | medium | high ]
	 TIME := NUMBER[s|ms]
	 BOOL := [1|0]
	 FEATURES := ecn
	 ENCAPTYPE := [ mpls | ip | ip6 | seg6 | seg6local ]
	 ENCAPHDR := [ MPLSLABEL | SEG6HDR ]
	 SEG6HDR := [ mode SEGMODE ] segs ADDR1,ADDRi,ADDRn [hmac HMACKEYID] [cleanup]
	 SEGMODE := [ encap | inline ]
	 ROUTE_GET_FLAGS := [ fibmatch ]
	 command terminated with exit code 255
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-net-next/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:1031

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Key allocation attempt failed
Cilium pods: [cilium-ncgzg cilium-rkxqw]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
prometheus-7c8c9684bb-xgfbp   false     false
test-k8s2-6475f4b759-btvjm    false     false
testclient-2-d4lf7            false     false
testclient-2-lpbkl            false     false
testclient-67bg7              false     false
grafana-59957b9549-p2l6f      false     false
testclient-k8vnz              false     false
testds-5svpw                  false     false
testds-86v4m                  false     false
coredns-567b6dd84-lrtpj       false     false
Cilium agent 'cilium-ncgzg': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 43 Failed 0
Cilium agent 'cilium-rkxqw': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 41 Failed 0


Standard Error

23:45:20 STEP: Installing Cilium
23:45:22 STEP: Waiting for Cilium to become ready
23:45:41 STEP: Validating if Kubernetes DNS is deployed
23:45:41 STEP: Checking if deployment is ready
23:45:41 STEP: Checking if kube-dns service is plumbed correctly
23:45:41 STEP: Checking if pods have identity
23:45:41 STEP: Checking if DNS can resolve
23:45:56 STEP: Kubernetes DNS is not ready: unable to resolve service name kubernetes.default.svc.cluster.local with DNS server 10.96.0.10 by running 'dig +short kubernetes.default.svc.cluster.local @10.96.0.10' Cilium pod: Exitcode: 9 
Err: exit status 9
Stdout:
 	 ;; connection timed out; no servers could be reached
	 
	 
Stderr:
 	 command terminated with exit code 9
	 

23:45:56 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
23:45:57 STEP: Waiting for Kubernetes DNS to become operational
23:45:57 STEP: Checking if deployment is ready
23:45:57 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
23:45:58 STEP: Checking if deployment is ready
23:45:58 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
23:45:59 STEP: Checking if deployment is ready
23:45:59 STEP: Checking if kube-dns service is plumbed correctly
23:45:59 STEP: Checking if DNS can resolve
23:45:59 STEP: Checking if pods have identity
23:45:59 STEP: Validating Cilium Installation
23:45:59 STEP: Performing Cilium controllers preflight check
23:45:59 STEP: Performing Cilium health check
23:45:59 STEP: Performing Cilium status preflight check
23:45:59 STEP: Checking whether host EP regenerated
23:46:00 STEP: Performing Cilium service preflight check
23:46:02 STEP: Cilium is not ready yet: cilium services are not set up correctly: Error validating Cilium service on pod {cilium-ncgzg [{0xc00067f940 0xc0004e2a20} {0xc00067fb40 0xc0004e2a28} {0xc00067fc40 0xc0004e2a30} {0xc00067fdc0 0xc0004e2a38} {0xc00067ff40 0xc0004e2a40} {0xc0015ffb80 0xc0004e2a48}] map[10.106.143.148:443:[192.168.56.11:4244 (6) (1) 192.168.56.12:4244 (6) (2) 0.0.0.0:0 (6) (0) [ClusterIP, non-routable]] 10.107.156.130:9090:[0.0.0.0:0 (5) (0) [ClusterIP, non-routable] 10.0.0.253:9090 (5) (1)] 10.108.129.8:3000:[10.0.0.207:3000 (4) (1) 0.0.0.0:0 (4) (0) [ClusterIP, non-routable]] 10.96.0.10:53:[10.0.1.143:53 (2) (1) 10.0.0.49:53 (2) (2) 0.0.0.0:0 (2) (0) [ClusterIP, non-routable]] 10.96.0.10:9153:[10.0.1.143:9153 (3) (1) 0.0.0.0:0 (3) (0) [ClusterIP, non-routable] 10.0.0.49:9153 (3) (2)] 10.96.0.1:443:[0.0.0.0:0 (1) (0) [ClusterIP, non-routable] 192.168.56.11:6443 (1) (1)]]}: Could not match cilium service backend address 10.0.0.49:9153 with k8s endpoint
23:46:02 STEP: Performing Cilium controllers preflight check
23:46:02 STEP: Performing Cilium status preflight check
23:46:02 STEP: Performing Cilium health check
23:46:02 STEP: Checking whether host EP regenerated
23:46:03 STEP: Performing Cilium service preflight check
23:46:04 STEP: Cilium is not ready yet: cilium services are not set up correctly: Error validating Cilium service on pod {cilium-ncgzg [{0xc0040dc940 0xc0004e2ba0} {0xc0040dca80 0xc0004e2ba8} {0xc0040dcb80 0xc0004e2bb0} {0xc0040dcc80 0xc0004e2bb8} {0xc0040dcd80 0xc0004e2bc0} {0xc0040dce80 0xc0004e2bc8}] map[10.106.143.148:443:[192.168.56.11:4244 (6) (1) 192.168.56.12:4244 (6) (2) 0.0.0.0:0 (6) (0) [ClusterIP, non-routable]] 10.107.156.130:9090:[0.0.0.0:0 (5) (0) [ClusterIP, non-routable] 10.0.0.253:9090 (5) (1)] 10.108.129.8:3000:[10.0.0.207:3000 (4) (1) 0.0.0.0:0 (4) (0) [ClusterIP, non-routable]] 10.96.0.10:53:[10.0.1.143:53 (2) (1) 10.0.0.49:53 (2) (2) 0.0.0.0:0 (2) (0) [ClusterIP, non-routable]] 10.96.0.10:9153:[10.0.1.143:9153 (3) (1) 0.0.0.0:0 (3) (0) [ClusterIP, non-routable] 10.0.0.49:9153 (3) (2)] 10.96.0.1:443:[0.0.0.0:0 (1) (0) [ClusterIP, non-routable] 192.168.56.11:6443 (1) (1)]]}: Could not match cilium service backend address 10.0.0.49:53 with k8s endpoint
23:46:04 STEP: Performing Cilium controllers preflight check
23:46:04 STEP: Performing Cilium status preflight check
23:46:04 STEP: Performing Cilium health check
23:46:04 STEP: Checking whether host EP regenerated
23:46:05 STEP: Performing Cilium service preflight check
23:46:05 STEP: Performing K8s service preflight check
23:46:06 STEP: Waiting for cilium-operator to be ready
23:46:07 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
23:46:07 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
23:46:07 STEP: Making sure all endpoints are in ready state
23:46:08 STEP: Creating namespace 202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag
23:46:08 STEP: Deploying demo_ds.yaml in namespace 202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag
23:46:08 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-net-next/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
23:46:11 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
23:46:11 STEP: WaitforNPods(namespace="202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag", filter="")
23:46:11 STEP: WaitforNPods(namespace="202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag", filter="") => <nil>
23:46:11 STEP: Making ten curl requests from "testclient-67bg7" to "http://192.168.56.13:80"
23:46:12 STEP: Making ten curl requests from "testclient-k8vnz" to "http://192.168.56.13:80"
FAIL: Failed to add ip route
Expected command: kubectl exec -n kube-system log-gatherer-9r72s -- ip route add  via 192.168.56.12 
To succeed, but it failed:
Exitcode: 255 
Err: exit status 255
Stdout:
 	 
Stderr:
 	 Usage: ip route { list | flush } SELECTOR
	        ip route save SELECTOR
	        ip route restore
	        ip route showdump
	        ip route get [ ROUTE_GET_FLAGS ] ADDRESS
	                             [ from ADDRESS iif STRING ]
	                             [ oif STRING ] [ tos TOS ]
	                             [ mark NUMBER ] [ vrf NAME ]
	                             [ uid NUMBER ] [ ipproto PROTOCOL ]
	                             [ sport NUMBER ] [ dport NUMBER ]
	        ip route { add | del | change | append | replace } ROUTE
	 SELECTOR := [ root PREFIX ] [ match PREFIX ] [ exact PREFIX ]
	             [ table TABLE_ID ] [ vrf NAME ] [ proto RTPROTO ]
	             [ type TYPE ] [ scope SCOPE ]
	 ROUTE := NODE_SPEC [ INFO_SPEC ]
	 NODE_SPEC := [ TYPE ] PREFIX [ tos TOS ]
	              [ table TABLE_ID ] [ proto RTPROTO ]
	              [ scope SCOPE ] [ metric METRIC ]
	              [ ttl-propagate { enabled | disabled } ]
	 INFO_SPEC := { NH | nhid ID } OPTIONS FLAGS [ nexthop NH ]...
	 NH := [ encap ENCAPTYPE ENCAPHDR ] [ via [ FAMILY ] ADDRESS ]
	 	    [ dev STRING ] [ weight NUMBER ] NHFLAGS
	 FAMILY := [ inet | inet6 | mpls | bridge | link ]
	 OPTIONS := FLAGS [ mtu NUMBER ] [ advmss NUMBER ] [ as [ to ] ADDRESS ]
	            [ rtt TIME ] [ rttvar TIME ] [ reordering NUMBER ]
	            [ window NUMBER ] [ cwnd NUMBER ] [ initcwnd NUMBER ]
	            [ ssthresh NUMBER ] [ realms REALM ] [ src ADDRESS ]
	            [ rto_min TIME ] [ hoplimit NUMBER ] [ initrwnd NUMBER ]
	            [ features FEATURES ] [ quickack BOOL ] [ congctl NAME ]
	            [ pref PREF ] [ expires TIME ] [ fastopen_no_cookie BOOL ]
	 TYPE := { unicast | local | broadcast | multicast | throw |
	           unreachable | prohibit | blackhole | nat }
	 TABLE_ID := [ local | main | default | all | NUMBER ]
	 SCOPE := [ host | link | global | NUMBER ]
	 NHFLAGS := [ onlink | pervasive ]
	 RTPROTO := [ kernel | boot | static | NUMBER ]
	 PREF := [ low | medium | high ]
	 TIME := NUMBER[s|ms]
	 BOOL := [1|0]
	 FEATURES := ecn
	 ENCAPTYPE := [ mpls | ip | ip6 | seg6 | seg6local ]
	 ENCAPHDR := [ MPLSLABEL | SEG6HDR ]
	 SEG6HDR := [ mode SEGMODE ] segs ADDR1,ADDRi,ADDRn [hmac HMACKEYID] [cleanup]
	 SEGMODE := [ encap | inline ]
	 ROUTE_GET_FLAGS := [ fibmatch ]
	 command terminated with exit code 255
	 

=== Test Finished at 2022-08-26T23:46:12Z====
23:46:12 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
23:46:13 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                              READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   test-k8s2-6475f4b759-btvjm        2/2     Running   0          6s      10.0.1.234      k8s2   <none>           <none>
	 202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   testclient-2-d4lf7                1/1     Running   0          6s      10.0.1.218      k8s2   <none>           <none>
	 202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   testclient-2-lpbkl                1/1     Running   0          6s      10.0.0.19       k8s1   <none>           <none>
	 202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   testclient-67bg7                  1/1     Running   0          6s      10.0.0.122      k8s1   <none>           <none>
	 202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   testclient-k8vnz                  1/1     Running   0          6s      10.0.1.245      k8s2   <none>           <none>
	 202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   testds-5svpw                      2/2     Running   0          6s      10.0.1.94       k8s2   <none>           <none>
	 202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   testds-86v4m                      2/2     Running   0          6s      10.0.0.47       k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-59957b9549-p2l6f          1/1     Running   0          52m     10.0.0.207      k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-7c8c9684bb-xgfbp       1/1     Running   0          52m     10.0.0.253      k8s1   <none>           <none>
	 default                                                           echoserver-zmmhv                  1/1     Running   0          3m29s   192.168.56.13   k8s3   <none>           <none>
	 kube-system                                                       cilium-ncgzg                      1/1     Running   0          52s     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-68bd5bc55-jd2v4   1/1     Running   0          52s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-68bd5bc55-vrc87   1/1     Running   0          52s     192.168.56.13   k8s3   <none>           <none>
	 kube-system                                                       cilium-rkxqw                      1/1     Running   0          52s     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-567b6dd84-lrtpj           1/1     Running   0          17s     10.0.1.143      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                         1/1     Running   0          61m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1               1/1     Running   0          61m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1      1/1     Running   0          61m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1               1/1     Running   0          61m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-9r72s                1/1     Running   0          52m     192.168.56.13   k8s3   <none>           <none>
	 kube-system                                                       log-gatherer-fd698                1/1     Running   0          52m     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-krkbt                1/1     Running   0          52m     192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-qw4zt              1/1     Running   0          52m     192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-rdfpb              1/1     Running   0          52m     192.168.56.13   k8s3   <none>           <none>
	 kube-system                                                       registry-adder-tfzxl              1/1     Running   0          52m     192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-ncgzg cilium-rkxqw]
cmd: kubectl exec -n kube-system cilium-ncgzg -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.25 (v1.25.0) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict   [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fe36:ee32, enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.56.11 fd04::11]
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 Cilium:                  Ok   1.12.90 (v1.12.90-3f2b5f1e)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            BPF
	 Masquerading:            BPF (ip-masq-agent)   [enp0s16, enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       43/43 healthy
	 Proxy Status:            OK, ip 10.0.0.182, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 356/65535 (0.54%), Flows/s: 6.88   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2022-08-26T23:46:05Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-ncgzg -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 207        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::62   10.0.0.185   ready   
	 537        Enabled            Disabled          28086      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   fd02::6f   10.0.0.47    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 1241       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                        
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                      
	                                                            reserved:host                                                                                                                                                    
	 1342       Disabled           Disabled          16453      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   fd02::f2   10.0.0.19    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag                                                                  
	                                                            k8s:zgroup=testDSClient2                                                                                                                                         
	 2287       Disabled           Disabled          1305       k8s:app=grafana                                                                                                                  fd02::26   10.0.0.207   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 3350       Disabled           Disabled          408        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   fd02::25   10.0.0.122   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 3960       Disabled           Disabled          23045      k8s:app=prometheus                                                                                                               fd02::49   10.0.0.253   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-rkxqw -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                 Ok   Disabled
	 Kubernetes:              Ok   1.25 (v1.25.0) [linux/amd64]
	 Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:    Strict   [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fedd:f464, enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.56.12 fd04::12]
	 Host firewall:           Disabled
	 CNI Chaining:            none
	 Cilium:                  Ok   1.12.90 (v1.12.90-3f2b5f1e)
	 NodeMonitor:             Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:    Ok   
	 IPAM:                    IPv4: 7/254 allocated from 10.0.1.0/24, IPv6: 7/254 allocated from fd02::100/120
	 IPv6 BIG TCP:            Disabled
	 BandwidthManager:        Disabled
	 Host Routing:            BPF
	 Masquerading:            BPF (ip-masq-agent)   [enp0s16, enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:       41/41 healthy
	 Proxy Status:            OK, ip 10.0.1.26, 0 redirects active on ports 10000-20000
	 Global Identity Range:   min 256, max 65535
	 Hubble:                  Ok   Current/Max Flows: 463/65535 (0.71%), Flows/s: 6.75   Metrics: Disabled
	 Encryption:              Disabled
	 Cluster health:          2/2 reachable   (2022-08-26T23:46:06Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-rkxqw -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 6          Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                                ready   
	                                                            reserved:host                                                                                                                                                     
	 233        Enabled            Disabled          28086      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   fd02::185   10.0.1.94    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 389        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::189   10.0.1.237   ready   
	 406        Disabled           Disabled          408        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   fd02::11b   10.0.1.245   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 493        Disabled           Disabled          16453      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   fd02::15d   10.0.1.218   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag                                                                   
	                                                            k8s:zgroup=testDSClient2                                                                                                                                          
	 3744       Disabled           Disabled          61781      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag   fd02::14d   10.0.1.234   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag                                                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                                                              
	 3770       Disabled           Disabled          56837      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::11a   10.0.1.143   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                       
	                                                            k8s:k8s-app=kube-dns                                                                                                                                              
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
23:46:26 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
23:46:26 STEP: Deleting deployment demo_ds.yaml
23:46:27 STEP: Deleting namespace 202208262346k8sdatapathconfigcheckbpfmasqueradingwithip-masq-ag
23:46:42 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|de053fc8_K8sDatapathConfig_Check_BPF_masquerading_with_ip-masq-agent_VXLAN.zip]]
23:46:47 STEP: Running AfterAll block for EntireTestsuite K8sDatapathConfig Check BPF masquerading with ip-masq-agent


ZIP Links:

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-net-next//15/artifact/90e0bb64_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-net-next//15/artifact/de053fc8_K8sDatapathConfig_Check_BPF_masquerading_with_ip-masq-agent_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-net-next//15/artifact/test_results_Cilium-PR-K8s-1.25-kernel-net-next_15_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-net-next/15/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

Metadata

Labels

ci/flake: This is a known failure that occurs in the tree. Please investigate me!
kind/bug/CI: This is a bug in the testing code.
