
v1.11: CI: K8sDatapathConfig Encapsulation Check connectivity with sockops and VXLAN encapsulation #17069

@maintainer-s-little-helper

Description

Test Name

K8sDatapathConfig Encapsulation Check connectivity with sockops and VXLAN encapsulation

Failure Output

FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-zdm75 policy revision: cannot get revision from json: could not parse JSON from command "kubectl exec -n kube-system cilium-zdm75 -- cilium policy get -o json"

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-zdm75 policy revision: cannot get revision from json: could not parse JSON from command "kubectl exec -n kube-system cilium-zdm75 -- cilium policy get -o json"
unexpected end of JSON input

Expected
    <*errors.errorString | 0xc00099dbe0>: {
        s: "Cannot retrieve cilium pod cilium-zdm75 policy revision: cannot get revision from json: could not parse JSON from command \"kubectl exec -n kube-system cilium-zdm75 -- cilium policy get -o json\"\nunexpected end of JSON input\n",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/DatapathConfiguration.go:1039
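Note: the trailing "unexpected end of JSON input" is the error Go's encoding/json returns when given empty input, so the likely trigger is the `kubectl exec` call coming back with no output at all (for example, the agent not yet serving its policy API) rather than the agent emitting malformed JSON. A minimal sketch of that failure mode, not the test suite's actual helper:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Simulate "cilium policy get -o json" producing no output at all.
	out := []byte("")

	var policy map[string]interface{}
	if err := json.Unmarshal(out, &policy); err != nil {
		fmt.Println(err) // prints: unexpected end of JSON input
	}
}
```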

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Cilium pods: [cilium-kr2tf cilium-zdm75]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202108051349k8sdatapathconfigencapsulationcheckconnectivitywith::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
testds-24frc                           
testds-647jj                           
coredns-867bf6789f-6c8cw               
test-k8s2-79ff876c9d-b276p             
testclient-6892j                       
testclient-jk2sz                       
Cilium agent 'cilium-kr2tf': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0
Cilium agent 'cilium-zdm75': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 34 Failed 0


Standard Error

13:48:36 STEP: Installing Cilium
13:48:37 STEP: Waiting for Cilium to become ready
13:49:22 STEP: Validating if Kubernetes DNS is deployed
13:49:22 STEP: Checking if deployment is ready
13:49:23 STEP: Checking if kube-dns service is plumbed correctly
13:49:23 STEP: Checking if pods have identity
13:49:23 STEP: Checking if DNS can resolve
13:49:24 STEP: Kubernetes DNS is up and operational
13:49:24 STEP: Validating Cilium Installation
13:49:24 STEP: Performing Cilium controllers preflight check
13:49:24 STEP: Performing Cilium status preflight check
13:49:24 STEP: Performing Cilium health check
13:49:26 STEP: Performing Cilium service preflight check
13:49:26 STEP: Performing K8s service preflight check
13:49:27 STEP: Waiting for cilium-operator to be ready
13:49:27 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
13:49:27 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
13:49:27 STEP: Making sure all endpoints are in ready state
13:49:28 STEP: Checking that BPF tunnels are in place
13:49:29 STEP: Creating namespace 202108051349k8sdatapathconfigencapsulationcheckconnectivitywith
13:49:29 STEP: Deploying demo_ds.yaml in namespace 202108051349k8sdatapathconfigencapsulationcheckconnectivitywith
13:49:30 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
13:49:33 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
13:49:33 STEP: WaitforNPods(namespace="202108051349k8sdatapathconfigencapsulationcheckconnectivitywith", filter="")
13:49:40 STEP: WaitforNPods(namespace="202108051349k8sdatapathconfigencapsulationcheckconnectivitywith", filter="") => <nil>
13:49:40 STEP: Checking pod connectivity between nodes
13:49:40 STEP: WaitforPods(namespace="202108051349k8sdatapathconfigencapsulationcheckconnectivitywith", filter="-l zgroup=testDSClient")
13:49:41 STEP: WaitforPods(namespace="202108051349k8sdatapathconfigencapsulationcheckconnectivitywith", filter="-l zgroup=testDSClient") => <nil>
13:49:41 STEP: WaitforPods(namespace="202108051349k8sdatapathconfigencapsulationcheckconnectivitywith", filter="-l zgroup=testDS")
13:49:41 STEP: WaitforPods(namespace="202108051349k8sdatapathconfigencapsulationcheckconnectivitywith", filter="-l zgroup=testDS") => <nil>
13:49:46 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
FAIL: Error creating resource /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml: Cannot retrieve cilium pod cilium-zdm75 policy revision: cannot get revision from json: could not parse JSON from command "kubectl exec -n kube-system cilium-zdm75 -- cilium policy get -o json"
unexpected end of JSON input

Expected
    <*errors.errorString | 0xc00099dbe0>: {
        s: "Cannot retrieve cilium pod cilium-zdm75 policy revision: cannot get revision from json: could not parse JSON from command \"kubectl exec -n kube-system cilium-zdm75 -- cilium policy get -o json\"\nunexpected end of JSON input\n",
    }
to be nil
=== Test Finished at 2021-08-05T13:49:46Z====
13:49:46 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
13:49:47 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 202108051349k8sdatapathconfigencapsulationcheckconnectivitywith   test-k8s2-79ff876c9d-b276p         2/2     Running   0          21s     10.0.1.201      k8s2   <none>           <none>
	 202108051349k8sdatapathconfigencapsulationcheckconnectivitywith   testclient-6892j                   1/1     Running   0          21s     10.0.1.156      k8s2   <none>           <none>
	 202108051349k8sdatapathconfigencapsulationcheckconnectivitywith   testclient-jk2sz                   1/1     Running   0          21s     10.0.0.19       k8s1   <none>           <none>
	 202108051349k8sdatapathconfigencapsulationcheckconnectivitywith   testds-24frc                       2/2     Running   0          21s     10.0.0.209      k8s1   <none>           <none>
	 202108051349k8sdatapathconfigencapsulationcheckconnectivitywith   testds-647jj                       2/2     Running   0          21s     10.0.1.55       k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-d69c97b9b-rl4vz            0/1     Running   0          11m     10.0.0.212      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-655fb888d7-qtqqt        1/1     Running   0          11m     10.0.1.159      k8s1   <none>           <none>
	 kube-system                                                       cilium-kr2tf                       1/1     Running   0          73s     192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-5d996b5bdd-b8smk   1/1     Running   0          73s     192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-5d996b5bdd-lfz82   1/1     Running   0          73s     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-zdm75                       1/1     Running   0          73s     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-867bf6789f-6c8cw           1/1     Running   0          3m20s   10.0.0.242      k8s1   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          18m     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          18m     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   0          18m     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-kxrhf                   1/1     Running   0          16m     192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-zx26s                   1/1     Running   0          18m     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   0          18m     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-5zhns                 1/1     Running   0          11m     192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-65gsx                 1/1     Running   0          11m     192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-jnd2c               1/1     Running   0          16m     192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-v8xpt               1/1     Running   0          16m     192.168.36.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-kr2tf cilium-zdm75]
cmd: kubectl exec -n kube-system cilium-kr2tf -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19 (v1.19.13) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.36.12 fd04::12 (Direct Routing)]
	 Cilium:                 Ok   1.10.90 (v1.10.90-2624ffd)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      33/33 healthy
	 Proxy Status:           OK, ip 10.0.1.10, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 442/4095 (10.79%), Flows/s: 4.47   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-08-05T13:49:26Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-kr2tf -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                             
	 39         Enabled            Disabled          9754       k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::11b   10.0.1.55    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202108051349k8sdatapathconfigencapsulationcheckconnectivitywith                                    
	                                                            k8s:zgroup=testDS                                                                                                                  
	 135        Disabled           Disabled          4          reserved:health                                                                                   fd02::14a   10.0.1.154   ready   
	 528        Disabled           Disabled          23439      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1a9   10.0.1.201   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202108051349k8sdatapathconfigencapsulationcheckconnectivitywith                                    
	                                                            k8s:zgroup=test-k8s2                                                                                                               
	 1520       Disabled           Disabled          27044      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1b7   10.0.1.156   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202108051349k8sdatapathconfigencapsulationcheckconnectivitywith                                    
	                                                            k8s:zgroup=testDSClient                                                                                                            
	 1670       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                 ready   
	                                                            reserved:host                                                                                                                      
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-zdm75 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19 (v1.19.13) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.36.11 fd04::11 (Direct Routing)]
	 Cilium:                 Ok   1.10.90 (v1.10.90-2624ffd)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      34/34 healthy
	 Proxy Status:           OK, ip 10.0.0.195, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 609/4095 (14.87%), Flows/s: 7.68   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-08-05T13:49:27Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-zdm75 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                            
	 562        Disabled           Disabled          4          reserved:health                                                                                   fd02::d8   10.0.0.153   ready   
	 741        Disabled           Disabled          27044      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::60   10.0.0.19    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202108051349k8sdatapathconfigencapsulationcheckconnectivitywith                                   
	                                                            k8s:zgroup=testDSClient                                                                                                           
	 1277       Disabled           Disabled          33572      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::28   10.0.0.242   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                       
	                                                            k8s:k8s-app=kube-dns                                                                                                              
	 2481       Enabled            Disabled          9754       k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::ae   10.0.0.209   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202108051349k8sdatapathconfigencapsulationcheckconnectivitywith                                   
	                                                            k8s:zgroup=testDS                                                                                                                 
	 3141       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/master                                                                                                
	                                                            reserved:host                                                                                                                     
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
13:50:14 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
13:50:14 STEP: Deleting deployment demo_ds.yaml
13:50:15 STEP: Deleting namespace 202108051349k8sdatapathconfigencapsulationcheckconnectivitywith
13:50:29 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|1a4b6a5a_K8sDatapathConfig_Encapsulation_Check_connectivity_with_sockops_and_VXLAN_encapsulation.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4/768/artifact/1a4b6a5a_K8sDatapathConfig_Encapsulation_Check_connectivity_with_sockops_and_VXLAN_encapsulation.zip/1a4b6a5a_K8sDatapathConfig_Encapsulation_Check_connectivity_with_sockops_and_VXLAN_encapsulation.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4/768/artifact/test_results_Cilium-PR-K8s-1.19-kernel-5.4_768_BDD-Test-PR.zip/test_results_Cilium-PR-K8s-1.19-kernel-5.4_768_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4/768/
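For manual triage of similar failures, the sketch below is a hypothetical helper (not part of the test suite) that runs the same command the test wraps and distinguishes an exec failure, empty agent output, and a successfully parsed revision. The pod name cilium-zdm75 is taken from this run, and the top-level "revision" field is assumed from the test's error message:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the failing test wraps; adjust the pod name for the run in question.
	cmd := exec.Command("kubectl", "exec", "-n", "kube-system", "cilium-zdm75",
		"-c", "cilium-agent", "--", "cilium", "policy", "get", "-o", "json")

	out, err := cmd.Output()
	if err != nil {
		fmt.Println("kubectl exec failed:", err)
		return
	}
	if len(out) == 0 {
		// This is the case that turns into "unexpected end of JSON input"
		// when the output is fed straight into json.Unmarshal.
		fmt.Println("agent returned empty output; it may still be (re)starting")
		return
	}

	// Assumption: the JSON document carries a top-level "revision" field,
	// which is what the test extracts.
	var reply struct {
		Revision int `json:"revision"`
	}
	if err := json.Unmarshal(out, &reply); err != nil {
		fmt.Println("could not parse policy JSON:", err)
		return
	}
	fmt.Println("policy revision:", reply.Revision)
}
```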

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

Labels

area/CI: Continuous Integration testing issue or flake
area/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
