CI: K8sFQDNTest Validate that FQDN policy continues to work after being updated #18218

@maintainer-s-little-helper

Description

Test Name

K8sFQDNTest Validate that FQDN policy continues to work after being updated

Failure Output

FAIL: Can't connect to to a valid target when it should work

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Can't connect to to a valid target when it should work
Expected command: kubectl exec -n default app2-58757b7dd5-pvvvw -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 --retry 5 http://vagrant-cache.ci.cilium.io -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 28 
Err: exit status 28
Stdout:
 	 time-> DNS: '0.000016()', Connect: '0.000000',Transfer '0.000000', total '5.000836'
Stderr:
 	 command terminated with exit code 28
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.19/src/github.com/cilium/cilium/test/k8sT/fqdn.go:317
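
For context: curl exit code 28 is CURLE_OPERATION_TIMEDOUT. The timing output above shows an empty remote IP, a zero connect time, and a total of roughly 5 seconds (the --connect-timeout value), which suggests the DNS lookup through Cilium's DNS proxy never completed after the policy update, rather than the connection being actively refused. As a hedged sketch only (the actual fqdn-proxy-policy.yaml manifest is not included in this report), a toFQDNs egress policy of the kind this test updates generally looks like the following; the policy name comes from the "CiliumNetworkPolicies loaded" line below, the selector label from the endpoint list, and the hostname from the failing curl command, while everything else is illustrative.

# Hedged sketch, not the test's actual manifest: a toFQDNs egress policy
# that routes DNS through Cilium's proxy and allows the test target.
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: fqdn-proxy-policy        # name per "CiliumNetworkPolicies loaded" below
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      id: app2                   # app2 is the client pod in this test run
  egress:
  - toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"      # proxy all DNS so FQDN-to-IP mappings are learned
  - toFQDNs:
    - matchName: "vagrant-cache.ci.cilium.io"
EOF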

Standard Output

Cilium pods: [cilium-9gp4w cilium-vpsn6]
Netpols loaded: 
CiliumNetworkPolicies loaded: default::fqdn-proxy-policy.yaml 
Endpoint Policy Enforcement:
Pod   Ingress   Egress
Cilium agent 'cilium-9gp4w': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 37 Failed 0
Cilium agent 'cilium-vpsn6': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0


Standard Error

03:10:39 STEP: Validating APP2 policy connectivity
03:10:44 STEP: Updating the policy to include an extra FQDN allow statement
03:10:45 STEP: Validating APP2 policy connectivity after policy change
FAIL: Can't connect to to a valid target when it should work
Expected command: kubectl exec -n default app2-58757b7dd5-pvvvw -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 --retry 5 http://vagrant-cache.ci.cilium.io -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 28 
Err: exit status 28
Stdout:
 	 time-> DNS: '0.000016()', Connect: '0.000000',Transfer '0.000000', total '5.000836'
Stderr:
 	 command terminated with exit code 28
	 

=== Test Finished at 2021-12-09T03:11:46Z====
===================== TEST FAILED =====================
03:11:46 STEP: Running AfterFailed block for EntireTestsuite K8sFQDNTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-5747bcc8f9-bg6w8           1/1     Running   0          2m34s   10.0.1.137      k8s2   <none>           <none>
	 cilium-monitoring   prometheus-655fb888d7-r5ssj        1/1     Running   0          2m34s   10.0.1.199      k8s2   <none>           <none>
	 default             app1-6bf9bf9bd5-gm5fv              2/2     Running   0          87s     10.0.0.246      k8s1   <none>           <none>
	 default             app1-6bf9bf9bd5-q7csp              2/2     Running   0          87s     10.0.0.46       k8s1   <none>           <none>
	 default             app2-58757b7dd5-pvvvw              1/1     Running   0          87s     10.0.0.24       k8s1   <none>           <none>
	 default             app3-5d69599cdd-rvrg6              1/1     Running   0          87s     10.0.0.161      k8s1   <none>           <none>
	 kube-system         cilium-9gp4w                       1/1     Running   0          2m31s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-operator-7ff47c6d9b-c8z7w   1/1     Running   0          2m31s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-7ff47c6d9b-nqkp7   1/1     Running   0          2m31s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-vpsn6                       1/1     Running   0          2m31s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         coredns-69b675786c-bcmnk           1/1     Running   0          96s     10.0.1.98       k8s2   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0          5m16s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0          5m16s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0          5m16s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-8ptvj                   1/1     Running   0          3m26s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         kube-proxy-fhlf2                   1/1     Running   0          4m59s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0          5m15s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-85hkq                 1/1     Running   0          2m37s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-qvg2z                 1/1     Running   0          2m37s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         registry-adder-86vbk               1/1     Running   0          3m18s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-9s6md               1/1     Running   0          3m18s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-9gp4w cilium-vpsn6]
cmd: kubectl exec -n kube-system cilium-9gp4w -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend              Service Type   Backend                   
	 1    10.96.0.10:53         ClusterIP      1 => 10.0.1.98:53         
	 2    10.96.0.10:9153       ClusterIP      1 => 10.0.1.98:9153       
	 3    10.103.50.196:3000    ClusterIP      1 => 10.0.1.137:3000      
	 4    10.100.178.174:9090   ClusterIP      1 => 10.0.1.199:9090      
	 5    10.96.0.1:443         ClusterIP      1 => 192.168.56.11:6443   
	 6    10.101.117.150:80     ClusterIP      1 => 10.0.0.46:80         
	                                           2 => 10.0.0.246:80        
	 7    10.101.117.150:69     ClusterIP      1 => 10.0.0.46:69         
	                                           2 => 10.0.0.246:69        
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-9gp4w -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                              IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                   
	 129        Disabled           Disabled          30402      k8s:id=app1                                                              fd02::42   10.0.0.46    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                 
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                                     
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                  
	                                                            k8s:zgroup=testapp                                                                                       
	 323        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                       ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                
	                                                            k8s:node-role.kubernetes.io/master                                                                       
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                              
	                                                            reserved:host                                                                                            
	 441        Disabled           Disabled          30402      k8s:id=app1                                                              fd02::4f   10.0.0.246   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                 
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                                     
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                  
	                                                            k8s:zgroup=testapp                                                                                       
	 515        Disabled           Enabled           1102       k8s:id=app3                                                              fd02::a0   10.0.0.161   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                 
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                          
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                  
	                                                            k8s:zgroup=testapp                                                                                       
	 544        Disabled           Enabled           27512      k8s:appSecond=true                                                       fd02::1a   10.0.0.24    ready   
	                                                            k8s:id=app2                                                                                              
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                 
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app2-account                                                     
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                  
	                                                            k8s:zgroup=testapp                                                                                       
	 1797       Disabled           Disabled          4          reserved:health                                                          fd02::52   10.0.0.120   ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-vpsn6 -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend              Service Type   Backend                   
	 1    10.96.0.1:443         ClusterIP      1 => 192.168.56.11:6443   
	 2    10.96.0.10:9153       ClusterIP      1 => 10.0.1.98:9153       
	 3    10.96.0.10:53         ClusterIP      1 => 10.0.1.98:53         
	 4    10.103.50.196:3000    ClusterIP      1 => 10.0.1.137:3000      
	 5    10.100.178.174:9090   ClusterIP      1 => 10.0.1.199:9090      
	 6    10.101.117.150:80     ClusterIP      1 => 10.0.0.46:80         
	                                           2 => 10.0.0.246:80        
	 7    10.101.117.150:69     ClusterIP      1 => 10.0.0.46:69         
	                                           2 => 10.0.0.246:69        
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-vpsn6 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                        IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                              
	 1          Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                  ready   
	                                                            reserved:host                                                                                                       
	 344        Disabled           Disabled          47438      k8s:app=grafana                                                                    fd02::139   10.0.1.137   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                    
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                   
	 1247       Disabled           Disabled          4          reserved:health                                                                    fd02::14c   10.0.1.51    ready   
	 1344       Disabled           Disabled          47062      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system         fd02::1f2   10.0.1.98    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                         
	                                                            k8s:k8s-app=kube-dns                                                                                                
	 3229       Disabled           Disabled          8077       k8s:app=prometheus                                                                 fd02::176   10.0.1.199   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                    
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                              
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
03:11:59 STEP: Running AfterEach for block EntireTestsuite K8sFQDNTest
03:11:59 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|ec62dcff_K8sFQDNTest_Validate_that_FQDN_policy_continues_to_work_after_being_updated.zip]]
03:12:02 STEP: Running AfterAll block for EntireTestsuite K8sFQDNTest
03:12:02 STEP: Removing Cilium installation using generated helm manifest
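
For anyone triaging a live reproduction of this flake, the DNS proxy and policy state on the agent serving the client pod can be inspected with the cilium CLI. The following is a hedged sketch that reuses names from this particular run (cilium-9gp4w is the agent on k8s1, where app2-58757b7dd5-pvvvw was scheduled); pod names will differ in other runs.

# Hedged triage sketch; the agent pod name is specific to this run.
# FQDN-to-IP mappings the DNS proxy has learned:
kubectl exec -n kube-system cilium-9gp4w -c cilium-agent -- cilium fqdn cache list
# Selector cache, including toFQDNs selectors and the identities they resolve to:
kubectl exec -n kube-system cilium-9gp4w -c cilium-agent -- cilium policy selectors
# Live drop events, e.g. DNS requests denied after the policy update:
kubectl exec -n kube-system cilium-9gp4w -c cilium-agent -- cilium monitor --type drop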


ZIP Links:

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//92/artifact/7fcfe7ee_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//92/artifact/ec62dcff_K8sFQDNTest_Validate_that_FQDN_policy_continues_to_work_after_being_updated.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19//92/artifact/test_results_Cilium-PR-K8s-1.22-kernel-4.19_92_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.19/92/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

Metadata

Labels

ci/flake: This is a known failure that occurs in the tree. Please investigate me!
