CI: K8sServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) Tests NodePort inside cluster (kube-proxy)  #21279

@maintainer-s-little-helper

Description

Test Name

K8sServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) Tests NodePort inside cluster (kube-proxy) 

Failure Output

FAIL: Request from testclient-bnsmr pod to service tftp://[fd04::12]:30138/hello failed

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Request from testclient-bnsmr pod to service tftp://[fd04::12]:30138/hello failed
Expected command: kubectl exec -n default testclient-bnsmr -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::12]:30138/hello -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi' 
To succeed, but it failed:
Exitcode: 42 
Err: exit status 42
Stdout:
 	 
	 Hostname: testds-2v7tk
	 
	 Request Information:
	 	client_address=fd02::1ae
	 	client_port=49272
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14993/1 exit code: 0
	 
	 Hostname: testds-bj7qj
	 
	 Request Information:
	 	client_address=fd04::12
	 	client_port=56940
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14993/2 exit code: 0
	 
	 Hostname: testds-bj7qj
	 
	 Request Information:
	 	client_address=fd04::12
	 	client_port=64306
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14993/3 exit code: 0
	 
	 Hostname: testds-bj7qj
	 
	 Request Information:
	 	client_address=fd04::12
	 	client_port=11846
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14993/5 exit code: 0
	 
	 Hostname: testds-bj7qj
	 
	 Request Information:
	 	client_address=fd04::12
	 	client_port=35047
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14993/6 exit code: 0
	 
	 Hostname: testds-2v7tk
	 
	 Request Information:
	 	client_address=fd02::1ae
	 	client_port=37377
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14993/7 exit code: 0
	 
	 Hostname: testds-bj7qj
	 
	 Request Information:
	 	client_address=fd04::12
	 	client_port=18765
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14993/8 exit code: 0
	 
	 Hostname: testds-2v7tk
	 
	 Request Information:
	 	client_address=fd02::1ae
	 	client_port=38400
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14993/9 exit code: 0
	 
	 Hostname: testds-bj7qj
	 
	 Request Information:
	 	client_address=fd04::12
	 	client_port=5813
	 	real path=/hello
	 	request_scheme=tftp
	 
	 Test round 14993/10 exit code: 0
	 failed: :14993/4=28
	 
Stderr:
 	 command terminated with exit code 42
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/k8sT/service_helpers.go:524
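The kubectl one-liner above is easier to follow unpacked. The sketch below is a standalone reproduction of its retry/accounting logic, with the real curl invocation replaced by a hypothetical stub `fake_curl` so it runs anywhere; the stub forces round 4 to fail with 28, which is curl's exit code for "operation timed out", matching the single failed round recorded as `failed: :14993/4=28` above.

```shell
# Unpacked sketch of the test loop. fake_curl is a stand-in (hypothetical)
# for: curl --path-as-is -s --fail --connect-timeout 5 --max-time 20 <url>
fake_curl() {
  if [ "$1" -eq 4 ]; then return 28; fi   # simulate the one timed-out round
  return 0
}

run_rounds() {
  fails=""
  id=14993
  for i in $(seq 1 10); do
    if fake_curl "$i"; then
      echo "Test round $id/$i exit code: $?"
    else
      fails="$fails:$id/$i=$?"            # record round number and curl exit code
    fi
  done
  if [ -n "$fails" ]; then echo "failed: $fails"; fi
  # The original uses the bash expansion "${fails//[^:]}" to keep only the
  # ':' separators; tr -cd ':' is the portable equivalent. Non-empty means
  # at least one round failed, so the command exits 42.
  cnt=$(printf '%s' "$fails" | tr -cd ':')
  if [ -n "$cnt" ]; then return 42; fi
  return 0
}

run_rounds
echo "overall exit code: $?"
```

So the overall `Exitcode: 42` only tells you "some round failed"; the per-round exit code after `=` (here 28, a timeout) is where the actual failure mode lives.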

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-tf5nx cilium-wpznv]
Netpols loaded: 
CiliumNetworkPolicies loaded: default::allow-all-within-namespace 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
app3-5d69599cdd-4w5t4                   
testds-bj7qj                            
grafana-d69c97b9b-j7dsz                 
prometheus-655fb888d7-7tgfs             
testclient-gxmhr                        
coredns-7c74c644b-zl59r                 
app1-786c6d794d-m85tz                   
echo-8fd54d9fd-74pdc                    
test-k8s2-79ff876c9d-9c7m4              
testclient-bnsmr                        
testds-2v7tk                            
app1-786c6d794d-dtlcz                   
echo-8fd54d9fd-wbrhs                    
app2-58757b7dd5-6hf5p                   
Cilium agent 'cilium-tf5nx': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 60 Failed 0
Cilium agent 'cilium-wpznv': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 50 Failed 0


Standard Error

15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::12]:30138/hello"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://10.111.117.166:10069/hello"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.12:31192"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd03::621c]:10080"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://127.0.0.1:31192"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://10.111.117.166:10080"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.11:31192"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.12:30502/hello"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.11:30502/hello"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.56.11]:31192"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:127.0.0.1]:31192"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:127.0.0.1]:30502/hello"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd03::621c]:10069/hello"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.56.12]:31192"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::12]:31154"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://127.0.0.1:30502/hello"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.56.11]:30502/hello"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::11]:30138/hello"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.56.12]:30502/hello"
15:14:47 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::11]:31154"
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service http://[::ffff:192.168.56.12]:31192
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service tftp://[fd04::11]:30138/hello
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service tftp://[fd03::621c]:10069/hello
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service http://[fd04::12]:31154
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service tftp://[::ffff:192.168.56.11]:30502/hello
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service http://[::ffff:192.168.56.11]:31192
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service tftp://10.111.117.166:10069/hello
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service http://10.111.117.166:10080
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service http://[fd04::11]:31154
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service http://[fd03::621c]:10080
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service tftp://[::ffff:192.168.56.12]:30502/hello
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service tftp://192.168.56.12:30502/hello
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service tftp://[fd04::12]:30138/hello
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service http://192.168.56.11:31192
15:14:47 STEP: Making 10 curl requests from testclient-bnsmr pod to service tftp://192.168.56.11:30502/hello
15:14:48 STEP: Making 10 curl requests from testclient-bnsmr pod to service http://192.168.56.12:31192
15:14:48 STEP: Making 10 curl requests from testclient-gxmhr pod to service http://[::ffff:192.168.56.12]:31192
15:14:49 STEP: Making 10 curl requests from testclient-gxmhr pod to service tftp://[fd04::11]:30138/hello
15:14:49 STEP: Making 10 curl requests from testclient-gxmhr pod to service tftp://[fd03::621c]:10069/hello
15:14:49 STEP: Making 10 curl requests from testclient-gxmhr pod to service http://[fd04::12]:31154
15:14:49 STEP: Making 10 curl requests from testclient-gxmhr pod to service http://[fd04::11]:31154
15:14:49 STEP: Making 10 curl requests from testclient-gxmhr pod to service http://[fd03::621c]:10080
15:14:49 STEP: Making 10 curl requests from testclient-gxmhr pod to service http://192.168.56.12:31192
15:14:49 STEP: Making 10 curl requests from testclient-gxmhr pod to service tftp://192.168.56.12:30502/hello
15:14:49 STEP: Making 10 curl requests from testclient-gxmhr pod to service http://192.168.56.11:31192
15:14:49 STEP: Making 10 curl requests from testclient-gxmhr pod to service tftp://192.168.56.11:30502/hello
15:14:49 STEP: Making 10 curl requests from testclient-gxmhr pod to service http://10.111.117.166:10080
15:14:49 STEP: Making 10 curl requests from testclient-gxmhr pod to service tftp://[::ffff:192.168.56.12]:30502/hello
15:14:49 STEP: Making 10 curl requests from testclient-gxmhr pod to service tftp://[::ffff:192.168.56.11]:30502/hello
15:14:49 STEP: Making 10 curl requests from testclient-gxmhr pod to service tftp://10.111.117.166:10069/hello
15:14:49 STEP: Making 10 curl requests from testclient-gxmhr pod to service http://[::ffff:192.168.56.11]:31192

=== Test Finished at 2022-09-09T15:14:53Z====
15:14:53 STEP: Running JustAfterEach block for EntireTestsuite K8sServicesTest
===================== TEST FAILED =====================
15:14:53 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                              READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-d69c97b9b-j7dsz           1/1     Running   0          4m17s   10.0.1.134      k8s2   <none>           <none>
	 cilium-monitoring   prometheus-655fb888d7-7tgfs       1/1     Running   0          4m17s   10.0.1.217      k8s2   <none>           <none>
	 default             app1-786c6d794d-dtlcz             2/2     Running   0          3m22s   10.0.0.32       k8s1   <none>           <none>
	 default             app1-786c6d794d-m85tz             2/2     Running   0          3m22s   10.0.0.118      k8s1   <none>           <none>
	 default             app2-58757b7dd5-6hf5p             1/1     Running   0          3m22s   10.0.0.116      k8s1   <none>           <none>
	 default             app3-5d69599cdd-4w5t4             1/1     Running   0          3m22s   10.0.0.250      k8s1   <none>           <none>
	 default             echo-8fd54d9fd-74pdc              2/2     Running   0          3m21s   10.0.1.117      k8s2   <none>           <none>
	 default             echo-8fd54d9fd-wbrhs              2/2     Running   0          3m21s   10.0.0.146      k8s1   <none>           <none>
	 default             test-k8s2-79ff876c9d-9c7m4        2/2     Running   0          3m21s   10.0.1.96       k8s2   <none>           <none>
	 default             testclient-bnsmr                  1/1     Running   0          3m21s   10.0.0.197      k8s1   <none>           <none>
	 default             testclient-gxmhr                  1/1     Running   0          3m21s   10.0.1.248      k8s2   <none>           <none>
	 default             testds-2v7tk                      2/2     Running   0          3m21s   10.0.0.100      k8s1   <none>           <none>
	 default             testds-bj7qj                      2/2     Running   0          3m21s   10.0.1.209      k8s2   <none>           <none>
	 kube-system         cilium-operator-dcd54cb6b-cdghr   1/1     Running   0          105s    192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-operator-dcd54cb6b-jsk5d   1/1     Running   0          105s    192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-tf5nx                      1/1     Running   0          105s    192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-wpznv                      1/1     Running   0          105s    192.168.56.12   k8s2   <none>           <none>
	 kube-system         coredns-7c74c644b-zl59r           1/1     Running   0          3m35s   10.0.0.207      k8s1   <none>           <none>
	 kube-system         etcd-k8s1                         1/1     Running   0          11m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1               1/1     Running   1          11m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1      1/1     Running   2          11m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-9s4kp                  1/1     Running   0          4m59s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         kube-proxy-n4q5h                  1/1     Running   0          11m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-scheduler-k8s1               1/1     Running   2          11m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-6bb4r                1/1     Running   0          4m20s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         log-gatherer-lktgq                1/1     Running   0          4m20s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-6l7pp              1/1     Running   0          4m56s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-pmr7g              1/1     Running   0          4m56s   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-tf5nx cilium-wpznv]
cmd: kubectl exec -n kube-system cilium-tf5nx -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend               Service Type   Backend                   
	 1    10.96.0.10:53          ClusterIP      1 => 10.0.0.207:53        
	 2    10.96.0.10:9153        ClusterIP      1 => 10.0.0.207:9153      
	 3    10.109.42.113:3000     ClusterIP      1 => 10.0.1.134:3000      
	 4    10.105.118.67:9090     ClusterIP      1 => 10.0.1.217:9090      
	 6    10.96.0.1:443          ClusterIP      1 => 192.168.56.11:6443   
	 7    10.100.166.116:80      ClusterIP      1 => 10.0.0.118:80        
	                                            2 => 10.0.0.32:80         
	 8    10.100.166.116:69      ClusterIP      1 => 10.0.0.118:69        
	                                            2 => 10.0.0.32:69         
	 9    10.96.170.69:80        ClusterIP      1 => 10.0.0.100:80        
	                                            2 => 10.0.1.209:80        
	 10   10.96.170.69:69        ClusterIP      1 => 10.0.0.100:69        
	                                            2 => 10.0.1.209:69        
	 11   10.111.117.166:10080   ClusterIP      1 => 10.0.0.100:80        
	                                            2 => 10.0.1.209:80        
	 12   10.111.117.166:10069   ClusterIP      1 => 10.0.0.100:69        
	                                            2 => 10.0.1.209:69        
	 13   10.100.193.26:10069    ClusterIP      1 => 10.0.0.100:69        
	                                            2 => 10.0.1.209:69        
	 14   10.100.193.26:10080    ClusterIP      1 => 10.0.0.100:80        
	                                            2 => 10.0.1.209:80        
	 15   10.102.223.54:10080    ClusterIP      1 => 10.0.0.100:80        
	                                            2 => 10.0.1.209:80        
	 16   10.102.223.54:10069    ClusterIP      1 => 10.0.0.100:69        
	                                            2 => 10.0.1.209:69        
	 17   10.108.52.158:10069    ClusterIP      1 => 10.0.1.96:69         
	 18   10.108.52.158:10080    ClusterIP      1 => 10.0.1.96:80         
	 19   10.106.102.17:10069    ClusterIP      1 => 10.0.1.96:69         
	 20   10.106.102.17:10080    ClusterIP      1 => 10.0.1.96:80         
	 21   10.108.208.113:80      ClusterIP      1 => 10.0.0.100:80        
	                                            2 => 10.0.1.209:80        
	 22   10.97.241.32:80        ClusterIP      1 => 10.0.1.96:80         
	 23   10.101.31.149:20080    ClusterIP      1 => 10.0.0.100:80        
	                                            2 => 10.0.1.209:80        
	 24   10.101.31.149:20069    ClusterIP      1 => 10.0.0.100:69        
	                                            2 => 10.0.1.209:69        
	 25   10.99.162.123:69       ClusterIP      1 => 10.0.1.117:69        
	                                            2 => 10.0.0.146:69        
	 26   10.99.162.123:80       ClusterIP      1 => 10.0.1.117:80        
	                                            2 => 10.0.0.146:80        
	 27   [fd03::fc45]:69        ClusterIP      1 => [fd02::59]:69        
	                                            2 => [fd02::c2]:69        
	 28   [fd03::fc45]:80        ClusterIP      1 => [fd02::59]:80        
	                                            2 => [fd02::c2]:80        
	 29   [fd03::716d]:80        ClusterIP      1 => [fd02::e0]:80        
	                                            2 => [fd02::1fd]:80       
	 30   [fd03::716d]:69        ClusterIP      1 => [fd02::e0]:69        
	                                            2 => [fd02::1fd]:69       
	 31   [fd03::621c]:10080     ClusterIP      1 => [fd02::e0]:80        
	                                            2 => [fd02::1fd]:80       
	 32   [fd03::621c]:10069     ClusterIP      1 => [fd02::e0]:69        
	                                            2 => [fd02::1fd]:69       
	 33   [fd03::6750]:10080     ClusterIP      1 => [fd02::e0]:80        
	                                            2 => [fd02::1fd]:80       
	 34   [fd03::6750]:10069     ClusterIP      1 => [fd02::e0]:69        
	                                            2 => [fd02::1fd]:69       
	 35   [fd03::22c5]:10080     ClusterIP      1 => [fd02::e0]:80        
	                                            2 => [fd02::1fd]:80       
	 36   [fd03::22c5]:10069     ClusterIP      1 => [fd02::e0]:69        
	                                            2 => [fd02::1fd]:69       
	 37   [fd03::7b9d]:10080     ClusterIP      1 => [fd02::118]:80       
	 38   [fd03::7b9d]:10069     ClusterIP      1 => [fd02::118]:69       
	 39   [fd03::d5b7]:10080     ClusterIP      1 => [fd02::118]:80       
	 40   [fd03::d5b7]:10069     ClusterIP      1 => [fd02::118]:69       
	 41   [fd03::5e3c]:20069     ClusterIP      1 => [fd02::e0]:69        
	                                            2 => [fd02::1fd]:69       
	 42   [fd03::5e3c]:20080     ClusterIP      1 => [fd02::e0]:80        
	                                            2 => [fd02::1fd]:80       
	 43   [fd03::c138]:80        ClusterIP      1 => [fd02::1fa]:80       
	                                            2 => [fd02::15]:80        
	 44   [fd03::c138]:69        ClusterIP      1 => [fd02::1fa]:69       
	                                            2 => [fd02::15]:69        
	 45   10.109.104.18:80       ClusterIP      1 => 10.0.1.117:80        
	                                            2 => 10.0.0.146:80        
	 46   10.109.104.18:69       ClusterIP      1 => 10.0.1.117:69        
	                                            2 => 10.0.0.146:69        
	 47   [fd03::7c7f]:80        ClusterIP      1 => [fd02::1fa]:80       
	                                            2 => [fd02::15]:80        
	 48   [fd03::7c7f]:69        ClusterIP      1 => [fd02::1fa]:69       
	                                            2 => [fd02::15]:69        
	 49   10.106.33.228:443      ClusterIP      1 => 192.168.56.11:4244   
	                                            2 => 192.168.56.12:4244   
	 
Stderr:
 	 

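When triaging these load-balancing flakes, it helps to pull a single frontend's backends out of the wrapped `cilium service list` table to confirm the service translation was programmed as expected. `backends_of` below is a hypothetical helper, not part of the test suite; the embedded sample rows are copied from the listing above (service 32 is the `[fd03::621c]:10069` TFTP ClusterIP frontend exercised by this test).

```shell
# Sample rows copied verbatim from the "cilium service list" output above.
sample='ID   Frontend               Service Type   Backend
31   [fd03::621c]:10080     ClusterIP      1 => [fd02::e0]:80
                                           2 => [fd02::1fd]:80
32   [fd03::621c]:10069     ClusterIP      1 => [fd02::e0]:69
                                           2 => [fd02::1fd]:69'

backends_of() {
  # $1 = frontend address:port; prints that frontend's backends, one per line.
  # A new service row starts with a numeric ID; continuation rows ("2 => ...")
  # have no ID, so they inherit the most recently seen frontend.
  printf '%s\n' "$sample" | awk -v fe="$1" '
    $1 ~ /^[0-9]+$/ && $2 ~ /^\[|^[0-9]/ { cur = $2 }   # new service row
    cur == fe && /=>/                    { print $NF }   # backend column
  '
}

backends_of '[fd03::621c]:10069'
# prints:
#   [fd02::e0]:69
#   [fd02::1fd]:69
```

Both testds backends are present for the frontend, which is consistent with the Stdout above: nine of ten rounds load-balanced successfully across `testds-2v7tk` and `testds-bj7qj`, and only one round timed out.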
cmd: kubectl exec -n kube-system cilium-tf5nx -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                            IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                 
	 180        Disabled           Disabled          5982       k8s:id=app1                                            fd02::c2   10.0.0.32    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                               
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                   
	                                                            k8s:io.kubernetes.pod.namespace=default                                                
	                                                            k8s:zgroup=testapp                                                                     
	 863        Disabled           Disabled          4          reserved:health                                        fd02::cc   10.0.0.114   ready   
	 1671       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                     ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                              
	                                                            k8s:node-role.kubernetes.io/master                                                     
	                                                            reserved:host                                                                          
	 1942       Disabled           Disabled          64415      k8s:id=app3                                            fd02::1    10.0.0.250   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                               
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                        
	                                                            k8s:io.kubernetes.pod.namespace=default                                                
	                                                            k8s:zgroup=testapp                                                                     
	 2038       Disabled           Disabled          19462      k8s:io.cilium.k8s.policy.cluster=default               fd02::a3   10.0.0.197   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                        
	                                                            k8s:io.kubernetes.pod.namespace=default                                                
	                                                            k8s:zgroup=testDSClient                                                                
	 2273       Disabled           Disabled          5982       k8s:id=app1                                            fd02::59   10.0.0.118   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                               
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app1-account                                   
	                                                            k8s:io.kubernetes.pod.namespace=default                                                
	                                                            k8s:zgroup=testapp                                                                     
	 3168       Enabled            Enabled           62942      k8s:io.cilium.k8s.policy.cluster=default               fd02::15   10.0.0.146   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                        
	                                                            k8s:io.kubernetes.pod.namespace=default                                                
	                                                            k8s:name=echo                                                                          
	 3302       Disabled           Disabled          1390       k8s:appSecond=true                                     fd02::24   10.0.0.116   ready   
	                                                            k8s:id=app2                                                                            
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                               
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=app2-account                                   
	                                                            k8s:io.kubernetes.pod.namespace=default                                                
	                                                            k8s:zgroup=testapp                                                                     
	 3620       Disabled           Disabled          9797       k8s:io.cilium.k8s.policy.cluster=default               fd02::73   10.0.0.207   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                        
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                            
	                                                            k8s:k8s-app=kube-dns                                                                   
	 3707       Disabled           Disabled          18530      k8s:io.cilium.k8s.policy.cluster=default               fd02::e0   10.0.0.100   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                        
	                                                            k8s:io.kubernetes.pod.namespace=default                                                
	                                                            k8s:zgroup=testDS                                                                      
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-wpznv -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend               Service Type   Backend                   
	 2    10.96.0.1:443          ClusterIP      1 => 192.168.56.11:6443   
	 3    10.96.0.10:53          ClusterIP      1 => 10.0.0.207:53        
	 4    10.96.0.10:9153        ClusterIP      1 => 10.0.0.207:9153      
	 5    10.109.42.113:3000     ClusterIP      1 => 10.0.1.134:3000      
	 6    10.105.118.67:9090     ClusterIP      1 => 10.0.1.217:9090      
	 7    10.100.166.116:80      ClusterIP      1 => 10.0.0.118:80        
	                                            2 => 10.0.0.32:80         
	 8    10.100.166.116:69      ClusterIP      1 => 10.0.0.118:69        
	                                            2 => 10.0.0.32:69         
	 9    10.96.170.69:80        ClusterIP      1 => 10.0.0.100:80        
	                                            2 => 10.0.1.209:80        
	 10   10.96.170.69:69        ClusterIP      1 => 10.0.0.100:69        
	                                            2 => 10.0.1.209:69        
	 11   10.111.117.166:10080   ClusterIP      1 => 10.0.0.100:80        
	                                            2 => 10.0.1.209:80        
	 12   10.111.117.166:10069   ClusterIP      1 => 10.0.0.100:69        
	                                            2 => 10.0.1.209:69        
	 13   10.100.193.26:10080    ClusterIP      1 => 10.0.0.100:80        
	                                            2 => 10.0.1.209:80        
	 14   10.100.193.26:10069    ClusterIP      1 => 10.0.0.100:69        
	                                            2 => 10.0.1.209:69        
	 15   10.102.223.54:10069    ClusterIP      1 => 10.0.0.100:69        
	                                            2 => 10.0.1.209:69        
	 16   10.102.223.54:10080    ClusterIP      1 => 10.0.0.100:80        
	                                            2 => 10.0.1.209:80        
	 17   10.108.52.158:10069    ClusterIP      1 => 10.0.1.96:69         
	 18   10.108.52.158:10080    ClusterIP      1 => 10.0.1.96:80         
	 19   10.106.102.17:10080    ClusterIP      1 => 10.0.1.96:80         
	 20   10.106.102.17:10069    ClusterIP      1 => 10.0.1.96:69         
	 21   10.108.208.113:80      ClusterIP      1 => 10.0.0.100:80        
	                                            2 => 10.0.1.209:80        
	 22   10.97.241.32:80        ClusterIP      1 => 10.0.1.96:80         
	 23   10.101.31.149:20080    ClusterIP      1 => 10.0.0.100:80        
	                                            2 => 10.0.1.209:80        
	 24   10.101.31.149:20069    ClusterIP      1 => 10.0.0.100:69        
	                                            2 => 10.0.1.209:69        
	 25   10.99.162.123:80       ClusterIP      1 => 10.0.1.117:80        
	                                            2 => 10.0.0.146:80        
	 26   10.99.162.123:69       ClusterIP      1 => 10.0.1.117:69        
	                                            2 => 10.0.0.146:69        
	 27   [fd03::fc45]:80        ClusterIP      1 => [fd02::59]:80        
	                                            2 => [fd02::c2]:80        
	 28   [fd03::fc45]:69        ClusterIP      1 => [fd02::59]:69        
	                                            2 => [fd02::c2]:69        
	 29   [fd03::716d]:80        ClusterIP      1 => [fd02::e0]:80        
	                                            2 => [fd02::1fd]:80       
	 30   [fd03::716d]:69        ClusterIP      1 => [fd02::e0]:69        
	                                            2 => [fd02::1fd]:69       
	 31   [fd03::621c]:10080     ClusterIP      1 => [fd02::e0]:80        
	                                            2 => [fd02::1fd]:80       
	 32   [fd03::621c]:10069     ClusterIP      1 => [fd02::e0]:69        
	                                            2 => [fd02::1fd]:69       
	 33   [fd03::6750]:10069     ClusterIP      1 => [fd02::e0]:69        
	                                            2 => [fd02::1fd]:69       
	 34   [fd03::6750]:10080     ClusterIP      1 => [fd02::e0]:80        
	                                            2 => [fd02::1fd]:80       
	 35   [fd03::22c5]:10080     ClusterIP      1 => [fd02::e0]:80        
	                                            2 => [fd02::1fd]:80       
	 36   [fd03::22c5]:10069     ClusterIP      1 => [fd02::e0]:69        
	                                            2 => [fd02::1fd]:69       
	 37   [fd03::7b9d]:10069     ClusterIP      1 => [fd02::118]:69       
	 38   [fd03::7b9d]:10080     ClusterIP      1 => [fd02::118]:80       
	 39   [fd03::d5b7]:10080     ClusterIP      1 => [fd02::118]:80       
	 40   [fd03::d5b7]:10069     ClusterIP      1 => [fd02::118]:69       
	 41   [fd03::5e3c]:20080     ClusterIP      1 => [fd02::e0]:80        
	                                            2 => [fd02::1fd]:80       
	 42   [fd03::5e3c]:20069     ClusterIP      1 => [fd02::e0]:69        
	                                            2 => [fd02::1fd]:69       
	 43   [fd03::c138]:80        ClusterIP      1 => [fd02::1fa]:80       
	                                            2 => [fd02::15]:80        
	 44   [fd03::c138]:69        ClusterIP      1 => [fd02::1fa]:69       
	                                            2 => [fd02::15]:69        
	 45   10.109.104.18:80       ClusterIP      1 => 10.0.1.117:80        
	                                            2 => 10.0.0.146:80        
	 46   10.109.104.18:69       ClusterIP      1 => 10.0.1.117:69        
	                                            2 => 10.0.0.146:69        
	 47   [fd03::7c7f]:69        ClusterIP      1 => [fd02::1fa]:69       
	                                            2 => [fd02::15]:69        
	 48   [fd03::7c7f]:80        ClusterIP      1 => [fd02::1fa]:80       
	                                            2 => [fd02::15]:80        
	 49   10.106.33.228:443      ClusterIP      1 => 192.168.56.11:4244   
	                                            2 => 192.168.56.12:4244   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-wpznv -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                              IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                    
	 75         Disabled           Disabled          18530      k8s:io.cilium.k8s.policy.cluster=default                 fd02::1fd   10.0.1.209   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:zgroup=testDS                                                                         
	 415        Disabled           Disabled          19462      k8s:io.cilium.k8s.policy.cluster=default                 fd02::173   10.0.1.248   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:zgroup=testDSClient                                                                   
	 1361       Disabled           Disabled          40288      k8s:app=prometheus                                       fd02::175   10.0.1.217   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                    
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                         
	 1460       Disabled           Disabled          21963      k8s:app=grafana                                          fd02::193   10.0.1.134   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                         
	 2689       Disabled           Disabled          65246      k8s:io.cilium.k8s.policy.cluster=default                 fd02::118   10.0.1.96    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:zgroup=test-k8s2                                                                      
	 2740       Disabled           Disabled          4          reserved:health                                          fd02::168   10.0.1.80    ready   
	 3122       Enabled            Enabled           62942      k8s:io.cilium.k8s.policy.cluster=default                 fd02::1fa   10.0.1.117   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                   
	                                                            k8s:name=echo                                                                             
	 3400       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                        ready   
	                                                            reserved:host                                                                             
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
15:15:04 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|46c4a870_K8sServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Tests_NodePort_inside_cluster_(kube-proxy)_.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9//1402/artifact/46c4a870_K8sServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Tests_NodePort_inside_cluster_(kube-proxy)_.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9//1402/artifact/test_results_Cilium-PR-K8s-1.20-kernel-4.9_1402_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9/1402/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
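For anyone trying to reproduce this flake outside Jenkins, the failing check (quoted at the top of this report) is essentially the retry loop below. This is only a sketch: `do_request` is a hypothetical stand-in for the real `kubectl exec … curl tftp://[fd04::12]:30138/hello` invocation, and the pod/address names are specific to this run.

```shell
#!/usr/bin/env bash
# Sketch of the harness's retry loop from the failure output above.
# do_request stands in for the real request, roughly:
#   kubectl exec -n default testclient-bnsmr -- curl --path-as-is -s \
#     --fail --connect-timeout 5 --max-time 20 "tftp://[fd04::12]:30138/hello"
do_request() { true; }  # hypothetical stub; replace with the kubectl/curl call

fails=""
id=$RANDOM
for i in $(seq 1 10); do
  if do_request "$id/$i"; then
    echo "Test round $id/$i exit code: $?"
  else
    # Record the round and its exit code, separated by ':'
    fails="$fails:$id/$i=$?"
  fi
done
[ -n "$fails" ] && echo "failed: $fails"
# Count the ':' separators; any failed round makes the script exit 42,
# which is the Exitcode: 42 seen in this report.
cnt="${fails//[^:]}"
if [ ${#cnt} -gt 0 ]; then exit 42; fi
```

Note the test tolerates transient failures only in the sense that it reports all of them at once; a single failed round out of ten is enough for exit code 42, which is why one dropped TFTP reply flakes the whole test.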

Metadata

Assignees

No one assigned

    Labels

    ci/flake: This is a known failure that occurs in the tree. Please investigate me!
    stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
