CI: K8sDatapathServicesTest Checks N/S loadbalancing With host policy Tests NodePort #25411

Description

@julianwiedmann

Test Name

K8sDatapathServicesTest Checks N/S loadbalancing With host policy Tests NodePort

Failure Output

Can not connect to service "tftp://[fd04::11]:30456/hello" from outside cluster (1/10)

Stack Trace

/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Can not connect to service "tftp://[fd04::11]:30456/hello" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-gnfp8 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::11]:30456/hello -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/service_helpers.go:242
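The stderr points at a kubelet authorization failure rather than a datapath problem: kubectl exec is proxied by the API server to the kubelet on the node hosting log-gatherer-gnfp8 (k8s3), and the kubelet rejected the API server's client identity (kube-apiserver-kubelet-client) for create on nodes/proxy. A minimal sketch of how one might confirm this on the test cluster, assuming the kubelet delegates authorization to the API server; the kubelet-api binding name grepped for below is an assumption, not taken from this run:

# Does the API server's kubelet-client identity pass authorization for proxying to kubelets?
kubectl auth can-i create nodes/proxy --as=kube-apiserver-kubelet-client

# Which subjects are bound to a kubelet API admin role? (binding name assumed)
kubectl get clusterrolebindings -o wide | grep -i kubelet-api

# Re-run the failing request from the same log-gatherer pod once authorization recovers
kubectl exec -n kube-system log-gatherer-gnfp8 -- \
  curl --path-as-is -s --fail --connect-timeout 5 --max-time 20 "tftp://[fd04::11]:30456/hello"

If the exec path itself is broken, every "from outside cluster" probe in this suite fails the same way regardless of the service under test, which matches the identical stderr across all eight FAIL blocks below.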

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 3
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Auto-disabling \
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-9hrf8 cilium-ljhxb]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                         Ingress   Egress
echo-85bb976686-vfx6q       false     false
test-k8s2-f5fdd6457-7g6vk   false     false
testds-j9n2l                false     false
coredns-6d97d5ddb-2pqn9     false     false
app3-5db68b966f-gjqdk       false     false
app1-586cfd8997-pqhp5       false     false
app2-775964bd4-jsvdv        false     false
echo-85bb976686-66qzg       false     false
testclient-lhsvs            false     false
testclient-q5fnp            false     false
testds-gvv4h                false     false
app1-586cfd8997-n687t       false     false
Cilium agent 'cilium-9hrf8': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 62 Failed 0
Cilium agent 'cilium-ljhxb': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0

Standard Error

09:58:26 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathServicesTest Checks N/S loadbalancing With host policy
09:58:26 STEP: Installing Cilium
09:58:28 STEP: Waiting for Cilium to become ready
09:58:43 STEP: Validating if Kubernetes DNS is deployed
09:58:43 STEP: Checking if deployment is ready
09:58:43 STEP: Checking if kube-dns service is plumbed correctly
09:58:43 STEP: Checking if pods have identity
09:58:43 STEP: Checking if DNS can resolve
09:58:47 STEP: Kubernetes DNS is up and operational
09:58:47 STEP: Validating Cilium Installation
09:58:47 STEP: Performing Cilium controllers preflight check
09:58:47 STEP: Performing Cilium health check
09:58:47 STEP: Checking whether host EP regenerated
09:58:47 STEP: Performing Cilium status preflight check
09:58:56 STEP: Performing Cilium service preflight check
09:58:56 STEP: Performing K8s service preflight check
09:58:56 STEP: Cilium is not ready yet: host EP is not ready: cilium-agent "cilium-9hrf8" host EP is not in ready state: "regenerating"
09:58:56 STEP: Performing Cilium controllers preflight check
09:58:56 STEP: Performing Cilium health check
09:58:56 STEP: Performing Cilium status preflight check
09:58:56 STEP: Checking whether host EP regenerated
09:59:04 STEP: Performing Cilium service preflight check
09:59:04 STEP: Performing K8s service preflight check
09:59:04 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-9hrf8': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

09:59:04 STEP: Performing Cilium status preflight check
09:59:04 STEP: Performing Cilium health check
09:59:04 STEP: Performing Cilium controllers preflight check
09:59:04 STEP: Checking whether host EP regenerated
09:59:12 STEP: Performing Cilium service preflight check
09:59:12 STEP: Performing K8s service preflight check
09:59:18 STEP: Waiting for cilium-operator to be ready
09:59:18 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
09:59:18 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.12:30636/hello"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::12]:30456/hello"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd03::5e49]:10069/hello"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.56.12]:30636/hello"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd03::5e49]:10080"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::11]:32585"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:127.0.0.1]:30636/hello"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://127.0.0.1:31780"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.11:30636/hello"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://127.0.0.1:30636/hello"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://10.105.170.147:10080"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.56.11]:31780"
09:59:29 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "http://[fd04::11]:32585"
09:59:29 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "http://[fd04::12]:32585"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.12:31780"
09:59:29 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "http://192.168.56.12:31780"
09:59:29 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "http://192.168.56.11:31780"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.56.11]:30636/hello"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.11:31780"
09:59:29 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "tftp://192.168.56.11:30636/hello"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::12]:32585"
09:59:29 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "tftp://192.168.56.12:30636/hello"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.56.12]:31780"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:127.0.0.1]:31780"
09:59:29 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "tftp://[fd04::11]:30456/hello"
09:59:29 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "tftp://[fd04::12]:30456/hello"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::11]:30456/hello"
09:59:29 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://10.105.170.147:10069/hello"
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service tftp://192.168.56.11:30636/hello
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service http://192.168.56.12:31780
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service tftp://192.168.56.12:30636/hello
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service tftp://[::ffff:192.168.56.11]:30636/hello
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service tftp://[fd04::12]:30456/hello
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service http://10.105.170.147:10080
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service http://[fd03::5e49]:10080
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service tftp://10.105.170.147:10069/hello
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service tftp://[fd03::5e49]:10069/hello
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service http://[fd04::11]:32585
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service tftp://[::ffff:192.168.56.12]:30636/hello
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service http://192.168.56.11:31780
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service http://[::ffff:192.168.56.12]:31780
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service http://[fd04::12]:32585
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service tftp://[fd04::11]:30456/hello
09:59:29 STEP: Making 10 curl requests from testclient-lhsvs pod to service http://[::ffff:192.168.56.11]:31780
09:59:30 STEP: Making 10 curl requests from testclient-q5fnp pod to service tftp://192.168.56.11:30636/hello
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service http://192.168.56.11:31780
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service tftp://[::ffff:192.168.56.11]:30636/hello
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service tftp://192.168.56.12:30636/hello
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service tftp://[fd04::12]:30456/hello
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service http://[::ffff:192.168.56.11]:31780
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service tftp://[fd04::11]:30456/hello
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service http://[::ffff:192.168.56.12]:31780
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service tftp://[::ffff:192.168.56.12]:30636/hello
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service tftp://10.105.170.147:10069/hello
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service http://[fd03::5e49]:10080
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service http://10.105.170.147:10080
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service http://[fd04::12]:32585
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service tftp://[fd03::5e49]:10069/hello
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service http://192.168.56.12:31780
09:59:31 STEP: Making 10 curl requests from testclient-q5fnp pod to service http://[fd04::11]:32585
FAIL: Can not connect to service "tftp://[fd04::11]:30456/hello" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-gnfp8 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::11]:30456/hello -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
	 

FAIL: Can not connect to service "tftp://192.168.56.12:30636/hello" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-gnfp8 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://192.168.56.12:30636/hello -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
	 

FAIL: Can not connect to service "tftp://192.168.56.11:30636/hello" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-gnfp8 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://192.168.56.11:30636/hello -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
	 

FAIL: Can not connect to service "http://[fd04::12]:32585" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-gnfp8 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::12]:32585 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
	 

FAIL: Can not connect to service "http://[fd04::11]:32585" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-gnfp8 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::11]:32585 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
	 

FAIL: Can not connect to service "http://192.168.56.11:31780" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-gnfp8 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.56.11:31780 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
	 

FAIL: Can not connect to service "tftp://[fd04::12]:30456/hello" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-gnfp8 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::12]:30456/hello -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
	 

FAIL: Can not connect to service "http://192.168.56.12:31780" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-gnfp8 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.56.12:31780 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" 
To succeed, but it failed:
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
	 

=== Test Finished at 2023-05-12T10:00:03Z====
10:00:03 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathServicesTest
===================== TEST FAILED =====================
10:00:04 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-67ff49cd99-r5hx8           0/1     Running   0          79m     10.0.0.155      k8s1   <none>           <none>
	 cilium-monitoring   prometheus-8c7df94b4-fmqx8         1/1     Running   0          79m     10.0.0.211      k8s1   <none>           <none>
	 default             app1-586cfd8997-n687t              1/2     Running   0          24m     10.0.1.245      k8s1   <none>           <none>
	 default             app1-586cfd8997-pqhp5              1/2     Running   0          24m     10.0.1.24       k8s1   <none>           <none>
	 default             app2-775964bd4-jsvdv               1/1     Running   0          24m     10.0.1.138      k8s1   <none>           <none>
	 default             app3-5db68b966f-gjqdk              1/1     Running   0          24m     10.0.1.239      k8s1   <none>           <none>
	 default             echo-85bb976686-66qzg              2/2     Running   0          24m     10.0.0.127      k8s2   <none>           <none>
	 default             echo-85bb976686-vfx6q              2/2     Running   0          24m     10.0.1.122      k8s1   <none>           <none>
	 default             test-k8s2-f5fdd6457-7g6vk          1/2     Running   0          24m     10.0.0.96       k8s2   <none>           <none>
	 default             testclient-lhsvs                   1/1     Running   0          24m     10.0.1.103      k8s1   <none>           <none>
	 default             testclient-q5fnp                   1/1     Running   0          24m     10.0.0.187      k8s2   <none>           <none>
	 default             testds-gvv4h                       2/2     Running   0          23m     10.0.0.250      k8s2   <none>           <none>
	 default             testds-j9n2l                       2/2     Running   0          24m     10.0.1.37       k8s1   <none>           <none>
	 kube-system         cilium-9hrf8                       1/1     Running   0          101s    192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-ljhxb                       1/1     Running   0          101s    192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-5785c4c6f6-hstsc   1/1     Running   0          101s    192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-5785c4c6f6-nb8g7   1/1     Running   0          101s    192.168.56.13   k8s3   <none>           <none>
	 kube-system         coredns-6d97d5ddb-2pqn9            1/1     Running   0          7m18s   10.0.1.104      k8s1   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0          87m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0          87m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0          87m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0          87m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-5zqjf                 1/1     Running   0          79m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-gnfp8                 1/1     Running   0          79m     192.168.56.13   k8s3   <none>           <none>
	 kube-system         log-gatherer-n59hd                 1/1     Running   0          79m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         registry-adder-4j8qq               1/1     Running   0          80m     192.168.56.12   k8s2   <none>           <none>
	 kube-system         registry-adder-mvptc               1/1     Running   0          80m     192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-tsbg5               1/1     Running   0          80m     192.168.56.13   k8s3   <none>           <none>
	 
Stderr:

Resources

Anything else?

No response

Metadata

Labels

area/CI (Continuous Integration testing issue or flake)
ci/flake (This is a known failure that occurs in the tree. Please investigate me!)
