Closed
Labels
area/CI: Continuous Integration testing issue or flake
area/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
Description
Test Name
K8sDatapathServicesTest Checks N/S loadbalancing With host policy Tests NodePort
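If this needs to be reproduced locally, the test can typically be selected with a ginkgo focus string built from the test name above. This is only a sketch: it assumes an already provisioned two-node CI cluster and the standard Cilium ginkgo setup under test/, and the K8S_VERSION value is taken from the job name, not from this report.

```sh
# Sketch of a local reproduction run; flags and environment may differ per setup.
cd test
K8S_VERSION=1.26 ginkgo --focus="K8sDatapathServicesTest Checks N/S loadbalancing" -v
```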
Failure Output
FAIL: Can not connect to service "http://192.168.56.11:30428" from outside cluster (1/10)
Stack Trace
/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Can not connect to service "http://192.168.56.11:30428" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-ck9r2 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.56.11:30428 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/service_helpers.go:242
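Note that the curl command never runs: kubectl exec fails before reaching the log-gatherer pod because the apiserver's kubelet client identity is denied the create verb on nodes/proxy. A quick way to check that permission from an admin kubeconfig is sketched below; the user name comes from the error output above, everything else is a generic diagnostic and not part of the test suite.

```sh
# Ask the API server whether the apiserver's kubelet client may create nodes/proxy,
# the permission exercised by "kubectl exec"; a healthy cluster answers "yes".
kubectl auth can-i create nodes/proxy --as=kube-apiserver-kubelet-client

# List bindings that might grant it; the binding name varies between cluster setups.
kubectl get clusterrolebindings -o wide | grep -i kubelet
```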
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Auto-disabling \
UpdateIdentities: Skipping Delete of a non-existing identity
Cilium pods: [cilium-7l5bd cilium-r87kc]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod Ingress Egress
app1-586cfd8997-6d7l2 false false
app1-586cfd8997-xrq55 false false
app2-775964bd4-gsvlg false false
echo-85bb976686-pgg4f false false
echo-85bb976686-pmbnl false false
test-k8s2-f5fdd6457-g69kj false false
testclient-gfxfw false false
testclient-zl822 false false
testds-khmjv false false
app3-5db68b966f-nxsrj false false
testds-flt6r false false
coredns-6d97d5ddb-7hgq9 false false
Cilium agent 'cilium-7l5bd': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0
Cilium agent 'cilium-r87kc': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 62 Failed 0
Standard Error
07:21:26 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathServicesTest Checks N/S loadbalancing With host policy
07:21:26 STEP: Installing Cilium
07:21:29 STEP: Waiting for Cilium to become ready
07:21:44 STEP: Validating if Kubernetes DNS is deployed
07:21:44 STEP: Checking if deployment is ready
07:21:44 STEP: Checking if kube-dns service is plumbed correctly
07:21:44 STEP: Checking if DNS can resolve
07:21:44 STEP: Checking if pods have identity
07:21:48 STEP: Kubernetes DNS is up and operational
07:21:48 STEP: Validating Cilium Installation
07:21:48 STEP: Performing Cilium controllers preflight check
07:21:48 STEP: Performing Cilium health check
07:21:48 STEP: Performing Cilium status preflight check
07:21:48 STEP: Checking whether host EP regenerated
07:21:57 STEP: Performing Cilium service preflight check
07:21:57 STEP: Performing K8s service preflight check
07:21:58 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-5p2tz': Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
command terminated with exit code 1
07:21:58 STEP: Performing Cilium controllers preflight check
07:21:58 STEP: Performing Cilium status preflight check
07:21:58 STEP: Performing Cilium health check
07:21:58 STEP: Checking whether host EP regenerated
07:22:06 STEP: Performing Cilium service preflight check
07:22:06 STEP: Performing K8s service preflight check
07:22:06 STEP: Performing Cilium controllers preflight check
07:22:06 STEP: Performing Cilium status preflight check
07:22:06 STEP: Performing Cilium health check
07:22:06 STEP: Checking whether host EP regenerated
07:22:20 STEP: Cilium is not ready yet: unable to fill service cache: Unable to unmarshal Cilium services: unexpected end of JSON input
07:22:20 STEP: Performing Cilium controllers preflight check
07:22:20 STEP: Performing Cilium health check
07:22:20 STEP: Checking whether host EP regenerated
07:22:20 STEP: Performing Cilium status preflight check
07:22:28 STEP: Performing Cilium service preflight check
07:22:28 STEP: Performing K8s service preflight check
07:22:34 STEP: Waiting for cilium-operator to be ready
07:22:34 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
07:22:34 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
07:22:34 STEP: Installing Cilium
07:22:36 STEP: Waiting for Cilium to become ready
07:22:51 STEP: Validating if Kubernetes DNS is deployed
07:22:51 STEP: Checking if deployment is ready
07:22:51 STEP: Checking if kube-dns service is plumbed correctly
07:22:51 STEP: Checking if pods have identity
07:22:51 STEP: Checking if DNS can resolve
07:22:55 STEP: Kubernetes DNS is up and operational
07:22:55 STEP: Validating Cilium Installation
07:22:55 STEP: Performing Cilium controllers preflight check
07:22:55 STEP: Performing Cilium status preflight check
07:22:55 STEP: Performing Cilium health check
07:22:55 STEP: Checking whether host EP regenerated
07:23:04 STEP: Performing Cilium service preflight check
07:23:04 STEP: Performing K8s service preflight check
07:23:04 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-7l5bd': Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
command terminated with exit code 1
07:23:04 STEP: Performing Cilium controllers preflight check
07:23:04 STEP: Performing Cilium status preflight check
07:23:04 STEP: Performing Cilium health check
07:23:04 STEP: Checking whether host EP regenerated
07:23:12 STEP: Performing Cilium service preflight check
07:23:12 STEP: Performing K8s service preflight check
07:23:21 STEP: Cilium is not ready yet: host EP is not ready: cilium-agent "cilium-7l5bd" host EP is not in ready state: "regenerating"
07:23:21 STEP: Performing Cilium status preflight check
07:23:21 STEP: Performing Cilium health check
07:23:21 STEP: Performing Cilium controllers preflight check
07:23:21 STEP: Checking whether host EP regenerated
07:23:28 STEP: Performing Cilium service preflight check
07:23:28 STEP: Performing K8s service preflight check
07:23:35 STEP: Waiting for cilium-operator to be ready
07:23:35 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
07:23:35 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
07:23:35 STEP: Making sure all endpoints are in ready state
07:23:38 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml for 1min
07:24:58 STEP: Deleted the policies, waiting for connection terminations
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.56.11]:31743/hello"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://127.0.0.1:30428"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::12]:31974"
07:25:38 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "http://192.168.56.12:30428"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://10.108.39.63:10080"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://127.0.0.1:31743/hello"
07:25:38 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "http://[fd04::11]:31974"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.56.12]:31743/hello"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd03::9ce6]:10080"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:127.0.0.1]:31743/hello"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::11]:31974"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.56.11]:30428"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd03::9ce6]:10069/hello"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.12:31743/hello"
07:25:38 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "tftp://[fd04::11]:32229/hello"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.11:31743/hello"
07:25:38 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "tftp://192.168.56.12:31743/hello"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:127.0.0.1]:30428"
07:25:38 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "tftp://[fd04::12]:32229/hello"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://10.108.39.63:10069/hello"
07:25:38 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "http://192.168.56.11:30428"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.11:30428"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::12]:32229/hello"
07:25:38 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "tftp://192.168.56.11:31743/hello"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::11]:32229/hello"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.56.12]:30428"
07:25:38 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.12:30428"
07:25:38 STEP: Making 10 HTTP requests from outside cluster (using port 0) to "http://[fd04::12]:31974"
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service http://[::ffff:192.168.56.11]:30428
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service tftp://[fd04::12]:32229/hello
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service tftp://[::ffff:192.168.56.11]:31743/hello
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service tftp://[::ffff:192.168.56.12]:31743/hello
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service tftp://192.168.56.11:31743/hello
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service tftp://10.108.39.63:10069/hello
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service http://[::ffff:192.168.56.12]:30428
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service http://192.168.56.12:30428
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service http://[fd03::9ce6]:10080
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service tftp://[fd03::9ce6]:10069/hello
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service http://[fd04::12]:31974
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service tftp://[fd04::11]:32229/hello
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service http://192.168.56.11:30428
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service http://10.108.39.63:10080
07:25:38 STEP: Making 10 curl requests from testclient-gfxfw pod to service http://[fd04::11]:31974
07:25:39 STEP: Making 10 curl requests from testclient-gfxfw pod to service tftp://192.168.56.12:31743/hello
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service tftp://[::ffff:192.168.56.12]:31743/hello
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service tftp://[fd04::12]:32229/hello
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service http://[::ffff:192.168.56.11]:30428
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service tftp://10.108.39.63:10069/hello
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service http://192.168.56.11:30428
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service tftp://[::ffff:192.168.56.11]:31743/hello
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service tftp://[fd04::11]:32229/hello
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service http://[::ffff:192.168.56.12]:30428
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service http://[fd04::11]:31974
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service tftp://192.168.56.12:31743/hello
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service http://[fd04::12]:31974
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service tftp://192.168.56.11:31743/hello
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service tftp://[fd03::9ce6]:10069/hello
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service http://192.168.56.12:30428
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service http://10.108.39.63:10080
07:25:40 STEP: Making 10 curl requests from testclient-zl822 pod to service http://[fd03::9ce6]:10080
FAIL: Can not connect to service "http://192.168.56.11:30428" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-ck9r2 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.56.11:30428 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
FAIL: Can not connect to service "http://192.168.56.12:30428" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-ck9r2 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.56.12:30428 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
FAIL: Can not connect to service "tftp://192.168.56.12:31743/hello" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-ck9r2 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://192.168.56.12:31743/hello -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
FAIL: Can not connect to service "tftp://[fd04::11]:32229/hello" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-ck9r2 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::11]:32229/hello -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
FAIL: Can not connect to service "http://[fd04::12]:31974" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-ck9r2 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::12]:31974 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
FAIL: Can not connect to service "http://[fd04::11]:31974" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-ck9r2 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::11]:31974 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
FAIL: Can not connect to service "tftp://[fd04::12]:32229/hello" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-ck9r2 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://[fd04::12]:32229/hello -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
FAIL: Can not connect to service "tftp://192.168.56.11:31743/hello" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-ck9r2 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 tftp://192.168.56.11:31743/hello -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)
=== Test Finished at 2023-05-18T07:26:12Z====
07:26:12 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathServicesTest
===================== TEST FAILED =====================
07:26:13 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-monitoring grafana-67ff49cd99-fwxqr 0/1 Running 0 87m 10.0.0.242 k8s1 <none> <none>
cilium-monitoring prometheus-8c7df94b4-rrvhx 1/1 Running 0 87m 10.0.0.111 k8s1 <none> <none>
default app1-586cfd8997-6d7l2 1/2 Running 0 27m 10.0.0.81 k8s1 <none> <none>
default app1-586cfd8997-xrq55 1/2 Running 0 27m 10.0.0.202 k8s1 <none> <none>
default app2-775964bd4-gsvlg 1/1 Running 0 27m 10.0.0.4 k8s1 <none> <none>
default app3-5db68b966f-nxsrj 1/1 Running 0 27m 10.0.0.134 k8s1 <none> <none>
default echo-85bb976686-pgg4f 2/2 Running 0 27m 10.0.1.37 k8s2 <none> <none>
default echo-85bb976686-pmbnl 2/2 Running 0 27m 10.0.0.139 k8s1 <none> <none>
default test-k8s2-f5fdd6457-g69kj 1/2 Running 0 27m 10.0.1.120 k8s2 <none> <none>
default testclient-gfxfw 1/1 Running 0 27m 10.0.1.168 k8s2 <none> <none>
default testclient-zl822 1/1 Running 0 27m 10.0.0.55 k8s1 <none> <none>
default testds-flt6r 2/2 Running 0 27m 10.0.1.95 k8s2 <none> <none>
default testds-khmjv 2/2 Running 0 27m 10.0.0.97 k8s1 <none> <none>
kube-system cilium-7l5bd 1/1 Running 0 3m44s 192.168.56.12 k8s2 <none> <none>
kube-system cilium-operator-5ddb78f75f-h8ld2 1/1 Running 0 3m44s 192.168.56.12 k8s2 <none> <none>
kube-system cilium-operator-5ddb78f75f-nxsj5 1/1 Running 0 3m44s 192.168.56.13 k8s3 <none> <none>
kube-system cilium-r87kc 1/1 Running 0 3m44s 192.168.56.11 k8s1 <none> <none>
kube-system coredns-6d97d5ddb-7hgq9 1/1 Running 1 (106s ago) 29m 10.0.0.133 k8s1 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 94m 192.168.56.11 k8s1 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 94m 192.168.56.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 94m 192.168.56.11 k8s1 <none> <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 94m 192.168.56.11 k8s1 <none> <none>
kube-system log-gatherer-bwxjk 1/1 Running 0 87m 192.168.56.11 k8s1 <none> <none>
kube-system log-gatherer-ck9r2 1/1 Running 0 87m 192.168.56.13 k8s3 <none> <none>
kube-system log-gatherer-hccvr 1/1 Running 0 87m 192.168.56.12 k8s2 <none> <none>
kube-system registry-adder-hjlcb 1/1 Running 0 88m 192.168.56.12 k8s2 <none> <none>
kube-system registry-adder-pjs6r 1/1 Running 0 88m 192.168.56.11 k8s1 <none> <none>
kube-system registry-adder-ptrvr 1/1 Running 0 88m 192.168.56.13 k8s3 <none> <none>
Stderr:
Fetching command output from pods [cilium-7l5bd cilium-r87kc]
cmd: kubectl exec -n kube-system cilium-7l5bd -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.108.60.70:9090 ClusterIP 1 => 10.0.0.111:9090 (active)
2 10.96.0.1:443 ClusterIP 1 => 192.168.56.11:6443 (active)
3 10.96.0.10:53 ClusterIP 1 => 10.0.0.133:53 (active)
4 10.96.0.10:9153 ClusterIP 1 => 10.0.0.133:9153 (active)
6 10.110.104.90:3000 ClusterIP
8 10.108.15.155:80 ClusterIP
9 10.108.15.155:69 ClusterIP
10 10.100.172.92:80 ClusterIP 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
11 10.100.172.92:69 ClusterIP 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
12 10.108.39.63:10080 ClusterIP 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
13 10.108.39.63:10069 ClusterIP 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
15 192.168.56.12:30428 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
17 0.0.0.0:30428 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
18 0.0.0.0:31743 NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
21 192.168.56.12:31743 NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
22 10.107.60.6:10080 ClusterIP 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
23 10.107.60.6:10069 ClusterIP 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
26 192.168.56.12:31060 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
27 0.0.0.0:31060 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
30 192.168.56.12:31172 NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
31 0.0.0.0:31172 NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
32 10.103.197.240:10080 ClusterIP 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
33 10.103.197.240:10069 ClusterIP 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
36 192.168.56.12:30639 NodePort 1 => 10.0.1.95:69 (active)
37 192.168.56.12:30639/i NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
38 0.0.0.0:30639 NodePort 1 => 10.0.1.95:69 (active)
39 0.0.0.0:30639/i NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
42 192.168.56.12:30352 NodePort 1 => 10.0.1.95:80 (active)
43 192.168.56.12:30352/i NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
44 0.0.0.0:30352 NodePort 1 => 10.0.1.95:80 (active)
45 0.0.0.0:30352/i NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
50 10.100.61.41:10069 ClusterIP
51 10.100.61.41:10080 ClusterIP
54 192.168.56.12:30643 NodePort
55 192.168.56.12:30643/i NodePort
56 0.0.0.0:30643 NodePort
57 0.0.0.0:30643/i NodePort
60 0.0.0.0:30367 NodePort
61 0.0.0.0:30367/i NodePort
62 192.168.56.12:30367 NodePort
63 192.168.56.12:30367/i NodePort
68 10.108.233.96:10080 ClusterIP
69 10.108.233.96:10069 ClusterIP
72 192.168.56.12:30262 NodePort
73 0.0.0.0:30262 NodePort
76 192.168.56.12:30304 NodePort
77 0.0.0.0:30304 NodePort
78 10.109.48.176:80 ClusterIP 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
81 192.168.56.12:31855 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
82 0.0.0.0:31855 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
83 10.110.14.252:80 ClusterIP
88 192.168.56.12:31837 NodePort
89 192.168.56.12:31837/i NodePort
90 0.0.0.0:31837 NodePort
91 0.0.0.0:31837/i NodePort
92 10.108.3.59:20080 ClusterIP 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
93 10.108.3.59:20069 ClusterIP 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
94 192.0.2.233:20080 ExternalIPs 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
95 192.0.2.233:20069 ExternalIPs 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
97 192.168.56.12:31453 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
99 0.0.0.0:31453 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
102 192.168.56.12:31105 NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
103 0.0.0.0:31105 NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
104 10.97.102.127:80 ClusterIP 1 => 10.0.1.37:80 (active)
2 => 10.0.0.139:80 (active)
105 10.97.102.127:69 ClusterIP 1 => 10.0.1.37:69 (active)
2 => 10.0.0.139:69 (active)
107 192.168.56.12:32559 NodePort 1 => 10.0.1.37:80 (active)
2 => 10.0.0.139:80 (active)
109 0.0.0.0:32559 NodePort 1 => 10.0.1.37:80 (active)
2 => 10.0.0.139:80 (active)
110 192.168.56.12:30045 NodePort 1 => 10.0.1.37:69 (active)
2 => 10.0.0.139:69 (active)
111 0.0.0.0:30045 NodePort 1 => 10.0.1.37:69 (active)
2 => 10.0.0.139:69 (active)
114 [fd03::6c63]:69 ClusterIP 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
115 [fd03::6c63]:80 ClusterIP 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
116 [fd03::9ce6]:10080 ClusterIP 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
117 [fd03::9ce6]:10069 ClusterIP 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
118 [fd04::12]:31974 NodePort 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
120 [::]:31974 NodePort 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
122 [fd04::12]:32229 NodePort 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
123 [::]:32229 NodePort 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
124 [fd03::101e]:10080 ClusterIP 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
125 [fd03::101e]:10069 ClusterIP 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
127 [fd04::12]:30452 NodePort 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
128 [::]:30452 NodePort 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
130 [fd04::12]:31151 NodePort 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
131 [::]:31151 NodePort 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
132 [fd03::a486]:10080 ClusterIP 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
133 [fd03::a486]:10069 ClusterIP 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
136 [fd04::12]:32600 NodePort 1 => [fd02::1bf]:80 (active)
137 [fd04::12]:32600/i NodePort 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
138 [::]:32600 NodePort 1 => [fd02::1bf]:80 (active)
139 [::]:32600/i NodePort 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
142 [fd04::12]:32159 NodePort 1 => [fd02::1bf]:69 (active)
143 [fd04::12]:32159/i NodePort 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
144 [::]:32159 NodePort 1 => [fd02::1bf]:69 (active)
145 [::]:32159/i NodePort 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
146 [fd03::a390]:10069 ClusterIP
147 [fd03::a390]:10080 ClusterIP
150 [fd04::12]:30175 NodePort
151 [fd04::12]:30175/i NodePort
152 [::]:30175 NodePort
153 [::]:30175/i NodePort
156 [fd04::12]:30328 NodePort
157 [fd04::12]:30328/i NodePort
158 [::]:30328 NodePort
159 [::]:30328/i NodePort
160 [fd03::48ac]:10080 ClusterIP
161 [fd03::48ac]:10069 ClusterIP
163 [fd04::12]:31174 NodePort
164 [::]:31174 NodePort
166 [fd04::12]:32081 NodePort
167 [::]:32081 NodePort
168 [fd03::5fa1]:20080 ClusterIP 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
169 [fd03::5fa1]:20069 ClusterIP 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
170 [fd03::999]:20069 ExternalIPs 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
171 [fd03::999]:20080 ExternalIPs 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
173 [fd04::12]:30036 NodePort 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
174 [::]:30036 NodePort 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
175 [::]:32166 NodePort 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
176 [fd04::12]:32166 NodePort 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
179 192.168.56.12:8080 HostPort 1 => 10.0.1.120:80 (active)
181 0.0.0.0:8080 HostPort 1 => 10.0.1.120:80 (active)
183 [fd04::12]:8080 HostPort 1 => [fd02::180]:80 (active)
184 [::]:8080 HostPort 1 => [fd02::180]:80 (active)
187 192.168.56.12:6969 HostPort 1 => 10.0.1.120:69 (active)
188 0.0.0.0:6969 HostPort 1 => 10.0.1.120:69 (active)
190 [fd04::12]:6969 HostPort 1 => [fd02::180]:69 (active)
191 [::]:6969 HostPort 1 => [fd02::180]:69 (active)
192 192.168.56.11:20080 ExternalIPs 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
193 192.168.56.11:20069 ExternalIPs 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
194 [fd04::11]:20080 ExternalIPs 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
195 [fd04::11]:20069 ExternalIPs 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
196 10.0.2.15:8080 HostPort 1 => 10.0.1.120:80 (active)
199 192.168.59.15:8080 HostPort 1 => 10.0.1.120:80 (active)
200 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:31174 NodePort
202 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:8080 HostPort 1 => [fd02::180]:80 (active)
205 192.168.59.15:6969 HostPort 1 => 10.0.1.120:69 (active)
206 10.0.2.15:6969 HostPort 1 => 10.0.1.120:69 (active)
208 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:6969 HostPort 1 => [fd02::180]:69 (active)
211 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:32081 NodePort
212 192.168.59.15:31855 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
213 10.0.2.15:31855 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
218 192.168.59.15:32559 NodePort 1 => 10.0.1.37:80 (active)
2 => 10.0.0.139:80 (active)
219 10.0.2.15:32559 NodePort 1 => 10.0.1.37:80 (active)
2 => 10.0.0.139:80 (active)
222 192.168.59.15:30045 NodePort 1 => 10.0.1.37:69 (active)
2 => 10.0.0.139:69 (active)
223 10.0.2.15:30045 NodePort 1 => 10.0.1.37:69 (active)
2 => 10.0.0.139:69 (active)
224 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:30175 NodePort
225 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:30175/i NodePort
228 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:30328 NodePort
229 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:30328/i NodePort
233 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:32166 NodePort 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
234 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:30036 NodePort 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
236 10.0.2.15:31060 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
239 192.168.59.15:31060 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
242 192.168.59.15:31172 NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
243 10.0.2.15:31172 NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
244 10.0.2.15:30262 NodePort
247 192.168.59.15:30262 NodePort
250 192.168.59.15:30304 NodePort
251 10.0.2.15:30304 NodePort
252 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:32600 NodePort 1 => [fd02::1bf]:80 (active)
253 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:32600/i NodePort 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
258 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:32159 NodePort 1 => [fd02::1bf]:69 (active)
259 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:32159/i NodePort 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
262 192.168.59.15:30367 NodePort
263 192.168.59.15:30367/i NodePort
264 10.0.2.15:30367 NodePort
265 10.0.2.15:30367/i NodePort
270 192.168.59.15:30643 NodePort
271 192.168.59.15:30643/i NodePort
272 10.0.2.15:30643 NodePort
273 10.0.2.15:30643/i NodePort
276 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:31974 NodePort 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
279 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:32229 NodePort 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
280 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:31151 NodePort 1 => [fd02::d3]:80 (active)
2 => [fd02::1bf]:80 (active)
282 [fd17:625c:f037:2:a00:27ff:fef7:24d0]:30452 NodePort 1 => [fd02::d3]:69 (active)
2 => [fd02::1bf]:69 (active)
284 10.0.2.15:30428 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
287 192.168.59.15:30428 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
289 192.168.59.15:31743 NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
290 10.0.2.15:31743 NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
296 192.168.59.15:31837 NodePort
297 192.168.59.15:31837/i NodePort
298 10.0.2.15:31837 NodePort
299 10.0.2.15:31837/i NodePort
301 192.168.59.15:31453 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
302 10.0.2.15:31453 NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
306 192.168.59.15:31105 NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
307 10.0.2.15:31105 NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
313 192.168.59.15:30352 NodePort 1 => 10.0.1.95:80 (active)
314 192.168.59.15:30352/i NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
315 10.0.2.15:30352 NodePort 1 => 10.0.1.95:80 (active)
316 10.0.2.15:30352/i NodePort 1 => 10.0.0.97:80 (active)
2 => 10.0.1.95:80 (active)
317 10.0.2.15:30639 NodePort 1 => 10.0.1.95:69 (active)
318 10.0.2.15:30639/i NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
323 192.168.59.15:30639 NodePort 1 => 10.0.1.95:69 (active)
324 192.168.59.15:30639/i NodePort 1 => 10.0.0.97:69 (active)
2 => 10.0.1.95:69 (active)
325 10.109.243.60:443 ClusterIP 1 => 192.168.56.12:4244 (active)
Stderr:
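The dump above comes only from cilium-7l5bd on k8s2 (192.168.56.12), so it cannot show the 192.168.56.11:30428 frontend that the failing request targeted. If a run like this needed a datapath check rather than the kubectl exec fix, the equivalent lookup on the k8s1 agent would be along these lines (agent name and port taken from this report; the grep filter is just for readability):

```sh
# Confirm the NodePort frontend for the failing URL exists on the k8s1 agent.
kubectl exec -n kube-system cilium-r87kc -c cilium-agent -- cilium service list | grep 30428
```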
cmd: kubectl exec -n kube-system cilium-7l5bd -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
85 Disabled Disabled 30051 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default fd02::1bf 10.0.1.95 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
114 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready
reserved:host
383 Disabled Disabled 18988 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default fd02::180 10.0.1.120 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=test-k8s2
722 Disabled Disabled 4 reserved:health fd02::1d0 10.0.1.22 ready
1762 Disabled Disabled 30519 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default fd02::177 10.0.1.168 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
3471 Disabled Disabled 7078 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default fd02::1af 10.0.1.37 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:name=echo
...
Resources
- Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next/2383/
- ZIP file(s): https://drive.google.com/file/d/1ZfUeXWd8CCG01GwOQFKv6tpyb6ZAi_wk/view?usp=sharing
Anything else?
No response