Labels
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
Description
Test Name
K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with direct routing Tests LoadBalancer Connectivity to endpoint via LB
Failure Output
FAIL: Can not connect to service "http://192.168.1.146" from outside cluster (1/10)
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Can not connect to service "http://192.168.1.146" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-wbqvd -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.1.146 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Err: exit status 28
Stdout:
time-> DNS: '0.000016()', Connect: '0.000000',Transfer '0.000000', total '5.001761'
Stderr:
command terminated with exit code 28
/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/service_helpers.go:299
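For manual triage, the probe can be re-run with the same curl invocation the test uses; curl exit code 28 means the request timed out rather than being refused. A minimal sketch, assuming kubectl access to the test cluster; the pod name and LB IP are taken from this run and will differ on other runs:
# Re-run the failing external probe by hand (pod name and LB IP from this report)
kubectl exec -n kube-system log-gatherer-wbqvd -- \
  curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 \
  http://192.168.1.146 \
  -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"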
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
⚠️ Number of "level=warning" in logs: 7
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 5 errors/warnings:
hubble events queue is full; dropping messages
hubble events queue is processing messages again: 42 messages were lost
Unable to install direct node route {Ifindex: 0 Dst: fd02::100/120 Src: <nil> Gw: <nil> Flags: [] Table: 0}
hubble events queue is processing messages again: 138 messages were lost
hubble events queue is processing messages again: 554 messages were lost
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-klt49 cilium-x6r2p]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod Ingress Egress
testclient-dwcqh
testclient-szkf6
testds-89mc9
testds-txrzv
coredns-5495c8f48d-ht7hx
grafana-7fd557d749-tx7gp
prometheus-d87f8f984-jr62d
test-k8s2-5b756fd6c5-d78wc
Cilium agent 'cilium-klt49': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 39 Failed 0
Cilium agent 'cilium-x6r2p': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 40 Failed 0
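The warning counts above come from scanning the agent logs; a rough sketch for pulling the same lines directly from this run's agents (pod names are from this report):
# List warning/error lines from both Cilium agents of this run
for pod in cilium-klt49 cilium-x6r2p; do
  kubectl -n kube-system logs "$pod" -c cilium-agent | grep -E 'level=(warning|error)' || true
done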
Standard Error
09:23:25 STEP: Running BeforeAll block for EntireTestsuite K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with direct routing Tests LoadBalancer
09:23:25 STEP: Installing Cilium
09:23:26 STEP: Waiting for Cilium to become ready
09:24:02 STEP: Validating Cilium Installation
09:24:02 STEP: Performing Cilium status preflight check
09:24:02 STEP: Performing Cilium controllers preflight check
09:24:02 STEP: Performing Cilium health check
09:24:04 STEP: Performing Cilium service preflight check
09:24:04 STEP: Performing K8s service preflight check
09:24:04 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-klt49': Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
command terminated with exit code 1
09:24:04 STEP: Performing Cilium controllers preflight check
09:24:04 STEP: Performing Cilium status preflight check
09:24:04 STEP: Performing Cilium health check
09:24:06 STEP: Performing Cilium service preflight check
09:24:06 STEP: Performing K8s service preflight check
09:24:08 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-x6r2p': Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
command terminated with exit code 1
09:24:08 STEP: Performing Cilium status preflight check
09:24:08 STEP: Performing Cilium health check
09:24:08 STEP: Performing Cilium controllers preflight check
09:24:10 STEP: Performing Cilium service preflight check
09:24:10 STEP: Performing K8s service preflight check
09:24:10 STEP: Performing Cilium status preflight check
09:24:10 STEP: Performing Cilium controllers preflight check
09:24:10 STEP: Performing Cilium health check
09:24:12 STEP: Performing Cilium service preflight check
09:24:12 STEP: Performing K8s service preflight check
09:24:12 STEP: Performing Cilium controllers preflight check
09:24:12 STEP: Performing Cilium health check
09:24:12 STEP: Performing Cilium status preflight check
09:24:14 STEP: Performing Cilium service preflight check
09:24:14 STEP: Performing K8s service preflight check
09:24:15 STEP: Waiting for cilium-operator to be ready
09:24:15 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
09:24:15 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
09:24:15 STEP: Waiting until the Operator has assigned the LB IP
09:24:15 STEP: Waiting until the Agents have announced the LB IP via BGP
09:24:15 STEP: Making 10 HTTP requests from outside cluster to "http://192.168.1.146"
FAIL: Can not connect to service "http://192.168.1.146" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-wbqvd -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.1.146 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Err: exit status 28
Stdout:
time-> DNS: '0.000016()', Connect: '0.000000',Transfer '0.000000', total '5.001761'
Stderr:
command terminated with exit code 28
=== Test Finished at 2021-09-17T09:24:21Z====
09:24:21 STEP: Running JustAfterEach block for EntireTestsuite K8sServicesTest
===================== TEST FAILED =====================
09:24:21 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with direct routing Tests LoadBalancer
===================== Exiting AfterFailed =====================
===================== TEST FAILED =====================
09:24:22 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-monitoring grafana-7fd557d749-tx7gp 1/1 Running 0 76m 10.0.0.17 k8s1 <none> <none>
cilium-monitoring prometheus-d87f8f984-jr62d 1/1 Running 0 76m 10.0.0.182 k8s1 <none> <none>
default test-k8s2-5b756fd6c5-d78wc 2/2 Running 0 17m 10.0.1.200 k8s2 <none> <none>
default testclient-dwcqh 1/1 Running 0 17m 10.0.1.22 k8s2 <none> <none>
default testclient-szkf6 1/1 Running 0 17m 10.0.0.156 k8s1 <none> <none>
default testds-89mc9 2/2 Running 0 8m26s 10.0.0.16 k8s1 <none> <none>
default testds-txrzv 2/2 Running 0 8m19s 10.0.1.217 k8s2 <none> <none>
kube-system cilium-klt49 1/1 Running 0 58s 192.168.36.12 k8s2 <none> <none>
kube-system cilium-operator-6996cdc7b9-66g69 1/1 Running 0 58s 192.168.36.13 k8s3 <none> <none>
kube-system cilium-operator-6996cdc7b9-j8zff 1/1 Running 0 58s 192.168.36.11 k8s1 <none> <none>
kube-system cilium-x6r2p 1/1 Running 0 58s 192.168.36.11 k8s1 <none> <none>
kube-system coredns-5495c8f48d-ht7hx 1/1 Running 1 75m 10.0.1.250 k8s2 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 79m 192.168.36.11 k8s1 <none> <none>
kube-system frr 1/1 Running 0 60s 192.168.36.13 k8s3 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 79m 192.168.36.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 79m 192.168.36.11 k8s1 <none> <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 79m 192.168.36.11 k8s1 <none> <none>
kube-system log-gatherer-9pj5b 1/1 Running 0 76m 192.168.36.12 k8s2 <none> <none>
kube-system log-gatherer-h4cvc 1/1 Running 0 76m 192.168.36.11 k8s1 <none> <none>
kube-system log-gatherer-wbqvd 1/1 Running 0 76m 192.168.36.13 k8s3 <none> <none>
kube-system registry-adder-8gh9r 1/1 Running 0 77m 192.168.36.11 k8s1 <none> <none>
kube-system registry-adder-pncgd 1/1 Running 0 77m 192.168.36.13 k8s3 <none> <none>
kube-system registry-adder-wt947 1/1 Running 0 77m 192.168.36.12 k8s2 <none> <none>
Stderr:
Fetching command output from pods [cilium-klt49 cilium-x6r2p]
cmd: kubectl exec -n kube-system cilium-klt49 -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.10:53 ClusterIP 1 => 10.0.1.250:53
2 10.96.0.10:9153 ClusterIP 1 => 10.0.1.250:9153
3 10.104.230.94:3000 ClusterIP 1 => 10.0.0.17:3000
4 10.111.177.35:9090 ClusterIP 1 => 10.0.0.182:9090
5 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
6 10.98.2.183:80 ClusterIP 1 => 10.0.0.16:80
2 => 10.0.1.217:80
7 10.98.2.183:69 ClusterIP 1 => 10.0.0.16:69
2 => 10.0.1.217:69
8 10.102.69.32:10080 ClusterIP 1 => 10.0.0.16:80
2 => 10.0.1.217:80
9 10.102.69.32:10069 ClusterIP 1 => 10.0.0.16:69
2 => 10.0.1.217:69
10 192.168.36.12:31716 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
11 0.0.0.0:31716 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
12 10.0.2.15:31716 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
13 192.168.36.12:31992 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
14 0.0.0.0:31992 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
15 10.0.2.15:31992 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
16 10.103.131.207:10080 ClusterIP 1 => 10.0.0.16:80
2 => 10.0.1.217:80
17 10.103.131.207:10069 ClusterIP 1 => 10.0.0.16:69
2 => 10.0.1.217:69
18 0.0.0.0:32701 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
19 10.0.2.15:32701 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
20 192.168.36.12:32701 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
21 0.0.0.0:32286 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
22 10.0.2.15:32286 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
23 192.168.36.12:32286 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
24 10.107.100.205:10080 ClusterIP 1 => 10.0.0.16:80
2 => 10.0.1.217:80
25 10.107.100.205:10069 ClusterIP 1 => 10.0.0.16:69
2 => 10.0.1.217:69
26 10.0.2.15:31071 NodePort 1 => 10.0.1.217:80
27 10.0.2.15:31071/i NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
28 192.168.36.12:31071 NodePort 1 => 10.0.1.217:80
29 192.168.36.12:31071/i NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
30 0.0.0.0:31071 NodePort 1 => 10.0.1.217:80
31 0.0.0.0:31071/i NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
32 10.0.2.15:30518 NodePort 1 => 10.0.1.217:69
33 10.0.2.15:30518/i NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
34 192.168.36.12:30518 NodePort 1 => 10.0.1.217:69
35 192.168.36.12:30518/i NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
36 0.0.0.0:30518 NodePort 1 => 10.0.1.217:69
37 0.0.0.0:30518/i NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
38 10.96.104.196:10080 ClusterIP 1 => 10.0.1.200:80
39 10.96.104.196:10069 ClusterIP 1 => 10.0.1.200:69
40 10.0.2.15:31796 NodePort 1 => 10.0.1.200:80
41 10.0.2.15:31796/i NodePort 1 => 10.0.1.200:80
42 192.168.36.12:31796 NodePort 1 => 10.0.1.200:80
43 192.168.36.12:31796/i NodePort 1 => 10.0.1.200:80
44 0.0.0.0:31796 NodePort 1 => 10.0.1.200:80
45 0.0.0.0:31796/i NodePort 1 => 10.0.1.200:80
46 10.0.2.15:30980 NodePort 1 => 10.0.1.200:69
47 10.0.2.15:30980/i NodePort 1 => 10.0.1.200:69
48 192.168.36.12:30980 NodePort 1 => 10.0.1.200:69
49 192.168.36.12:30980/i NodePort 1 => 10.0.1.200:69
50 0.0.0.0:30980 NodePort 1 => 10.0.1.200:69
51 0.0.0.0:30980/i NodePort 1 => 10.0.1.200:69
52 10.111.246.239:10080 ClusterIP 1 => 10.0.1.200:80
53 10.111.246.239:10069 ClusterIP 1 => 10.0.1.200:69
54 10.0.2.15:31961 NodePort 1 => 10.0.1.200:69
55 192.168.36.12:31961 NodePort 1 => 10.0.1.200:69
56 0.0.0.0:31961 NodePort 1 => 10.0.1.200:69
57 192.168.36.12:32027 NodePort 1 => 10.0.1.200:80
58 0.0.0.0:32027 NodePort 1 => 10.0.1.200:80
59 10.0.2.15:32027 NodePort 1 => 10.0.1.200:80
60 10.107.194.202:80 ClusterIP 1 => 10.0.0.16:80
2 => 10.0.1.217:80
61 10.0.2.15:31141 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
62 192.168.36.12:31141 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
63 0.0.0.0:31141 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
64 10.104.23.28:80 ClusterIP 1 => 10.0.1.200:80
65 0.0.0.0:30506 NodePort 1 => 10.0.1.200:80
66 0.0.0.0:30506/i NodePort 1 => 10.0.1.200:80
67 10.0.2.15:30506 NodePort 1 => 10.0.1.200:80
68 10.0.2.15:30506/i NodePort 1 => 10.0.1.200:80
69 192.168.36.12:30506 NodePort 1 => 10.0.1.200:80
70 192.168.36.12:30506/i NodePort 1 => 10.0.1.200:80
71 10.105.189.56:20080 ClusterIP 1 => 10.0.0.16:80
2 => 10.0.1.217:80
72 10.105.189.56:20069 ClusterIP 1 => 10.0.0.16:69
2 => 10.0.1.217:69
73 192.0.2.233:20069 ExternalIPs 1 => 10.0.0.16:69
2 => 10.0.1.217:69
74 192.0.2.233:20080 ExternalIPs 1 => 10.0.0.16:80
2 => 10.0.1.217:80
75 192.168.36.12:32310 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
76 0.0.0.0:32310 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
77 10.0.2.15:32310 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
78 10.0.2.15:32384 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
79 192.168.36.12:32384 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
80 0.0.0.0:32384 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
81 192.168.36.12:8080 HostPort 1 => 10.0.1.200:80
82 10.0.2.15:8080 HostPort 1 => 10.0.1.200:80
83 0.0.0.0:8080 HostPort 1 => 10.0.1.200:80
84 10.0.2.15:6969 HostPort 1 => 10.0.1.200:69
85 192.168.36.12:6969 HostPort 1 => 10.0.1.200:69
86 0.0.0.0:6969 HostPort 1 => 10.0.1.200:69
87 192.168.36.11:20069 ExternalIPs 1 => 10.0.0.16:69
2 => 10.0.1.217:69
88 192.168.36.11:20080 ExternalIPs 1 => 10.0.0.16:80
2 => 10.0.1.217:80
89 192.168.1.144:80 LoadBalancer 1 => 10.0.0.16:80
2 => 10.0.1.217:80
90 192.168.1.145:80 LoadBalancer 1 => 10.0.1.200:80
91 192.168.1.145:80/i LoadBalancer 1 => 10.0.1.200:80
92 10.98.169.253:80 ClusterIP 1 => 10.0.0.16:80
2 => 10.0.1.217:80
93 192.168.36.12:30116 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
94 0.0.0.0:30116 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
95 10.0.2.15:30116 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
96 192.168.1.146:80 LoadBalancer 1 => 10.0.0.16:80
2 => 10.0.1.217:80
Stderr:
cmd: kubectl exec -n kube-system cilium-klt49 -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
37 Disabled Disabled 11568 k8s:io.cilium.k8s.policy.cluster=default fd02::190 10.0.1.217 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
131 Disabled Disabled 53370 k8s:io.cilium.k8s.policy.cluster=default fd02::10f 10.0.1.200 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=test-k8s2
512 Disabled Disabled 4 reserved:health fd02::1f1 10.0.1.237 ready
1076 Disabled Disabled 5104 k8s:io.cilium.k8s.policy.cluster=default fd02::104 10.0.1.250 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
2742 Disabled Disabled 51586 k8s:io.cilium.k8s.policy.cluster=default fd02::1cf 10.0.1.22 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
3752 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 regenerating
reserved:host
Stderr:
cmd: kubectl exec -n kube-system cilium-x6r2p -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
2 10.96.0.10:53 ClusterIP 1 => 10.0.1.250:53
3 10.96.0.10:9153 ClusterIP 1 => 10.0.1.250:9153
4 10.104.230.94:3000 ClusterIP 1 => 10.0.0.17:3000
5 10.111.177.35:9090 ClusterIP 1 => 10.0.0.182:9090
6 10.98.2.183:80 ClusterIP 1 => 10.0.0.16:80
2 => 10.0.1.217:80
7 10.98.2.183:69 ClusterIP 1 => 10.0.0.16:69
2 => 10.0.1.217:69
8 10.102.69.32:10080 ClusterIP 1 => 10.0.0.16:80
2 => 10.0.1.217:80
9 10.102.69.32:10069 ClusterIP 1 => 10.0.0.16:69
2 => 10.0.1.217:69
10 10.0.2.15:31716 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
11 192.168.36.11:31716 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
12 0.0.0.0:31716 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
13 192.168.36.11:31992 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
14 0.0.0.0:31992 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
15 10.0.2.15:31992 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
16 10.103.131.207:10080 ClusterIP 1 => 10.0.0.16:80
2 => 10.0.1.217:80
17 10.103.131.207:10069 ClusterIP 1 => 10.0.0.16:69
2 => 10.0.1.217:69
18 10.0.2.15:32701 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
19 192.168.36.11:32701 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
20 0.0.0.0:32701 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
21 192.168.36.11:32286 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
22 0.0.0.0:32286 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
23 10.0.2.15:32286 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
24 10.107.100.205:10080 ClusterIP 1 => 10.0.0.16:80
2 => 10.0.1.217:80
25 10.107.100.205:10069 ClusterIP 1 => 10.0.0.16:69
2 => 10.0.1.217:69
26 10.0.2.15:30518 NodePort 1 => 10.0.0.16:69
27 10.0.2.15:30518/i NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
28 192.168.36.11:30518 NodePort 1 => 10.0.0.16:69
29 192.168.36.11:30518/i NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
30 0.0.0.0:30518 NodePort 1 => 10.0.0.16:69
31 0.0.0.0:30518/i NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
32 10.0.2.15:31071 NodePort 1 => 10.0.0.16:80
33 10.0.2.15:31071/i NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
34 192.168.36.11:31071 NodePort 1 => 10.0.0.16:80
35 192.168.36.11:31071/i NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
36 0.0.0.0:31071 NodePort 1 => 10.0.0.16:80
37 0.0.0.0:31071/i NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
38 10.96.104.196:10080 ClusterIP 1 => 10.0.1.200:80
39 10.96.104.196:10069 ClusterIP 1 => 10.0.1.200:69
40 10.0.2.15:31796 NodePort
41 10.0.2.15:31796/i NodePort 1 => 10.0.1.200:80
42 192.168.36.11:31796 NodePort
43 192.168.36.11:31796/i NodePort 1 => 10.0.1.200:80
44 0.0.0.0:31796 NodePort
45 0.0.0.0:31796/i NodePort 1 => 10.0.1.200:80
46 10.0.2.15:30980 NodePort
47 10.0.2.15:30980/i NodePort 1 => 10.0.1.200:69
48 192.168.36.11:30980 NodePort
49 192.168.36.11:30980/i NodePort 1 => 10.0.1.200:69
50 0.0.0.0:30980 NodePort
51 0.0.0.0:30980/i NodePort 1 => 10.0.1.200:69
52 10.111.246.239:10080 ClusterIP 1 => 10.0.1.200:80
53 10.111.246.239:10069 ClusterIP 1 => 10.0.1.200:69
54 0.0.0.0:32027 NodePort 1 => 10.0.1.200:80
55 10.0.2.15:32027 NodePort 1 => 10.0.1.200:80
56 192.168.36.11:32027 NodePort 1 => 10.0.1.200:80
57 10.0.2.15:31961 NodePort 1 => 10.0.1.200:69
58 192.168.36.11:31961 NodePort 1 => 10.0.1.200:69
59 0.0.0.0:31961 NodePort 1 => 10.0.1.200:69
60 10.107.194.202:80 ClusterIP 1 => 10.0.0.16:80
2 => 10.0.1.217:80
61 0.0.0.0:31141 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
62 10.0.2.15:31141 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
63 192.168.36.11:31141 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
64 10.104.23.28:80 ClusterIP 1 => 10.0.1.200:80
65 0.0.0.0:30506 NodePort
66 0.0.0.0:30506/i NodePort 1 => 10.0.1.200:80
67 10.0.2.15:30506 NodePort
68 10.0.2.15:30506/i NodePort 1 => 10.0.1.200:80
69 192.168.36.11:30506 NodePort
70 192.168.36.11:30506/i NodePort 1 => 10.0.1.200:80
71 10.105.189.56:20080 ClusterIP 1 => 10.0.0.16:80
2 => 10.0.1.217:80
72 10.105.189.56:20069 ClusterIP 1 => 10.0.0.16:69
2 => 10.0.1.217:69
73 192.0.2.233:20080 ExternalIPs 1 => 10.0.0.16:80
2 => 10.0.1.217:80
74 192.0.2.233:20069 ExternalIPs 1 => 10.0.0.16:69
2 => 10.0.1.217:69
75 10.0.2.15:32310 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
76 192.168.36.11:32310 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
77 0.0.0.0:32310 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
78 192.168.36.11:32384 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
79 10.0.2.15:32384 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
80 0.0.0.0:32384 NodePort 1 => 10.0.0.16:69
2 => 10.0.1.217:69
81 192.168.36.11:20080 ExternalIPs 1 => 10.0.0.16:80
2 => 10.0.1.217:80
82 192.168.36.11:20069 ExternalIPs 1 => 10.0.0.16:69
2 => 10.0.1.217:69
83 192.168.1.144:80 LoadBalancer 1 => 10.0.0.16:80
2 => 10.0.1.217:80
84 192.168.1.145:80 LoadBalancer
85 192.168.1.145:80/i LoadBalancer 1 => 10.0.1.200:80
86 10.98.169.253:80 ClusterIP 1 => 10.0.0.16:80
2 => 10.0.1.217:80
87 10.0.2.15:30116 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
88 192.168.36.11:30116 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
89 0.0.0.0:30116 NodePort 1 => 10.0.0.16:80
2 => 10.0.1.217:80
90 192.168.1.146:80 LoadBalancer 1 => 10.0.0.16:80
2 => 10.0.1.217:80
Stderr:
cmd: kubectl exec -n kube-system cilium-x6r2p -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
33 Disabled Disabled 4 reserved:health fd02::b 10.0.0.165 ready
1220 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:node-role.kubernetes.io/master
reserved:host
1278 Disabled Disabled 11568 k8s:io.cilium.k8s.policy.cluster=default fd02::d2 10.0.0.16 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
1989 Disabled Disabled 10427 k8s:app=prometheus fd02::44 10.0.0.182 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s
k8s:io.kubernetes.pod.namespace=cilium-monitoring
2116 Disabled Disabled 1355 k8s:app=grafana fd02::f0 10.0.0.17 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=cilium-monitoring
3443 Disabled Disabled 51586 k8s:io.cilium.k8s.policy.cluster=default fd02::a6 10.0.0.156 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
Stderr:
===================== Exiting AfterFailed =====================
09:24:50 STEP: Running AfterEach for block EntireTestsuite K8sServicesTest
09:24:50 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|0c89c839_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_direct_routing_Tests_LoadBalancer_Connectivity_to_endpoint_via_LB.zip]]
09:24:55 STEP: Running AfterAll block for EntireTestsuite K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with direct routing Tests LoadBalancer
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next//1462/artifact/0c89c839_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_direct_routing_Tests_LoadBalancer_Connectivity_to_endpoint_via_LB.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next//1462/artifact/test_results_Cilium-PR-K8s-1.16-net-next_1462_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next/1462/
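Both agents show the LoadBalancer frontend 192.168.1.146:80 programmed with the expected backends (10.0.0.16:80 and 10.0.1.217:80), so a plausible next step is to confirm the datapath and BGP sides. A sketch of follow-up checks, assuming this run's pod names and that the frr pod ships vtysh:
# Confirm the service and BPF load-balancer entries on an agent
kubectl -n kube-system exec cilium-klt49 -c cilium-agent -- cilium service list | grep 192.168.1.146
kubectl -n kube-system exec cilium-klt49 -c cilium-agent -- cilium bpf lb list | grep 192.168.1.146
# Check whether the LB IP was learned via BGP on the external (frr) node
kubectl -n kube-system exec frr -- vtysh -c 'show ip bgp'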
If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.