Closed
Labels
area/CI: Continuous Integration testing issue or flake.
area/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
Description
Test Name
K8sServicesTest Checks service across nodes Tests NodePort BPF Tests L2-less with Wireguard provisioned via kube-wireguarder Tests NodePort BPF
Failure Output
FAIL: Can not connect to service "http://192.168.36.11:32472" from outside cluster (1/10)
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Can not connect to service "http://192.168.36.11:32472" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-wcbcr -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.36.11:32472 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Err: exit status 28
Stdout:
time-> DNS: '0.000018()', Connect: '0.000000',Transfer '0.000000', total '5.000746'
Stderr:
command terminated with exit code 28
/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/service_helpers.go:299
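For context on the failure mode: curl exits with code 28 on a timeout, and the timing output above (Connect '0.000000', total '5.000746') shows the TCP connection to the NodePort never completed within the 5 second --connect-timeout. The Go snippet below is a minimal, hypothetical sketch of what the check verifies, not the actual helper in service_helpers.go: an HTTP GET against the NodePort from outside the cluster, with timeouts mirroring curl's --connect-timeout 5 and --max-time 20 (the URL is the one from the failure output and would need adjusting for another environment).

// Sketch only: mimics the external connectivity probe the test performs via curl.
package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	// NodePort URL taken from the failure output; adjust for your environment.
	url := "http://192.168.36.11:32472"

	client := &http.Client{
		Timeout: 20 * time.Second, // corresponds to curl --max-time 20
		Transport: &http.Transport{
			DialContext: (&net.Dialer{
				Timeout: 5 * time.Second, // corresponds to curl --connect-timeout 5
			}).DialContext,
		},
	}

	start := time.Now()
	resp, err := client.Get(url)
	if err != nil {
		// A timeout here is the analogue of curl exit code 28 seen in the failure.
		fmt.Printf("request failed after %v: %v\n", time.Since(start), err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("status %s, total %v\n", resp.Status, time.Since(start))
}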
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-ddznm cilium-ktc7d]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod Ingress Egress
coredns-5495c8f48d-kdm9t
grafana-7fd557d749-7ztrw
prometheus-d87f8f984-5jndq
test-k8s2-5b756fd6c5-dkhpj
testclient-bbs9c
testclient-nshhs
testds-28n68
testds-6tbpm
Cilium agent 'cilium-ddznm': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 40 Failed 0
Cilium agent 'cilium-ktc7d': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 40 Failed 0
Standard Error
18:38:06 STEP: Running BeforeAll block for EntireTestsuite K8sServicesTest Checks service across nodes Tests NodePort BPF Tests L2-less with Wireguard provisioned via kube-wireguarder
18:38:06 STEP: WaitforPods(namespace="kube-system", filter="-l app=kube-wireguarder")
18:38:15 STEP: WaitforPods(namespace="kube-system", filter="-l app=kube-wireguarder") => <nil>
18:38:15 STEP: SNAT with direct routing device wg0
18:38:15 STEP: Installing Cilium
18:38:16 STEP: Waiting for Cilium to become ready
18:38:53 STEP: Validating if Kubernetes DNS is deployed
18:38:53 STEP: Checking if deployment is ready
18:38:53 STEP: Checking if pods have identity
18:38:53 STEP: Checking if kube-dns service is plumbed correctly
18:38:53 STEP: Checking if DNS can resolve
18:38:56 STEP: Kubernetes DNS is up and operational
18:38:56 STEP: Validating Cilium Installation
18:38:56 STEP: Performing Cilium status preflight check
18:38:56 STEP: Performing Cilium controllers preflight check
18:38:56 STEP: Performing Cilium health check
18:39:00 STEP: Performing Cilium service preflight check
18:39:00 STEP: Performing K8s service preflight check
18:39:00 STEP: Waiting for cilium-operator to be ready
18:39:00 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
18:39:00 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
18:39:00 STEP: Making 10 HTTP requests from outside cluster to "http://192.168.36.11:32472"
18:39:03 STEP: Making 10 HTTP requests from outside cluster to "http://172.16.42.1:32472"
18:39:06 STEP: Patching configmap kube-wireguarder-config in namespace kube-system
18:39:08 STEP: SNAT with direct routing device private
18:39:08 STEP: Installing Cilium
18:39:09 STEP: Waiting for Cilium to become ready
18:39:43 STEP: Validating if Kubernetes DNS is deployed
18:39:43 STEP: Checking if deployment is ready
18:39:43 STEP: Checking if pods have identity
18:39:43 STEP: Checking if kube-dns service is plumbed correctly
18:39:43 STEP: Checking if DNS can resolve
18:39:44 STEP: Kubernetes DNS is up and operational
18:39:44 STEP: Validating Cilium Installation
18:39:44 STEP: Performing Cilium controllers preflight check
18:39:44 STEP: Performing Cilium health check
18:39:44 STEP: Performing Cilium status preflight check
18:39:47 STEP: Performing Cilium service preflight check
18:39:47 STEP: Performing K8s service preflight check
18:39:47 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-ddznm': Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
command terminated with exit code 1
18:39:47 STEP: Performing Cilium controllers preflight check
18:39:47 STEP: Performing Cilium health check
18:39:47 STEP: Performing Cilium status preflight check
18:39:49 STEP: Performing Cilium service preflight check
18:39:49 STEP: Performing K8s service preflight check
18:39:49 STEP: Performing Cilium controllers preflight check
18:39:49 STEP: Performing Cilium status preflight check
18:39:49 STEP: Performing Cilium health check
18:39:51 STEP: Performing Cilium service preflight check
18:39:51 STEP: Performing K8s service preflight check
18:39:51 STEP: Waiting for cilium-operator to be ready
18:39:51 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
18:39:51 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
18:39:51 STEP: Making 10 HTTP requests from outside cluster to "http://192.168.36.11:32472"
FAIL: Can not connect to service "http://192.168.36.11:32472" from outside cluster (1/10)
Expected command: kubectl exec -n kube-system log-gatherer-wcbcr -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.36.11:32472 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Err: exit status 28
Stdout:
time-> DNS: '0.000018()', Connect: '0.000000',Transfer '0.000000', total '5.000746'
Stderr:
command terminated with exit code 28
=== Test Finished at 2021-08-13T18:39:57Z====
18:39:57 STEP: Running JustAfterEach block for EntireTestsuite K8sServicesTest
===================== TEST FAILED =====================
18:39:57 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-monitoring grafana-7fd557d749-7ztrw 1/1 Running 0 122m 10.0.0.152 k8s1 <none> <none>
cilium-monitoring prometheus-d87f8f984-5jndq 1/1 Running 0 122m 10.0.0.211 k8s1 <none> <none>
default test-k8s2-5b756fd6c5-dkhpj 2/2 Running 0 23m 10.0.1.197 k8s2 <none> <none>
default testclient-bbs9c 1/1 Running 0 23m 10.0.1.61 k8s2 <none> <none>
default testclient-nshhs 1/1 Running 0 23m 10.0.0.182 k8s1 <none> <none>
default testds-28n68 2/2 Running 0 12m 10.0.0.99 k8s1 <none> <none>
default testds-6tbpm 2/2 Running 0 12m 10.0.1.46 k8s2 <none> <none>
kube-system cilium-ddznm 1/1 Running 0 52s 192.168.36.12 k8s2 <none> <none>
kube-system cilium-ktc7d 1/1 Running 0 52s 192.168.36.11 k8s1 <none> <none>
kube-system cilium-operator-5b576dccd6-lstq9 1/1 Running 0 52s 192.168.36.13 k8s3 <none> <none>
kube-system cilium-operator-5b576dccd6-mj7wm 1/1 Running 0 52s 192.168.36.12 k8s2 <none> <none>
kube-system coredns-5495c8f48d-kdm9t 1/1 Running 0 10m 10.0.1.88 k8s2 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 126m 192.168.36.11 k8s1 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 125m 192.168.36.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 126m 192.168.36.11 k8s1 <none> <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 125m 192.168.36.11 k8s1 <none> <none>
kube-system kube-wireguarder-9z24w 1/1 Running 0 54s 192.168.36.11 k8s1 <none> <none>
kube-system kube-wireguarder-bzzvb 1/1 Running 0 53s 192.168.36.13 k8s3 <none> <none>
kube-system kube-wireguarder-lrqkq 1/1 Running 0 53s 192.168.36.12 k8s2 <none> <none>
kube-system log-gatherer-lfb7c 1/1 Running 0 122m 192.168.36.11 k8s1 <none> <none>
kube-system log-gatherer-wcbcr 1/1 Running 0 122m 192.168.36.13 k8s3 <none> <none>
kube-system log-gatherer-x6txs 1/1 Running 0 122m 192.168.36.12 k8s2 <none> <none>
kube-system registry-adder-68ckd 1/1 Running 0 123m 192.168.36.13 k8s3 <none> <none>
kube-system registry-adder-f85wt 1/1 Running 0 123m 192.168.36.12 k8s2 <none> <none>
kube-system registry-adder-n9pp4 1/1 Running 0 123m 192.168.36.11 k8s1 <none> <none>
Stderr:
Fetching command output from pods [cilium-ddznm cilium-ktc7d]
cmd: kubectl exec -n kube-system cilium-ddznm -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
2 10.96.0.10:53 ClusterIP 1 => 10.0.1.88:53
3 10.96.0.10:9153 ClusterIP 1 => 10.0.1.88:9153
4 10.106.95.185:3000 ClusterIP 1 => 10.0.0.152:3000
5 10.101.186.255:9090 ClusterIP 1 => 10.0.0.211:9090
6 10.111.206.32:80 ClusterIP 1 => 10.0.1.46:80
2 => 10.0.0.99:80
7 10.111.206.32:69 ClusterIP 1 => 10.0.1.46:69
2 => 10.0.0.99:69
8 10.111.91.85:10069 ClusterIP 1 => 10.0.1.46:69
2 => 10.0.0.99:69
9 10.111.91.85:10080 ClusterIP 1 => 10.0.1.46:80
2 => 10.0.0.99:80
11 192.168.36.12:30665 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
12 0.0.0.0:30665 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
13 192.168.36.12:32472 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
14 0.0.0.0:32472 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
16 10.109.227.181:10080 ClusterIP 1 => 10.0.1.46:80
2 => 10.0.0.99:80
17 10.109.227.181:10069 ClusterIP 1 => 10.0.1.46:69
2 => 10.0.0.99:69
19 192.168.36.12:32644 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
20 0.0.0.0:32644 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
21 192.168.36.12:32698 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
22 0.0.0.0:32698 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
24 10.109.25.205:10080 ClusterIP 1 => 10.0.1.46:80
2 => 10.0.0.99:80
25 10.109.25.205:10069 ClusterIP 1 => 10.0.1.46:69
2 => 10.0.0.99:69
28 192.168.36.12:30219 NodePort 1 => 10.0.1.46:80
29 192.168.36.12:30219/i NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
30 0.0.0.0:30219 NodePort 1 => 10.0.1.46:80
31 0.0.0.0:30219/i NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
34 192.168.36.12:30334 NodePort 1 => 10.0.1.46:69
35 192.168.36.12:30334/i NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
36 0.0.0.0:30334 NodePort 1 => 10.0.1.46:69
37 0.0.0.0:30334/i NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
38 10.100.155.93:10069 ClusterIP 1 => 10.0.1.197:69
39 10.100.155.93:10080 ClusterIP 1 => 10.0.1.197:80
42 192.168.36.12:30615 NodePort 1 => 10.0.1.197:80
43 192.168.36.12:30615/i NodePort 1 => 10.0.1.197:80
44 0.0.0.0:30615 NodePort 1 => 10.0.1.197:80
45 0.0.0.0:30615/i NodePort 1 => 10.0.1.197:80
48 192.168.36.12:30843 NodePort 1 => 10.0.1.197:69
49 192.168.36.12:30843/i NodePort 1 => 10.0.1.197:69
50 0.0.0.0:30843 NodePort 1 => 10.0.1.197:69
51 0.0.0.0:30843/i NodePort 1 => 10.0.1.197:69
52 10.100.251.210:10080 ClusterIP 1 => 10.0.1.197:80
53 10.100.251.210:10069 ClusterIP 1 => 10.0.1.197:69
55 192.168.36.12:32104 NodePort 1 => 10.0.1.197:80
56 0.0.0.0:32104 NodePort 1 => 10.0.1.197:80
58 192.168.36.12:32088 NodePort 1 => 10.0.1.197:69
59 0.0.0.0:32088 NodePort 1 => 10.0.1.197:69
60 10.103.5.254:80 ClusterIP 1 => 10.0.1.46:80
2 => 10.0.0.99:80
62 192.168.36.12:31563 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
63 0.0.0.0:31563 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
64 10.109.4.46:80 ClusterIP 1 => 10.0.1.197:80
65 192.168.36.12:31024 NodePort 1 => 10.0.1.197:80
66 192.168.36.12:31024/i NodePort 1 => 10.0.1.197:80
67 0.0.0.0:31024 NodePort 1 => 10.0.1.197:80
68 0.0.0.0:31024/i NodePort 1 => 10.0.1.197:80
71 10.103.213.65:20080 ClusterIP 1 => 10.0.1.46:80
2 => 10.0.0.99:80
72 10.103.213.65:20069 ClusterIP 1 => 10.0.1.46:69
2 => 10.0.0.99:69
73 192.0.2.233:20080 ExternalIPs 1 => 10.0.1.46:80
2 => 10.0.0.99:80
74 192.0.2.233:20069 ExternalIPs 1 => 10.0.1.46:69
2 => 10.0.0.99:69
75 192.168.36.12:31806 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
77 0.0.0.0:31806 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
79 192.168.36.12:31229 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
80 0.0.0.0:31229 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
82 192.168.36.12:8080 HostPort 1 => 10.0.1.197:80
83 0.0.0.0:8080 HostPort 1 => 10.0.1.197:80
85 192.168.36.12:6969 HostPort 1 => 10.0.1.197:69
86 0.0.0.0:6969 HostPort 1 => 10.0.1.197:69
87 192.168.36.11:20080 ExternalIPs 1 => 10.0.1.46:80
2 => 10.0.0.99:80
88 192.168.36.11:20069 ExternalIPs 1 => 10.0.1.46:69
2 => 10.0.0.99:69
89 192.168.1.144:80 LoadBalancer 1 => 10.0.1.197:80
90 192.168.1.144:80/i LoadBalancer 1 => 10.0.1.197:80
91 192.168.1.145:80 LoadBalancer 1 => 10.0.1.46:80
2 => 10.0.0.99:80
92 172.16.42.2:32644 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
93 172.16.42.2:32698 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
94 172.16.42.2:30615 NodePort 1 => 10.0.1.197:80
95 172.16.42.2:30615/i NodePort 1 => 10.0.1.197:80
96 172.16.42.2:30843 NodePort 1 => 10.0.1.197:69
97 172.16.42.2:30843/i NodePort 1 => 10.0.1.197:69
98 172.16.42.2:8080 HostPort 1 => 10.0.1.197:80
99 172.16.42.2:6969 HostPort 1 => 10.0.1.197:69
100 172.16.42.2:30219 NodePort 1 => 10.0.1.46:80
101 172.16.42.2:30219/i NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
102 172.16.42.2:30334 NodePort 1 => 10.0.1.46:69
103 172.16.42.2:30334/i NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
104 172.16.42.2:32104 NodePort 1 => 10.0.1.197:80
105 172.16.42.2:32088 NodePort 1 => 10.0.1.197:69
106 172.16.42.2:30665 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
107 172.16.42.2:32472 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
108 172.16.42.2:31024 NodePort 1 => 10.0.1.197:80
109 172.16.42.2:31024/i NodePort 1 => 10.0.1.197:80
110 172.16.42.2:31229 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
111 172.16.42.2:31806 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
112 172.16.42.2:31563 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
Stderr:
cmd: kubectl exec -n kube-system cilium-ddznm -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
446 Disabled Disabled 11435 k8s:io.cilium.k8s.policy.cluster=default fd02::1a8 10.0.1.61 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
646 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
reserved:host
794 Disabled Disabled 25479 k8s:io.cilium.k8s.policy.cluster=default fd02::153 10.0.1.46 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
925 Disabled Disabled 4 reserved:health fd02::1f4 10.0.1.29 ready
1069 Disabled Disabled 14151 k8s:io.cilium.k8s.policy.cluster=default fd02::1f2 10.0.1.88 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
2645 Disabled Disabled 43605 k8s:io.cilium.k8s.policy.cluster=default fd02::19a 10.0.1.197 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=test-k8s2
Stderr:
cmd: kubectl exec -n kube-system cilium-ktc7d -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.106.95.185:3000 ClusterIP 1 => 10.0.0.152:3000
2 10.101.186.255:9090 ClusterIP 1 => 10.0.0.211:9090
3 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
4 10.96.0.10:53 ClusterIP 1 => 10.0.1.88:53
5 10.96.0.10:9153 ClusterIP 1 => 10.0.1.88:9153
6 10.111.206.32:69 ClusterIP 1 => 10.0.1.46:69
2 => 10.0.0.99:69
7 10.111.206.32:80 ClusterIP 1 => 10.0.1.46:80
2 => 10.0.0.99:80
8 10.111.91.85:10080 ClusterIP 1 => 10.0.1.46:80
2 => 10.0.0.99:80
9 10.111.91.85:10069 ClusterIP 1 => 10.0.1.46:69
2 => 10.0.0.99:69
11 192.168.36.11:32472 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
12 0.0.0.0:32472 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
13 192.168.36.11:30665 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
15 0.0.0.0:30665 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
16 10.109.227.181:10069 ClusterIP 1 => 10.0.1.46:69
2 => 10.0.0.99:69
17 10.109.227.181:10080 ClusterIP 1 => 10.0.1.46:80
2 => 10.0.0.99:80
19 0.0.0.0:32698 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
20 192.168.36.11:32698 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
22 192.168.36.11:32644 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
23 0.0.0.0:32644 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
24 10.109.25.205:10069 ClusterIP 1 => 10.0.1.46:69
2 => 10.0.0.99:69
25 10.109.25.205:10080 ClusterIP 1 => 10.0.1.46:80
2 => 10.0.0.99:80
26 0.0.0.0:30219 NodePort 1 => 10.0.0.99:80
27 0.0.0.0:30219/i NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
30 192.168.36.11:30219 NodePort 1 => 10.0.0.99:80
31 192.168.36.11:30219/i NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
34 192.168.36.11:30334 NodePort 1 => 10.0.0.99:69
35 192.168.36.11:30334/i NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
36 0.0.0.0:30334 NodePort 1 => 10.0.0.99:69
37 0.0.0.0:30334/i NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
38 10.100.155.93:10080 ClusterIP 1 => 10.0.1.197:80
39 10.100.155.93:10069 ClusterIP 1 => 10.0.1.197:69
40 192.168.36.11:30615 NodePort
41 192.168.36.11:30615/i NodePort 1 => 10.0.1.197:80
42 0.0.0.0:30615 NodePort
43 0.0.0.0:30615/i NodePort 1 => 10.0.1.197:80
46 192.168.36.11:30843 NodePort
47 192.168.36.11:30843/i NodePort 1 => 10.0.1.197:69
48 0.0.0.0:30843 NodePort
49 0.0.0.0:30843/i NodePort 1 => 10.0.1.197:69
52 10.100.251.210:10080 ClusterIP 1 => 10.0.1.197:80
53 10.100.251.210:10069 ClusterIP 1 => 10.0.1.197:69
55 192.168.36.11:32104 NodePort 1 => 10.0.1.197:80
56 0.0.0.0:32104 NodePort 1 => 10.0.1.197:80
57 192.168.36.11:32088 NodePort 1 => 10.0.1.197:69
59 0.0.0.0:32088 NodePort 1 => 10.0.1.197:69
60 10.103.5.254:80 ClusterIP 1 => 10.0.1.46:80
2 => 10.0.0.99:80
62 192.168.36.11:31563 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
63 0.0.0.0:31563 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
64 10.109.4.46:80 ClusterIP 1 => 10.0.1.197:80
65 192.168.36.11:31024 NodePort
66 192.168.36.11:31024/i NodePort 1 => 10.0.1.197:80
67 0.0.0.0:31024 NodePort
68 0.0.0.0:31024/i NodePort 1 => 10.0.1.197:80
71 10.103.213.65:20080 ClusterIP 1 => 10.0.1.46:80
2 => 10.0.0.99:80
72 10.103.213.65:20069 ClusterIP 1 => 10.0.1.46:69
2 => 10.0.0.99:69
73 192.0.2.233:20080 ExternalIPs 1 => 10.0.1.46:80
2 => 10.0.0.99:80
74 192.0.2.233:20069 ExternalIPs 1 => 10.0.1.46:69
2 => 10.0.0.99:69
76 192.168.36.11:31806 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
77 0.0.0.0:31806 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
79 192.168.36.11:31229 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
80 0.0.0.0:31229 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
81 192.168.36.11:20080 ExternalIPs 1 => 10.0.1.46:80
2 => 10.0.0.99:80
82 192.168.36.11:20069 ExternalIPs 1 => 10.0.1.46:69
2 => 10.0.0.99:69
83 192.168.1.145:80 LoadBalancer 1 => 10.0.1.46:80
2 => 10.0.0.99:80
84 192.168.1.144:80 LoadBalancer
85 192.168.1.144:80/i LoadBalancer 1 => 10.0.1.197:80
86 172.16.42.1:32644 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
87 172.16.42.1:32698 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
88 172.16.42.1:30615 NodePort
89 172.16.42.1:30615/i NodePort 1 => 10.0.1.197:80
90 172.16.42.1:30843 NodePort
91 172.16.42.1:30843/i NodePort 1 => 10.0.1.197:69
92 172.16.42.1:32104 NodePort 1 => 10.0.1.197:80
93 172.16.42.1:32088 NodePort 1 => 10.0.1.197:69
94 172.16.42.1:30219 NodePort 1 => 10.0.0.99:80
95 172.16.42.1:30219/i NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
96 172.16.42.1:30334 NodePort 1 => 10.0.0.99:69
97 172.16.42.1:30334/i NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
98 172.16.42.1:32472 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
99 172.16.42.1:30665 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
100 172.16.42.1:31024 NodePort
101 172.16.42.1:31024/i NodePort 1 => 10.0.1.197:80
102 172.16.42.1:31806 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
103 172.16.42.1:31229 NodePort 1 => 10.0.1.46:69
2 => 10.0.0.99:69
104 172.16.42.1:31563 NodePort 1 => 10.0.1.46:80
2 => 10.0.0.99:80
Stderr:
cmd: kubectl exec -n kube-system cilium-ktc7d -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
392 Disabled Disabled 12508 k8s:app=prometheus fd02::33 10.0.0.211 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s
k8s:io.kubernetes.pod.namespace=cilium-monitoring
586 Disabled Disabled 1300 k8s:app=grafana fd02::ee 10.0.0.152 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=cilium-monitoring
1023 Disabled Disabled 11435 k8s:io.cilium.k8s.policy.cluster=default fd02::36 10.0.0.182 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
3583 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 regenerating
k8s:node-role.kubernetes.io/master
reserved:host
3797 Disabled Disabled 4 reserved:health fd02::78 10.0.0.188 ready
3995 Disabled Disabled 25479 k8s:io.cilium.k8s.policy.cluster=default fd02::65 10.0.0.99 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
Stderr:
===================== Exiting AfterFailed =====================
18:40:27 STEP: Running AfterEach for block EntireTestsuite K8sServicesTest
18:40:28 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|a712a7ac_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_L2-less_with_Wireguard_provisioned_via_kube-wireguarder_Tests_NodePort_BPF.zip]]
18:40:32 STEP: Running AfterAll block for EntireTestsuite K8sServicesTest Checks service across nodes Tests NodePort BPF Tests L2-less with Wireguard provisioned via kube-wireguarder
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next//1261/artifact/a712a7ac_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_L2-less_with_Wireguard_provisioned_via_kube-wireguarder_Tests_NodePort_BPF.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next//1261/artifact/test_results_Cilium-PR-K8s-1.16-net-next_1261_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next/1261/
If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.