Labels
ci/flake — This is a known failure that occurs in the tree. Please investigate me!
Description
Test Name
K8sServicesTest Checks service across nodes Tests NodePort BPF Tests with vxlan Test NodePort with netfilterCompatMode=true
Failure Output
FAIL: Can not connect to service "http://192.168.36.11:32621" from outside cluster (2/10)
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Can not connect to service "http://192.168.36.11:32621" from outside cluster (2/10)
Expected command: kubectl exec -n kube-system log-gatherer-667kl -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.36.11:32621 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Err: exit status 28
Stdout:
time-> DNS: '0.000031()', Connect: '0.000000',Transfer '0.000000', total '5.001145'
Stderr:
command terminated with exit code 28
/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/service_helpers.go:299
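For reference, the failing check is the curl below, run via kubectl exec inside the log-gatherer pod on the node outside the cluster (k8s3). A minimal sketch to reproduce it by hand, using the pod name and NodePort from this run (they will differ in other runs); curl exit code 28 means the 5 second connect timeout expired before the TCP connection was established:

    # Re-run the external-to-NodePort probe from the log-gatherer pod on k8s3.
    kubectl exec -n kube-system log-gatherer-667kl -- \
      curl --path-as-is -s -D /dev/stderr --fail \
           --connect-timeout 5 --max-time 20 \
           http://192.168.36.11:32621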
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-bpxdd cilium-h952n]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod Ingress Egress
testds-vqxgj
coredns-5495c8f48d-rhv9t
test-k8s2-5b756fd6c5-g59jp
testclient-j8hx5
testclient-ph722
testds-q9snb
Cilium agent 'cilium-bpxdd': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 41 Failed 0
Cilium agent 'cilium-h952n': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 31 Failed 0
Standard Error
03:25:26 STEP: Installing Cilium
03:25:28 STEP: Waiting for Cilium to become ready
03:26:15 STEP: Validating if Kubernetes DNS is deployed
03:26:15 STEP: Checking if deployment is ready
03:26:15 STEP: Checking if kube-dns service is plumbed correctly
03:26:15 STEP: Checking if pods have identity
03:26:15 STEP: Checking if DNS can resolve
03:26:16 STEP: Kubernetes DNS is up and operational
03:26:16 STEP: Validating Cilium Installation
03:26:16 STEP: Performing Cilium controllers preflight check
03:26:16 STEP: Performing Cilium status preflight check
03:26:16 STEP: Performing Cilium health check
03:26:17 STEP: Performing Cilium service preflight check
03:26:17 STEP: Performing K8s service preflight check
03:26:20 STEP: Waiting for cilium-operator to be ready
03:26:20 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
03:26:20 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
03:26:20 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:127.0.0.1]:32621"
03:26:20 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:127.0.0.1]:31530/hello"
03:26:20 STEP: Making 10 HTTP requests from outside cluster to "tftp://192.168.36.12:31530/hello"
03:26:20 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.36.12:32621"
03:26:20 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://10.105.175.32:10080"
03:26:20 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.36.11:32621"
03:26:20 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.36.12:31530/hello"
03:26:20 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://10.105.175.32:10069/hello"
03:26:20 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://127.0.0.1:32621"
03:26:20 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.36.11:31530/hello"
03:26:20 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.36.12]:32621"
03:26:20 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://127.0.0.1:31530/hello"
03:26:20 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.36.12]:31530/hello"
03:26:20 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[::ffff:192.168.36.11]:32621"
03:26:20 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[::ffff:192.168.36.11]:31530/hello"
03:26:20 STEP: Making 10 HTTP requests from outside cluster to "tftp://192.168.36.11:31530/hello"
03:26:20 STEP: Making 10 HTTP requests from outside cluster to "http://192.168.36.12:32621"
03:26:20 STEP: Making 10 HTTP requests from outside cluster to "http://192.168.36.11:32621"
03:26:20 STEP: Making 10 curl requests from testclient-j8hx5 pod to service tftp://192.168.36.12:31530/hello
03:26:20 STEP: Making 10 curl requests from testclient-j8hx5 pod to service http://192.168.36.11:32621
03:26:20 STEP: Making 10 curl requests from testclient-j8hx5 pod to service tftp://10.105.175.32:10069/hello
03:26:20 STEP: Making 10 curl requests from testclient-j8hx5 pod to service tftp://192.168.36.11:31530/hello
03:26:20 STEP: Making 10 curl requests from testclient-j8hx5 pod to service http://[::ffff:192.168.36.11]:32621
03:26:20 STEP: Making 10 curl requests from testclient-j8hx5 pod to service http://[::ffff:192.168.36.12]:32621
03:26:20 STEP: Making 10 curl requests from testclient-j8hx5 pod to service tftp://[::ffff:192.168.36.11]:31530/hello
03:26:20 STEP: Making 10 curl requests from testclient-j8hx5 pod to service tftp://[::ffff:192.168.36.12]:31530/hello
03:26:20 STEP: Making 10 curl requests from testclient-j8hx5 pod to service http://192.168.36.12:32621
03:26:21 STEP: Making 10 curl requests from testclient-j8hx5 pod to service http://10.105.175.32:10080
03:26:21 STEP: Making 10 curl requests from testclient-ph722 pod to service tftp://192.168.36.12:31530/hello
03:26:22 STEP: Making 10 curl requests from testclient-ph722 pod to service tftp://[::ffff:192.168.36.12]:31530/hello
03:26:24 STEP: Making 10 curl requests from testclient-ph722 pod to service http://[::ffff:192.168.36.12]:32621
03:26:24 STEP: Making 10 curl requests from testclient-ph722 pod to service http://10.105.175.32:10080
03:26:24 STEP: Making 10 curl requests from testclient-ph722 pod to service tftp://192.168.36.11:31530/hello
03:26:24 STEP: Making 10 curl requests from testclient-ph722 pod to service tftp://[::ffff:192.168.36.11]:31530/hello
03:26:24 STEP: Making 10 curl requests from testclient-ph722 pod to service http://192.168.36.12:32621
03:26:24 STEP: Making 10 curl requests from testclient-ph722 pod to service http://[::ffff:192.168.36.11]:32621
03:26:24 STEP: Making 10 curl requests from testclient-ph722 pod to service tftp://10.105.175.32:10069/hello
03:26:24 STEP: Making 10 curl requests from testclient-ph722 pod to service http://192.168.36.11:32621
FAIL: Can not connect to service "http://192.168.36.11:32621" from outside cluster (2/10)
Expected command: kubectl exec -n kube-system log-gatherer-667kl -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://192.168.36.11:32621 -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Err: exit status 28
Stdout:
time-> DNS: '0.000031()', Connect: '0.000000',Transfer '0.000000', total '5.001145'
Stderr:
command terminated with exit code 28
=== Test Finished at 2021-08-05T03:26:43Z====
03:26:43 STEP: Running JustAfterEach block for EntireTestsuite K8sServicesTest
===================== TEST FAILED =====================
03:26:43 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-monitoring grafana-7fd557d749-mfpz6 0/1 Running 0 120m 10.0.1.216 k8s2 <none> <none>
cilium-monitoring prometheus-d87f8f984-6v7jr 1/1 Running 0 120m 10.0.1.8 k8s2 <none> <none>
default test-k8s2-5b756fd6c5-g59jp 2/2 Running 0 8m29s 10.0.1.70 k8s2 <none> <none>
default testclient-j8hx5 1/1 Running 0 8m29s 10.0.0.3 k8s1 <none> <none>
default testclient-ph722 1/1 Running 0 8m29s 10.0.1.30 k8s2 <none> <none>
default testds-q9snb 2/2 Running 0 6m54s 10.0.0.121 k8s1 <none> <none>
default testds-vqxgj 2/2 Running 0 8m29s 10.0.1.149 k8s2 <none> <none>
kube-system cilium-bpxdd 1/1 Running 0 79s 192.168.36.12 k8s2 <none> <none>
kube-system cilium-h952n 1/1 Running 0 79s 192.168.36.11 k8s1 <none> <none>
kube-system cilium-operator-867dc756df-2xzg4 1/1 Running 0 79s 192.168.36.13 k8s3 <none> <none>
kube-system cilium-operator-867dc756df-nb4xk 1/1 Running 0 79s 192.168.36.12 k8s2 <none> <none>
kube-system coredns-5495c8f48d-rhv9t 1/1 Running 0 59m 10.0.1.214 k8s2 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 123m 192.168.36.11 k8s1 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 123m 192.168.36.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 123m 192.168.36.11 k8s1 <none> <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 123m 192.168.36.11 k8s1 <none> <none>
kube-system log-gatherer-658n2 1/1 Running 0 120m 192.168.36.12 k8s2 <none> <none>
kube-system log-gatherer-667kl 1/1 Running 0 120m 192.168.36.13 k8s3 <none> <none>
kube-system log-gatherer-nbngp 1/1 Running 0 120m 192.168.36.11 k8s1 <none> <none>
kube-system registry-adder-jmxrc 1/1 Running 0 121m 192.168.36.12 k8s2 <none> <none>
kube-system registry-adder-mzk9p 1/1 Running 0 121m 192.168.36.11 k8s1 <none> <none>
kube-system registry-adder-q6trv 1/1 Running 0 121m 192.168.36.13 k8s3 <none> <none>
Stderr:
Fetching command output from pods [cilium-bpxdd cilium-h952n]
cmd: kubectl exec -n kube-system cilium-bpxdd -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.107.221.131:3000 ClusterIP
2 10.97.215.215:9090 ClusterIP 1 => 10.0.1.8:9090
3 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
4 10.96.0.10:53 ClusterIP 1 => 10.0.1.214:53
5 10.96.0.10:9153 ClusterIP 1 => 10.0.1.214:9153
6 10.110.55.201:80 ClusterIP 1 => 10.0.1.149:80
2 => 10.0.0.121:80
7 10.110.55.201:69 ClusterIP 1 => 10.0.1.149:69
2 => 10.0.0.121:69
8 10.105.175.32:10080 ClusterIP 1 => 10.0.1.149:80
2 => 10.0.0.121:80
9 10.105.175.32:10069 ClusterIP 1 => 10.0.1.149:69
2 => 10.0.0.121:69
10 10.0.2.15:32621 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
11 192.168.36.12:32621 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
12 0.0.0.0:32621 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
13 192.168.36.12:31530 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
14 0.0.0.0:31530 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
15 10.0.2.15:31530 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
16 10.100.22.44:10080 ClusterIP 1 => 10.0.1.149:80
2 => 10.0.0.121:80
17 10.100.22.44:10069 ClusterIP 1 => 10.0.1.149:69
2 => 10.0.0.121:69
18 10.0.2.15:30036 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
19 192.168.36.12:30036 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
20 0.0.0.0:30036 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
21 10.0.2.15:32134 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
22 192.168.36.12:32134 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
23 0.0.0.0:32134 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
24 10.96.56.155:10080 ClusterIP 1 => 10.0.1.149:80
2 => 10.0.0.121:80
25 10.96.56.155:10069 ClusterIP 1 => 10.0.1.149:69
2 => 10.0.0.121:69
26 10.0.2.15:31845 NodePort 1 => 10.0.1.149:80
27 10.0.2.15:31845/i NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
28 192.168.36.12:31845 NodePort 1 => 10.0.1.149:80
29 192.168.36.12:31845/i NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
30 0.0.0.0:31845 NodePort 1 => 10.0.1.149:80
31 0.0.0.0:31845/i NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
32 0.0.0.0:30627 NodePort 1 => 10.0.1.149:69
33 0.0.0.0:30627/i NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
34 10.0.2.15:30627 NodePort 1 => 10.0.1.149:69
35 10.0.2.15:30627/i NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
36 192.168.36.12:30627 NodePort 1 => 10.0.1.149:69
37 192.168.36.12:30627/i NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
38 10.108.204.146:10080 ClusterIP 1 => 10.0.1.70:80
39 10.108.204.146:10069 ClusterIP 1 => 10.0.1.70:69
40 10.0.2.15:32501 NodePort 1 => 10.0.1.70:80
41 10.0.2.15:32501/i NodePort 1 => 10.0.1.70:80
42 192.168.36.12:32501 NodePort 1 => 10.0.1.70:80
43 192.168.36.12:32501/i NodePort 1 => 10.0.1.70:80
44 0.0.0.0:32501 NodePort 1 => 10.0.1.70:80
45 0.0.0.0:32501/i NodePort 1 => 10.0.1.70:80
46 10.0.2.15:30987 NodePort 1 => 10.0.1.70:69
47 10.0.2.15:30987/i NodePort 1 => 10.0.1.70:69
48 192.168.36.12:30987 NodePort 1 => 10.0.1.70:69
49 192.168.36.12:30987/i NodePort 1 => 10.0.1.70:69
50 0.0.0.0:30987 NodePort 1 => 10.0.1.70:69
51 0.0.0.0:30987/i NodePort 1 => 10.0.1.70:69
52 10.110.28.179:10080 ClusterIP 1 => 10.0.1.70:80
53 10.110.28.179:10069 ClusterIP 1 => 10.0.1.70:69
54 10.0.2.15:31014 NodePort 1 => 10.0.1.70:80
55 192.168.36.12:31014 NodePort 1 => 10.0.1.70:80
56 0.0.0.0:31014 NodePort 1 => 10.0.1.70:80
57 10.0.2.15:32122 NodePort 1 => 10.0.1.70:69
58 192.168.36.12:32122 NodePort 1 => 10.0.1.70:69
59 0.0.0.0:32122 NodePort 1 => 10.0.1.70:69
60 10.106.96.243:80 ClusterIP 1 => 10.0.1.149:80
2 => 10.0.0.121:80
61 0.0.0.0:32264 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
62 192.168.36.12:32264 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
63 10.0.2.15:32264 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
64 10.104.222.44:80 ClusterIP 1 => 10.0.1.70:80
65 10.0.2.15:31889 NodePort 1 => 10.0.1.70:80
66 10.0.2.15:31889/i NodePort 1 => 10.0.1.70:80
67 192.168.36.12:31889 NodePort 1 => 10.0.1.70:80
68 192.168.36.12:31889/i NodePort 1 => 10.0.1.70:80
69 0.0.0.0:31889 NodePort 1 => 10.0.1.70:80
70 0.0.0.0:31889/i NodePort 1 => 10.0.1.70:80
71 10.107.233.130:20080 ClusterIP 1 => 10.0.1.149:80
2 => 10.0.0.121:80
72 10.107.233.130:20069 ClusterIP 1 => 10.0.1.149:69
2 => 10.0.0.121:69
73 192.0.2.233:20080 ExternalIPs 1 => 10.0.1.149:80
2 => 10.0.0.121:80
74 192.0.2.233:20069 ExternalIPs 1 => 10.0.1.149:69
2 => 10.0.0.121:69
75 192.168.36.12:30054 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
76 0.0.0.0:30054 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
77 10.0.2.15:30054 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
78 192.168.36.12:31820 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
79 0.0.0.0:31820 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
80 10.0.2.15:31820 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
81 10.0.2.15:8080 HostPort 1 => 10.0.1.70:80
82 192.168.36.12:8080 HostPort 1 => 10.0.1.70:80
83 0.0.0.0:8080 HostPort 1 => 10.0.1.70:80
84 10.0.2.15:6969 HostPort 1 => 10.0.1.70:69
85 192.168.36.12:6969 HostPort 1 => 10.0.1.70:69
86 0.0.0.0:6969 HostPort 1 => 10.0.1.70:69
87 192.168.36.11:20080 ExternalIPs 1 => 10.0.1.149:80
2 => 10.0.0.121:80
88 192.168.36.11:20069 ExternalIPs 1 => 10.0.1.149:69
2 => 10.0.0.121:69
Stderr:
cmd: kubectl exec -n kube-system cilium-bpxdd -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
692 Disabled Disabled 11378 k8s:io.cilium.k8s.policy.cluster=default fd02::11e 10.0.1.149 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
703 Disabled Disabled 5026 k8s:io.cilium.k8s.policy.cluster=default fd02::169 10.0.1.70 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=test-k8s2
1644 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
reserved:host
3632 Disabled Disabled 4 reserved:health fd02::162 10.0.1.160 ready
3759 Disabled Disabled 1901 k8s:io.cilium.k8s.policy.cluster=default fd02::190 10.0.1.214 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
3987 Disabled Disabled 21475 k8s:io.cilium.k8s.policy.cluster=default fd02::182 10.0.1.30 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
Stderr:
cmd: kubectl exec -n kube-system cilium-h952n -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
2 10.96.0.10:9153 ClusterIP 1 => 10.0.1.214:9153
3 10.96.0.10:53 ClusterIP 1 => 10.0.1.214:53
4 10.107.221.131:3000 ClusterIP
5 10.97.215.215:9090 ClusterIP 1 => 10.0.1.8:9090
6 10.110.55.201:80 ClusterIP 1 => 10.0.1.149:80
2 => 10.0.0.121:80
7 10.110.55.201:69 ClusterIP 1 => 10.0.1.149:69
2 => 10.0.0.121:69
8 10.105.175.32:10080 ClusterIP 1 => 10.0.1.149:80
2 => 10.0.0.121:80
9 10.105.175.32:10069 ClusterIP 1 => 10.0.1.149:69
2 => 10.0.0.121:69
10 10.0.2.15:32621 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
11 192.168.36.11:32621 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
12 0.0.0.0:32621 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
13 192.168.36.11:31530 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
14 0.0.0.0:31530 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
15 10.0.2.15:31530 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
16 10.100.22.44:10080 ClusterIP 1 => 10.0.1.149:80
2 => 10.0.0.121:80
17 10.100.22.44:10069 ClusterIP 1 => 10.0.1.149:69
2 => 10.0.0.121:69
18 10.0.2.15:30036 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
19 192.168.36.11:30036 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
20 0.0.0.0:30036 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
21 10.0.2.15:32134 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
22 192.168.36.11:32134 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
23 0.0.0.0:32134 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
24 10.96.56.155:10069 ClusterIP 1 => 10.0.1.149:69
2 => 10.0.0.121:69
25 10.96.56.155:10080 ClusterIP 1 => 10.0.1.149:80
2 => 10.0.0.121:80
26 10.0.2.15:30627 NodePort 1 => 10.0.0.121:69
27 10.0.2.15:30627/i NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
28 192.168.36.11:30627 NodePort 1 => 10.0.0.121:69
29 192.168.36.11:30627/i NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
30 0.0.0.0:30627 NodePort 1 => 10.0.0.121:69
31 0.0.0.0:30627/i NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
32 10.0.2.15:31845 NodePort 1 => 10.0.0.121:80
33 10.0.2.15:31845/i NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
34 192.168.36.11:31845 NodePort 1 => 10.0.0.121:80
35 192.168.36.11:31845/i NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
36 0.0.0.0:31845 NodePort 1 => 10.0.0.121:80
37 0.0.0.0:31845/i NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
38 10.108.204.146:10080 ClusterIP 1 => 10.0.1.70:80
39 10.108.204.146:10069 ClusterIP 1 => 10.0.1.70:69
40 10.0.2.15:32501 NodePort
41 10.0.2.15:32501/i NodePort 1 => 10.0.1.70:80
42 192.168.36.11:32501 NodePort
43 192.168.36.11:32501/i NodePort 1 => 10.0.1.70:80
44 0.0.0.0:32501 NodePort
45 0.0.0.0:32501/i NodePort 1 => 10.0.1.70:80
46 0.0.0.0:30987 NodePort
47 0.0.0.0:30987/i NodePort 1 => 10.0.1.70:69
48 10.0.2.15:30987 NodePort
49 10.0.2.15:30987/i NodePort 1 => 10.0.1.70:69
50 192.168.36.11:30987 NodePort
51 192.168.36.11:30987/i NodePort 1 => 10.0.1.70:69
52 10.110.28.179:10080 ClusterIP 1 => 10.0.1.70:80
53 10.110.28.179:10069 ClusterIP 1 => 10.0.1.70:69
54 0.0.0.0:31014 NodePort 1 => 10.0.1.70:80
55 10.0.2.15:31014 NodePort 1 => 10.0.1.70:80
56 192.168.36.11:31014 NodePort 1 => 10.0.1.70:80
57 10.0.2.15:32122 NodePort 1 => 10.0.1.70:69
58 192.168.36.11:32122 NodePort 1 => 10.0.1.70:69
59 0.0.0.0:32122 NodePort 1 => 10.0.1.70:69
60 10.106.96.243:80 ClusterIP 1 => 10.0.1.149:80
2 => 10.0.0.121:80
61 10.0.2.15:32264 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
62 192.168.36.11:32264 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
63 0.0.0.0:32264 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
64 10.104.222.44:80 ClusterIP 1 => 10.0.1.70:80
65 10.0.2.15:31889 NodePort
66 10.0.2.15:31889/i NodePort 1 => 10.0.1.70:80
67 192.168.36.11:31889 NodePort
68 192.168.36.11:31889/i NodePort 1 => 10.0.1.70:80
69 0.0.0.0:31889 NodePort
70 0.0.0.0:31889/i NodePort 1 => 10.0.1.70:80
71 10.107.233.130:20080 ClusterIP 1 => 10.0.1.149:80
2 => 10.0.0.121:80
72 10.107.233.130:20069 ClusterIP 1 => 10.0.1.149:69
2 => 10.0.0.121:69
73 192.0.2.233:20080 ExternalIPs 1 => 10.0.1.149:80
2 => 10.0.0.121:80
74 192.0.2.233:20069 ExternalIPs 1 => 10.0.1.149:69
2 => 10.0.0.121:69
75 10.0.2.15:30054 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
76 192.168.36.11:30054 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
77 0.0.0.0:30054 NodePort 1 => 10.0.1.149:80
2 => 10.0.0.121:80
78 10.0.2.15:31820 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
79 192.168.36.11:31820 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
80 0.0.0.0:31820 NodePort 1 => 10.0.1.149:69
2 => 10.0.0.121:69
81 192.168.36.11:20069 ExternalIPs 1 => 10.0.1.149:69
2 => 10.0.0.121:69
82 192.168.36.11:20080 ExternalIPs 1 => 10.0.1.149:80
2 => 10.0.0.121:80
Stderr:
cmd: kubectl exec -n kube-system cilium-h952n -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
412 Disabled Disabled 11378 k8s:io.cilium.k8s.policy.cluster=default fd02::bb 10.0.0.121 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
416 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:node-role.kubernetes.io/master
k8s:status=lockdown
reserved:host
1045 Disabled Disabled 4 reserved:health fd02::5e 10.0.0.180 ready
3099 Disabled Disabled 21475 k8s:io.cilium.k8s.policy.cluster=default fd02::a7 10.0.0.3 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
Stderr:
===================== Exiting AfterFailed =====================
03:27:32 STEP: Running AfterEach for block EntireTestsuite K8sServicesTest
03:27:33 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|38c16ee1_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_vxlan_Test_NodePort_with_netfilterCompatMode=true.zip]]
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next/1194/artifact/38c16ee1_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_vxlan_Test_NodePort_with_netfilterCompatMode=true.zip/38c16ee1_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_vxlan_Test_NodePort_with_netfilterCompatMode=true.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next/1194/artifact/test_results_Cilium-PR-K8s-1.16-net-next_1194_BDD-Test-PR.zip/test_results_Cilium-PR-K8s-1.16-net-next_1194_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next/1194/
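To narrow down whether the NodePort frontend that timed out was actually plumbed on k8s1, the BPF service table dumped above can be filtered for the failing port. A sketch using the agent pod name and port from this run (not part of the test itself):

    # Show only the service entries for NodePort 32621 on the k8s1 agent.
    kubectl exec -n kube-system cilium-h952n -c cilium-agent -- \
      cilium service list | grep 32621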
If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.