Closed
Labels: area/CI (Continuous Integration testing issue or flake), needs/triage (This issue requires triaging to establish severity and next steps)
Description
Uploaded file:
2ba30b89_K8sServicesTest_Checks_service_across_nodes_Tests_NodePort_BPF_Tests_with_direct_routing_Tests_HostPort.zip
Stacktrace
/home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/k8s-1.11-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:430
k8s1 host can not connect to service "tftp://192.168.36.12:6969/hello"
Expected command: kubectl exec -n kube-system log-gatherer-xcwxq -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 8 tftp://192.168.36.12:6969/hello -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Stdout:
time-> DNS: '0.000022()', Connect: '0.000038',Transfer '0.000000', total '5.001587'
Stderr:
command terminated with exit code 28
/home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-Validated/k8s-1.11-gopath/src/github.com/cilium/cilium/test/k8sT/Services.go:657
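Exit code 28 is curl's "operation timed out" error, so the TFTP probe from k8s1 to the k8s2 HostPort timed out rather than being refused. A minimal sketch for re-running the same probe by hand while the cluster is still up (the pod name `log-gatherer-xcwxq` comes from this run and will differ on other runs):

```bash
# Re-run the failing TFTP HostPort probe from the k8s1 log-gatherer pod.
# curl exit code 28 means the --connect-timeout/--max-time budget expired.
for i in $(seq 1 10); do
  kubectl exec -n kube-system log-gatherer-xcwxq -- \
    curl --path-as-is -s --fail \
         --connect-timeout 5 --max-time 8 \
         tftp://192.168.36.12:6969/hello \
         -w "attempt ${i}: total '%{time_total}'\n"
  echo "attempt ${i} exit code: $?"
done
```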
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-fpcp5 cilium-xxts6]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod Ingress Egress
testds-bxn89
app1-798d4f944d-gt8pt
app3-68fb594d47-nxxmw
coredns-687db6485c-qcst5
testclient-25l9t
testclient-tmrnh
app2-dc85b4585-mwtdc
test-k8s2-848b6f7864-8p5xm
testds-2z4wc
Cilium agent 'cilium-fpcp5': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 23 Failed 0
Cilium agent 'cilium-xxts6': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 27 Failed 0
Standard Error
STEP: Making 10 curl requests from k8s2 to "http://192.168.36.12:8080"
STEP: Making 10 curl requests from k8s2 to "tftp://192.168.36.12:6969/hello"
STEP: Making 10 curl requests from k8s1 to "http://192.168.36.12:8080"
STEP: Making 10 curl requests from k8s1 to "tftp://192.168.36.12:6969/hello"
=== Test Finished at 2020-03-23T15:03:34Z====
===================== TEST FAILED =====================
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default test-k8s2-848b6f7864-8p5xm 2/2 Running 0 7m 10.10.1.82 k8s2 <none>
default testclient-25l9t 1/1 Running 0 7m 10.10.1.87 k8s2 <none>
default testclient-tmrnh 1/1 Running 0 7m 10.10.0.3 k8s1 <none>
default testds-2z4wc 2/2 Running 0 7m 10.10.1.50 k8s2 <none>
default testds-bxn89 2/2 Running 0 7m 10.10.0.39 k8s1 <none>
external-ips-test app1-798d4f944d-gt8pt 2/2 Running 0 57m 10.10.0.174 k8s1 <none>
external-ips-test app2-dc85b4585-mwtdc 2/2 Running 0 57m 10.10.0.150 k8s1 <none>
external-ips-test app3-68fb594d47-nxxmw 2/2 Running 0 57m 10.10.1.179 k8s2 <none>
external-ips-test host-client-f57qx 1/1 Running 0 57m 192.168.36.11 k8s1 <none>
external-ips-test host-client-pq5qn 1/1 Running 0 57m 192.168.36.12 k8s2 <none>
external-ips-test host-server-1-56c9467d4b-wkslh 2/2 Running 0 57m 192.168.36.11 k8s1 <none>
external-ips-test host-server-2-b8d89c58c-2gtds 2/2 Running 0 57m 192.168.36.11 k8s1 <none>
kube-system cilium-fpcp5 1/1 Running 0 3m 192.168.36.11 k8s1 <none>
kube-system cilium-operator-669595bd79-jl6pb 1/1 Running 0 3m 192.168.36.12 k8s2 <none>
kube-system cilium-xxts6 1/1 Running 0 3m 192.168.36.12 k8s2 <none>
kube-system coredns-687db6485c-qcst5 1/1 Running 0 3m 10.10.1.70 k8s2 <none>
kube-system etcd-k8s1 1/1 Running 0 1h 192.168.36.11 k8s1 <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 1h 192.168.36.11 k8s1 <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 1h 192.168.36.11 k8s1 <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 1h 192.168.36.11 k8s1 <none>
kube-system log-gatherer-8pttx 1/1 Running 0 1h 192.168.36.12 k8s2 <none>
kube-system log-gatherer-b6xb7 1/1 Running 0 1h 192.168.36.13 k8s3 <none>
kube-system log-gatherer-xcwxq 1/1 Running 0 1h 192.168.36.11 k8s1 <none>
kube-system registry-adder-9nx46 1/1 Running 0 1h 192.168.36.12 k8s2 <none>
kube-system registry-adder-nw2nh 1/1 Running 0 1h 192.168.36.13 k8s3 <none>
kube-system registry-adder-qcxxv 1/1 Running 0 1h 192.168.36.11 k8s1 <none>
Stderr:
Fetching command output from pods [cilium-fpcp5 cilium-xxts6]
cmd: kubectl exec -n kube-system cilium-fpcp5 -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.10:53 ClusterIP 1 => 10.10.1.70:53
2 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
3 10.96.106.15:82 ClusterIP 1 => 10.10.0.174:80
4 192.0.2.223:82 ExternalIPs 1 => 10.10.0.174:80
5 172.28.128.3:82 ExternalIPs 1 => 10.10.0.174:80
6 192.168.36.11:82 ExternalIPs 1 => 10.10.0.174:80
7 10.111.205.107:30002 ClusterIP 1 => 10.10.0.174:80
8 192.168.36.11:30002 ExternalIPs 1 => 10.10.0.174:80
9 192.0.2.223:30002 ExternalIPs 1 => 10.10.0.174:80
10 172.28.128.3:30002 ExternalIPs 1 => 10.10.0.174:80
11 10.103.71.137:83 ClusterIP 1 => 10.10.0.150:80
12 0.0.0.0:30003 NodePort 1 => 10.10.0.150:80
13 192.168.36.11:30005 NodePort 1 => 192.168.36.11:30006
14 10.10.0.72:30003 NodePort 1 => 10.10.0.150:80
15 10.96.52.158:84 ClusterIP 1 => 192.168.36.11:20004
16 10.10.0.72:30004 NodePort 1 => 192.168.36.11:20004
17 0.0.0.0:30004 NodePort 1 => 192.168.36.11:20004
18 192.168.36.11:30003 NodePort 1 => 10.10.0.150:80
19 10.108.76.177:85 ClusterIP 1 => 192.168.36.11:30006
20 0.0.0.0:30005 NodePort 1 => 192.168.36.11:30006
21 192.168.36.11:30004 NodePort 1 => 192.168.36.11:20004
22 10.10.0.72:30005 NodePort 1 => 192.168.36.11:30006
39 10.102.193.208:80 ClusterIP 1 => 10.10.1.50:80
2 => 10.10.0.39:80
40 10.102.193.208:69 ClusterIP 1 => 10.10.1.50:69
2 => 10.10.0.39:69
41 10.111.98.12:10069 ClusterIP 1 => 10.10.1.50:69
2 => 10.10.0.39:69
42 10.111.98.12:10080 ClusterIP 1 => 10.10.1.50:80
2 => 10.10.0.39:80
43 0.0.0.0:30549 NodePort 1 => 10.10.1.50:80
2 => 10.10.0.39:80
44 192.168.36.11:30549 NodePort 1 => 10.10.1.50:80
2 => 10.10.0.39:80
45 10.104.143.106:2379 ClusterIP
46 10.10.0.72:30549 NodePort 1 => 10.10.1.50:80
2 => 10.10.0.39:80
47 0.0.0.0:30467 NodePort 1 => 10.10.1.50:69
2 => 10.10.0.39:69
48 192.168.36.11:30467 NodePort 1 => 10.10.1.50:69
2 => 10.10.0.39:69
49 10.10.0.72:30467 NodePort 1 => 10.10.1.50:69
2 => 10.10.0.39:69
50 10.109.228.117:10069 ClusterIP 1 => 10.10.1.50:69
2 => 10.10.0.39:69
51 10.109.228.117:10080 ClusterIP 1 => 10.10.1.50:80
2 => 10.10.0.39:80
52 192.168.36.11:31146 NodePort 1 => 10.10.0.39:69
53 10.10.0.72:31146 NodePort 1 => 10.10.0.39:69
54 0.0.0.0:31146 NodePort 1 => 10.10.0.39:69
55 0.0.0.0:30932 NodePort 1 => 10.10.0.39:80
56 192.168.36.11:30932 NodePort 1 => 10.10.0.39:80
57 10.10.0.72:30932 NodePort 1 => 10.10.0.39:80
58 10.98.128.75:10069 ClusterIP 1 => 10.10.1.82:69
59 10.98.128.75:10080 ClusterIP 1 => 10.10.1.82:80
60 10.10.0.72:32205 NodePort
61 0.0.0.0:32205 NodePort
62 192.168.36.11:32205 NodePort
63 0.0.0.0:31219 NodePort
64 192.168.36.11:31219 NodePort
65 10.10.0.72:31219 NodePort
66 10.98.7.16:10080 ClusterIP 1 => 10.10.1.82:80
67 10.98.7.16:10069 ClusterIP 1 => 10.10.1.82:69
68 0.0.0.0:31912 NodePort 1 => 10.10.1.82:80
69 192.168.36.11:31912 NodePort 1 => 10.10.1.82:80
70 10.10.0.72:31912 NodePort 1 => 10.10.1.82:80
71 0.0.0.0:30396 NodePort 1 => 10.10.1.82:69
72 192.168.36.11:30396 NodePort 1 => 10.10.1.82:69
73 10.10.0.72:30396 NodePort 1 => 10.10.1.82:69
74 10.110.81.55:80 ClusterIP 1 => 10.10.1.50:80
2 => 10.10.0.39:80
75 0.0.0.0:30636 NodePort 1 => 10.10.1.50:80
2 => 10.10.0.39:80
76 192.168.36.11:30636 NodePort 1 => 10.10.1.50:80
2 => 10.10.0.39:80
77 10.10.0.72:30636 NodePort 1 => 10.10.1.50:80
2 => 10.10.0.39:80
78 10.106.94.198:80 ClusterIP 1 => 10.10.1.82:80
79 192.168.36.11:30070 NodePort
80 10.10.0.72:30070 NodePort
81 0.0.0.0:30070 NodePort
82 192.168.36.12:8080 HostPort 1 => 10.10.1.82:80
83 192.168.36.12:6969 HostPort 1 => 10.10.1.82:69
Stderr:
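Several entries in the list above have an empty Backend column (for example IDs 45, 60-65, and 79-81). An empty backend list is not necessarily a bug (a service whose endpoints are not yet ready looks the same), but it is worth correlating with the frontends involved in the failure. A quick heuristic sketch that flags such frontends by parsing the human-readable table (service rows start with a numeric ID; rows with backends contain "=>"):

```bash
# Flag service frontends with no backends in the human-readable table.
# Heuristic: service rows start with a numeric ID; any row that has a
# backend contains "=>", either inline or on indented continuation lines.
kubectl exec -n kube-system cilium-fpcp5 -- cilium service list \
  | awk 'NR > 1 && /^[0-9]+ / && $0 !~ /=>/ { print "no backend:", $0 }'
```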
cmd: kubectl exec -n kube-system cilium-fpcp5 -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
12 Disabled Disabled 4089 k8s:id=app1 f00d::a0b:0:0:60eb 10.10.0.174 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=external-ips-test
163 Disabled Disabled 18474 k8s:io.cilium.k8s.policy.cluster=default f00d::a0b:0:0:9485 10.10.0.3 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
449 Disabled Disabled 12482 k8s:id=app2 f00d::a0b:0:0:a84f 10.10.0.150 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=external-ips-test
2850 Disabled Disabled 27308 k8s:io.cilium.k8s.policy.cluster=default f00d::a0b:0:0:b957 10.10.0.39 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
3568 Disabled Disabled 4 reserved:health f00d::a0b:0:0:4719 10.10.0.52 ready
Stderr:
cmd: kubectl exec -n kube-system cilium-xxts6 -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
2 10.96.0.10:53 ClusterIP 1 => 10.10.1.70:53
3 10.96.106.15:82 ClusterIP 1 => 10.10.0.174:80
4 192.0.2.223:82 ExternalIPs 1 => 10.10.0.174:80
5 172.28.128.3:82 ExternalIPs 1 => 10.10.0.174:80
6 192.168.36.11:82 ExternalIPs 1 => 10.10.0.174:80
7 10.111.205.107:30002 ClusterIP 1 => 10.10.0.174:80
8 172.28.128.3:30002 ExternalIPs 1 => 10.10.0.174:80
9 192.168.36.11:30002 ExternalIPs 1 => 10.10.0.174:80
10 192.0.2.223:30002 ExternalIPs 1 => 10.10.0.174:80
11 10.103.71.137:83 ClusterIP 1 => 10.10.0.150:80
12 0.0.0.0:30003 NodePort 1 => 10.10.0.150:80
13 192.168.36.12:30003 NodePort 1 => 10.10.0.150:80
14 10.10.1.254:30003 NodePort 1 => 10.10.0.150:80
15 10.96.52.158:84 ClusterIP 1 => 192.168.36.11:20004
16 0.0.0.0:30004 NodePort 1 => 192.168.36.11:20004
17 192.168.36.12:30004 NodePort 1 => 192.168.36.11:20004
18 10.10.1.254:30004 NodePort 1 => 192.168.36.11:20004
19 10.108.76.177:85 ClusterIP 1 => 192.168.36.11:30006
20 0.0.0.0:30005 NodePort 1 => 192.168.36.11:30006
21 192.168.36.12:30005 NodePort 1 => 192.168.36.11:30006
22 10.10.1.254:30005 NodePort 1 => 192.168.36.11:30006
39 10.102.193.208:80 ClusterIP 1 => 10.10.1.50:80
2 => 10.10.0.39:80
40 10.102.193.208:69 ClusterIP 1 => 10.10.1.50:69
2 => 10.10.0.39:69
41 10.111.98.12:10080 ClusterIP 1 => 10.10.1.50:80
2 => 10.10.0.39:80
42 10.111.98.12:10069 ClusterIP 1 => 10.10.1.50:69
2 => 10.10.0.39:69
43 0.0.0.0:30549 NodePort 1 => 10.10.1.50:80
2 => 10.10.0.39:80
44 192.168.36.12:30549 NodePort 1 => 10.10.1.50:80
2 => 10.10.0.39:80
45 10.104.143.106:2379 ClusterIP
46 10.10.1.254:30549 NodePort 1 => 10.10.1.50:80
2 => 10.10.0.39:80
47 0.0.0.0:30467 NodePort 1 => 10.10.1.50:69
2 => 10.10.0.39:69
48 192.168.36.12:30467 NodePort 1 => 10.10.1.50:69
2 => 10.10.0.39:69
49 10.10.1.254:30467 NodePort 1 => 10.10.1.50:69
2 => 10.10.0.39:69
50 10.109.228.117:10069 ClusterIP 1 => 10.10.1.50:69
2 => 10.10.0.39:69
51 10.109.228.117:10080 ClusterIP 1 => 10.10.1.50:80
2 => 10.10.0.39:80
52 192.168.36.12:30932 NodePort 1 => 10.10.1.50:80
53 10.10.1.254:30932 NodePort 1 => 10.10.1.50:80
54 0.0.0.0:30932 NodePort 1 => 10.10.1.50:80
55 0.0.0.0:31146 NodePort 1 => 10.10.1.50:69
56 192.168.36.12:31146 NodePort 1 => 10.10.1.50:69
57 10.10.1.254:31146 NodePort 1 => 10.10.1.50:69
58 10.98.128.75:10080 ClusterIP 1 => 10.10.1.82:80
59 10.98.128.75:10069 ClusterIP 1 => 10.10.1.82:69
60 0.0.0.0:31219 NodePort 1 => 10.10.1.82:80
61 192.168.36.12:31219 NodePort 1 => 10.10.1.82:80
62 10.10.1.254:31219 NodePort 1 => 10.10.1.82:80
63 0.0.0.0:32205 NodePort 1 => 10.10.1.82:69
64 192.168.36.12:32205 NodePort 1 => 10.10.1.82:69
65 10.10.1.254:32205 NodePort 1 => 10.10.1.82:69
66 10.98.7.16:10080 ClusterIP 1 => 10.10.1.82:80
67 10.98.7.16:10069 ClusterIP 1 => 10.10.1.82:69
68 0.0.0.0:31912 NodePort 1 => 10.10.1.82:80
69 192.168.36.12:31912 NodePort 1 => 10.10.1.82:80
70 10.10.1.254:31912 NodePort 1 => 10.10.1.82:80
71 0.0.0.0:30396 NodePort 1 => 10.10.1.82:69
72 192.168.36.12:30396 NodePort 1 => 10.10.1.82:69
73 10.10.1.254:30396 NodePort 1 => 10.10.1.82:69
74 10.110.81.55:80 ClusterIP 1 => 10.10.1.50:80
2 => 10.10.0.39:80
75 0.0.0.0:30636 NodePort 1 => 10.10.1.50:80
2 => 10.10.0.39:80
76 192.168.36.12:30636 NodePort 1 => 10.10.1.50:80
2 => 10.10.0.39:80
77 10.10.1.254:30636 NodePort 1 => 10.10.1.50:80
2 => 10.10.0.39:80
78 10.106.94.198:80 ClusterIP 1 => 10.10.1.82:80
79 0.0.0.0:30070 NodePort 1 => 10.10.1.82:80
80 192.168.36.12:30070 NodePort 1 => 10.10.1.82:80
81 10.10.1.254:30070 NodePort 1 => 10.10.1.82:80
82 192.168.36.12:8080 HostPort 1 => 10.10.1.82:80
83 192.168.36.12:6969 HostPort 1 => 10.10.1.82:69
Stderr:
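The HostPort frontend that timed out, 192.168.36.12:6969 (ID 83), is present in the k8s2 agent's service list with backend 10.10.1.82:69, so the agent-side state looks correct. The next thing to check when this reproduces is whether the entry also made it into the datapath. A sketch, assuming the `cilium bpf lb list` subcommand is available in the agent pod:

```bash
# Confirm the failing HostPort frontend exists in the BPF load-balancer maps
# on k8s2, where the 192.168.36.12:6969 -> 10.10.1.82:69 translation happens.
kubectl exec -n kube-system cilium-xxts6 -- cilium bpf lb list \
  | grep -A 2 '192.168.36.12:6969'
```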
cmd: kubectl exec -n kube-system cilium-xxts6 -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
83 Disabled Disabled 55670 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:4174 10.10.1.70 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
503 Disabled Disabled 27308 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:79d0 10.10.1.50 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
686 Disabled Disabled 10426 k8s:id=app3 f00d::a0c:0:0:6fb1 10.10.1.179 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=external-ips-test
1736 Disabled Disabled 24008 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:97f4 10.10.1.82 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=test-k8s2
2157 Disabled Disabled 4 reserved:health f00d::a0c:0:0:5623 10.10.1.2 ready
2508 Disabled Disabled 18474 k8s:io.cilium.k8s.policy.cluster=default f00d::a0c:0:0:4907 10.10.1.87 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
Stderr:
===================== Exiting AfterFailed =====================
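Since the k8s2 service and endpoint state above looks consistent, a useful follow-up when the flake reproduces is to watch for datapath drops on k8s2 while the probe from k8s1 is re-run. A hedged sketch using the agent's monitor:

```bash
# In one terminal: watch for drop notifications on the k8s2 agent.
kubectl exec -n kube-system cilium-xxts6 -- cilium monitor --type drop

# In another terminal: re-run the TFTP probe from k8s1 (see the sketch above).
kubectl exec -n kube-system log-gatherer-xcwxq -- \
  curl --path-as-is -s --fail --connect-timeout 5 --max-time 8 \
  tftp://192.168.36.12:6969/hello
```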