Labels: area/CI (Continuous Integration testing issue or flake), needs/triage (this issue requires triaging to establish severity and next steps)
Description
CI failure seen in the following runs:
https://jenkins.cilium.io/job/Ginkgo-CI-Tests-Pipeline/6623/testReport/junit/Suite-k8s-1/10/K8sPolicyTest_Basic_Test_Validate_to_entities_policies_Validate_toEntities_Cluster/
https://jenkins.cilium.io/job/Ginkgo-CI-Tests-Pipeline/6621/
https://jenkins.cilium.io/job/Ginkgo-CI-Tests-Pipeline/6614/
Stacktrace
/home/jenkins/workspace/Ginkgo-CI-Tests-Pipeline/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:410
DNS connectivity of www.google.com from pod "app3-579cbb5fcd-xfg2m"
Expected command: kubectl exec -n default app3-579cbb5fcd-xfg2m -- host www.google.com
To succeed, but it failed:
Exitcode: 1
Stdout:
;; connection timed out; no servers could be reached
Stderr:
command terminated with exit code 1
/home/jenkins/workspace/Ginkgo-CI-Tests-Pipeline/src/github.com/cilium/cilium/test/k8sT/Policies.go:571
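For context, the policy under test only allows egress to the cluster entity. A minimal sketch of such a CiliumNetworkPolicy, reconstructed from the Cilium docs (the selector used by the actual test manifest may differ):

kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: to-entities-cluster
spec:
  endpointSelector:
    matchLabels:
      zgroup: testapp
  egress:
  - toEntities:
    - cluster
EOF

Since kube-dns is part of the cluster entity, the DNS lookup above should be allowed under this policy; the timeout suggests either a dropped pod-to-kube-dns path or an upstream resolution problem rather than an expected deny.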
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-gktlp cilium-sxcm7]
Netpols loaded:
CiliumNetworkPolicies loaded: default::to-entities-cluster
Endpoint Policy Enforcement:
Pod Ingress Egress
etcd-operator-65476dd78f-rgctc false false
kube-dns-f4d788bb7-dnrsm false false
app1-66df65bccc-7pp55 false true
cilium-etcd-6rzpv86m68 false false
cilium-etcd-zfb9bpcwf2 false false
cilium-operator-5fb956fc65-4ck9b false false
app1-66df65bccc-2zwpk false true
app2-5c44ff87c-pqpqf false true
app3-579cbb5fcd-xfg2m false true
cilium-etcd-2c94lx95xh false false
Cilium agent 'cilium-gktlp': Status: Ok Health: Ok Nodes "k8s2 k8s1" ContinerRuntime: Ok Kubernetes: Ok KVstore: Ok Controllers: Total 31 Failed 0
Cilium agent 'cilium-sxcm7': Status: Ok Health: Ok Nodes "k8s2 k8s1" ContinerRuntime: Ok Kubernetes: Ok KVstore: Ok Controllers: Total 43 Failed 0
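The enforcement table shows egress policy active only on the app pods, and both agents report healthy. A hypothetical next triage step (standard Cilium CLI commands; pod names taken from this run) is to watch for drops on the agent managing app3 while re-running the failing lookup:

# Watch for policy drops on the node hosting app3 (k8s1 / cilium-sxcm7)
# while repeating the lookup that timed out.
kubectl -n kube-system exec cilium-gktlp -- cilium monitor --type drop &
kubectl -n default exec app3-579cbb5fcd-xfg2m -- host www.google.com

Note app3 runs on k8s1, so the agent to monitor is cilium-sxcm7 (see the pod listing below); adjust the exec target accordingly. If packets to kube-dns show up as policy drops, the toEntities Cluster policy is not matching DNS traffic as expected; if nothing is dropped, the problem is likely upstream of Cilium.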
Standard Error
STEP: Installing toEntities Cluster
STEP: Verifying policy correctness
STEP: HTTP connectivity to google.com
STEP: ICMP connectivity to 8.8.8.8
STEP: DNS lookup of google.com
STEP: HTTP connectivity from pod to pod
STEP: HTTP connectivity to google.com
STEP: ICMP connectivity to 8.8.8.8
STEP: DNS lookup of google.com
=== Test Finished at 2019-04-15T08:36:10Z====
===================== TEST FAILED =====================
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default app1-66df65bccc-2zwpk 1/1 Running 0 4m 10.10.0.134 k8s1
default app1-66df65bccc-7pp55 1/1 Running 0 4m 10.10.0.94 k8s1
default app2-5c44ff87c-pqpqf 1/1 Running 0 4m 10.10.0.156 k8s1
default app3-579cbb5fcd-xfg2m 1/1 Running 0 4m 10.10.0.139 k8s1
kube-system cilium-etcd-2c94lx95xh 1/1 Running 0 20m 10.10.0.37 k8s1
kube-system cilium-etcd-6rzpv86m68 1/1 Running 0 20m 10.10.1.41 k8s2
kube-system cilium-etcd-operator-77d4ddf8c6-xvddq 1/1 Running 0 22m 192.168.36.11 k8s1
kube-system cilium-etcd-zfb9bpcwf2 1/1 Running 0 19m 10.10.1.148 k8s2
kube-system cilium-gktlp 1/1 Running 0 5m 192.168.36.12 k8s2
kube-system cilium-operator-5fb956fc65-4ck9b 1/1 Running 0 22m 10.10.0.60 k8s1
kube-system cilium-sxcm7 1/1 Running 0 6m 192.168.36.11 k8s1
kube-system etcd-k8s1 1/1 Running 0 28m 192.168.36.11 k8s1
kube-system etcd-operator-65476dd78f-rgctc 1/1 Running 0 22m 10.10.1.185 k8s2
kube-system kube-apiserver-k8s1 1/1 Running 0 27m 192.168.36.11 k8s1
kube-system kube-controller-manager-k8s1 1/1 Running 0 28m 192.168.36.11 k8s1
kube-system kube-dns-f4d788bb7-dnrsm 3/3 Running 0 28m 10.10.1.73 k8s2
kube-system kube-proxy-5dnvk 1/1 Running 0 22m 192.168.36.12 k8s2
kube-system kube-proxy-qd75d 1/1 Running 0 28m 192.168.36.11 k8s1
kube-system kube-scheduler-k8s1 1/1 Running 0 28m 192.168.36.11 k8s1
Stderr:
cmd: kubectl exec -n kube-system cilium-gktlp -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Backend
1 10.96.0.1:443 1 => 192.168.36.11:6443
2 10.96.0.10:53 1 => 10.10.1.73:53
3 10.109.91.70:2379 1 => 10.10.1.41:2379
2 => 10.10.0.37:2379
3 => 10.10.1.148:2379
5 10.106.188.60:80 1 => 10.10.0.134:80
2 => 10.10.0.94:80
Stderr:
level=warning msg="Error reading default config: Config File \"ciliumd\" Not Found in \"[/root]\"" subsys=daemon
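Both agents resolve the kube-dns ClusterIP 10.96.0.10:53 to the single backend 10.10.1.73:53. A possible way to separate service-translation problems from policy drops is to repeat the lookup against the VIP and the backend IP explicitly (host accepts a DNS server argument):

# Query via the service VIP, then via the kube-dns pod IP directly.
kubectl -n default exec app3-579cbb5fcd-xfg2m -- host www.google.com 10.96.0.10
kubectl -n default exec app3-579cbb5fcd-xfg2m -- host www.google.com 10.10.1.73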
cmd: kubectl exec -n kube-system cilium-gktlp -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
382 Disabled Disabled 101 k8s:app=etcd f00d::a0a:100:0:3029 10.10.1.148 ready
k8s:etcd_cluster=cilium-etcd
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.cilium/app=etcd-operator
k8s:io.kubernetes.pod.namespace=kube-system
630 Disabled Disabled 4 reserved:health f00d::a0a:100:0:6123 10.10.1.36 ready
2839 Disabled Disabled 101 k8s:app=etcd f00d::a0a:100:0:13c3 10.10.1.41 ready
k8s:etcd_cluster=cilium-etcd
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.cilium/app=etcd-operator
k8s:io.kubernetes.pod.namespace=kube-system
3170 Disabled Disabled 102 k8s:io.cilium.k8s.policy.cluster=default f00d::a0a:100:0:7e31 10.10.1.73 ready
k8s:io.cilium.k8s.policy.serviceaccount=kube-dns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
3745 Disabled Disabled 100 k8s:io.cilium.k8s.policy.cluster=default f00d::a0a:100:0:2ca8 10.10.1.185 ready
k8s:io.cilium.k8s.policy.serviceaccount=cilium-etcd-sa
k8s:io.cilium/app=etcd-operator
k8s:io.kubernetes.pod.namespace=kube-system
Stderr:
level=warning msg="Error reading default config: Config File \"ciliumd\" Not Found in \"[/root]\"" subsys=daemon
cmd: kubectl exec -n kube-system cilium-sxcm7 -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Backend
1 10.96.0.1:443 1 => 192.168.36.11:6443
2 10.96.0.10:53 1 => 10.10.1.73:53
3 10.109.91.70:2379 1 => 10.10.0.37:2379
2 => 10.10.1.148:2379
3 => 10.10.1.41:2379
5 10.106.188.60:80 1 => 10.10.0.134:80
2 => 10.10.0.94:80
Stderr:
level=warning msg="Error reading default config: Config File \"ciliumd\" Not Found in \"[/root]\"" subsys=daemon
cmd: kubectl exec -n kube-system cilium-sxcm7 -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
43 Disabled Enabled 55289 k8s:id=app1 f00d::a0a:0:0:7021 10.10.0.134 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
193 Disabled Disabled 4 reserved:health f00d::a0a:0:0:6448 10.10.0.119 ready
931 Disabled Enabled 55289 k8s:id=app1 f00d::a0a:0:0:34cf 10.10.0.94 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
2963 Disabled Enabled 10868 k8s:appSecond=true f00d::a0a:0:0:68d3 10.10.0.156 ready
k8s:id=app2
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app2-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
3703 Disabled Disabled 105 k8s:io.cilium.k8s.policy.cluster=default f00d::a0a:0:0:82d7 10.10.0.60 ready
k8s:io.cilium.k8s.policy.serviceaccount=cilium-operator
k8s:io.cilium/app=operator
k8s:io.kubernetes.pod.namespace=kube-system
k8s:name=cilium-operator
3850 Disabled Disabled 101 k8s:app=etcd f00d::a0a:0:0:6ca 10.10.0.37 ready
k8s:etcd_cluster=cilium-etcd
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.cilium/app=etcd-operator
k8s:io.kubernetes.pod.namespace=kube-system
3988 Disabled Enabled 1222 k8s:id=app3 f00d::a0a:0:0:ceb7 10.10.0.139 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
Stderr:
level=warning msg="Error reading default config: Config File \"ciliumd\" Not Found in \"[/root]\"" subsys=daemon
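The endpoint list confirms app3 (identity 1222) has egress enforcement enabled and is managed by cilium-sxcm7. As a final sanity check, one could ask that agent for a policy verdict directly; a sketch using cilium policy trace with label-based source and destination, as documented in the Cilium troubleshooting guide:

# Ask the agent whether app3's egress to kube-dns on port 53 is allowed
# by the currently loaded policy.
kubectl -n kube-system exec cilium-sxcm7 -- \
  cilium policy trace -s k8s:id=app3 -d k8s:k8s-app=kube-dns --dport 53

An ALLOWED verdict here, combined with the observed timeout, would point at a datapath or DNS-infrastructure flake rather than a policy-computation bug.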
dae5dde7_K8sPolicyTest_Basic_Test_Validate_to-entities_policies_Validate_toEntities_Cluster.zip