Closed
Labels: ci/flake (This is a known failure that occurs in the tree. Please investigate me!)
Description
Test Name
K8sBookInfoDemoTest Bookinfo Demo Tests bookinfo demo
Failure Output
FAIL: DNS entry is not ready after timeout
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
DNS entry is not ready after timeout
Expected
<*errors.errorString | 0xc001098b40>: {
s: "DNS 'productpage.default.svc.cluster.local' is not ready after timeout: 7m0s timeout expired",
}
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/bookinfo.go:161
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Unable to restore endpoint, ignoring
Cilium pods: [cilium-2lxlb cilium-9tvff]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod Ingress Egress
coredns-5fc58db489-qg8c8
details-v1-fcf5ddfbb-rxw49
productpage-v1-56cbdf9897-2c524
ratings-v1-d9b7b74b-twnqr
reviews-v1-59dccf6f9b-dw77w
reviews-v2-77946fdc74-7wfwn
Cilium agent 'cilium-2lxlb': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 20 Failed 0
Cilium agent 'cilium-9tvff': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 45 Failed 0
Standard Error
21:51:03 STEP: Running BeforeAll block for EntireTestsuite K8sBookInfoDemoTest
21:51:03 STEP: Ensuring the namespace kube-system exists
21:51:03 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
21:51:03 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
21:51:04 STEP: Running BeforeAll block for EntireTestsuite K8sBookInfoDemoTest Bookinfo Demo
21:51:04 STEP: Installing Cilium
21:51:05 STEP: Waiting for Cilium to become ready
21:51:57 STEP: Validating if Kubernetes DNS is deployed
21:51:57 STEP: Checking if deployment is ready
21:51:57 STEP: Checking if kube-dns service is plumbed correctly
21:51:57 STEP: Checking if DNS can resolve
21:51:57 STEP: Checking if pods have identity
21:51:58 STEP: Kubernetes DNS is up and operational
21:51:58 STEP: Validating Cilium Installation
21:51:58 STEP: Performing Cilium status preflight check
21:51:58 STEP: Performing Cilium controllers preflight check
21:51:58 STEP: Performing Cilium health check
21:52:01 STEP: Performing Cilium service preflight check
21:52:01 STEP: Performing K8s service preflight check
21:52:02 STEP: Waiting for cilium-operator to be ready
21:52:02 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
21:52:02 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
21:52:02 STEP: Creating objects in file "/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/manifests/bookinfo-v1.yaml"
21:52:02 STEP: Creating objects in file "/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/manifests/bookinfo-v2.yaml"
21:52:02 STEP: Waiting for pods to be ready
21:52:02 STEP: WaitforPods(namespace="default", filter="-l zgroup=bookinfo")
21:52:19 STEP: WaitforPods(namespace="default", filter="-l zgroup=bookinfo") => <nil>
21:52:21 STEP: Waiting for services to be ready
21:52:21 STEP: Validating DNS without Policy
FAIL: DNS entry is not ready after timeout
Expected
<*errors.errorString | 0xc001098b40>: {
s: "DNS 'productpage.default.svc.cluster.local' is not ready after timeout: 7m0s timeout expired",
}
to be nil
=== Test Finished at 2021-10-04T21:59:21Z====
21:59:21 STEP: Running JustAfterEach block for EntireTestsuite K8sBookInfoDemoTest
===================== TEST FAILED =====================
21:59:21 STEP: Running AfterFailed block for EntireTestsuite K8sBookInfoDemoTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-monitoring grafana-7fd557d749-th27m 0/1 Running 0 109m 10.0.0.104 k8s1 <none> <none>
cilium-monitoring prometheus-d87f8f984-hrfs2 1/1 Running 0 109m 10.0.0.121 k8s1 <none> <none>
default details-v1-fcf5ddfbb-rxw49 1/1 Running 0 7m22s 10.0.1.13 k8s2 <none> <none>
default productpage-v1-56cbdf9897-2c524 1/1 Running 0 7m22s 10.0.1.218 k8s2 <none> <none>
default ratings-v1-d9b7b74b-twnqr 1/1 Running 0 7m22s 10.0.1.46 k8s2 <none> <none>
default reviews-v1-59dccf6f9b-dw77w 1/1 Running 0 7m22s 10.0.1.108 k8s2 <none> <none>
default reviews-v2-77946fdc74-7wfwn 1/1 Running 0 7m22s 10.0.1.157 k8s2 <none> <none>
kube-system cilium-2lxlb 1/1 Running 0 8m19s 192.168.36.11 k8s1 <none> <none>
kube-system cilium-9tvff 1/1 Running 0 8m19s 192.168.36.12 k8s2 <none> <none>
kube-system cilium-operator-5856f85fbb-8qnnt 1/1 Running 0 8m19s 192.168.36.11 k8s1 <none> <none>
kube-system cilium-operator-5856f85fbb-954cw 1/1 Running 0 8m19s 192.168.36.12 k8s2 <none> <none>
kube-system coredns-5fc58db489-qg8c8 1/1 Running 0 50m 10.0.1.177 k8s2 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 112m 192.168.36.11 k8s1 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 112m 192.168.36.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 112m 192.168.36.11 k8s1 <none> <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 112m 192.168.36.11 k8s1 <none> <none>
kube-system log-gatherer-4nmtz 1/1 Running 0 109m 192.168.36.12 k8s2 <none> <none>
kube-system log-gatherer-4nvmn 1/1 Running 0 109m 192.168.36.13 k8s3 <none> <none>
kube-system log-gatherer-mlxjp 1/1 Running 0 109m 192.168.36.11 k8s1 <none> <none>
kube-system registry-adder-6xrfd 1/1 Running 0 110m 192.168.36.11 k8s1 <none> <none>
kube-system registry-adder-9v4ln 1/1 Running 0 110m 192.168.36.13 k8s3 <none> <none>
kube-system registry-adder-hb8br 1/1 Running 0 110m 192.168.36.12 k8s2 <none> <none>
Stderr:
Fetching command output from pods [cilium-2lxlb cilium-9tvff]
cmd: kubectl exec -n kube-system cilium-2lxlb -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
724 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:node-role.kubernetes.io/master
reserved:host
1176 Disabled Disabled 4 reserved:health fd02::bc 10.0.0.198 ready
Stderr:
cmd: kubectl exec -n kube-system cilium-9tvff -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
713 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
reserved:host
754 Disabled Disabled 26699 k8s:app=productpage fd02::1e7 10.0.1.218 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:track=stable
k8s:version=v1
k8s:zgroup=bookinfo
1240 Disabled Disabled 26043 k8s:io.cilium.k8s.policy.cluster=default fd02::1c9 10.0.1.177 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1609 Disabled Disabled 18798 k8s:app=reviews fd02::1cc 10.0.1.108 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:track=stable
k8s:version=v1
k8s:zgroup=bookinfo
2190 Disabled Disabled 4 reserved:health fd02::1d9 10.0.1.245 ready
2823 Disabled Disabled 24512 k8s:app=reviews fd02::1d4 10.0.1.157 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:version=v2
k8s:zgroup=bookinfo
3377 Disabled Disabled 1354 k8s:app=ratings fd02::195 10.0.1.46 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:version=v1
k8s:zgroup=bookinfo
4086 Disabled Disabled 11789 k8s:app=details fd02::1a1 10.0.1.13 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:track=stable
k8s:version=v1
k8s:zgroup=bookinfo
Stderr:
===================== Exiting AfterFailed =====================
22:00:37 STEP: Running AfterEach for block EntireTestsuite K8sBookInfoDemoTest
22:00:37 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|580ebe2b_K8sBookInfoDemoTest_Bookinfo_Demo_Tests_bookinfo_demo.zip]]
22:00:41 STEP: Running AfterAll block for EntireTestsuite K8sBookInfoDemoTest Bookinfo Demo
22:00:41 STEP: Deleting resource /home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/manifests/bookinfo-v1.yaml
22:00:41 STEP: Deleting resource /home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/manifests/bookinfo-v2.yaml
22:00:42 STEP: Running AfterAll block for EntireTestsuite K8sBookInfoDemoTest
22:00:42 STEP: Removing Cilium installation using generated helm manifest
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next//1580/artifact/580ebe2b_K8sBookInfoDemoTest_Bookinfo_Demo_Tests_bookinfo_demo.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next//1580/artifact/test_results_Cilium-PR-K8s-1.16-net-next_1580_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next/1580/
If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.