Labels
area/CI (Continuous Integration testing issue or flake), ci/flake (This is a known failure that occurs in the tree. Please investigate me!), stale (The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.)
Description
Test Name
K8sUpdates Tests upgrade and downgrade from a Cilium stable image to master
Failure Output
FAIL: Cannot curl app1-service
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-GKE@3/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Cannot curl app1-service
Expected command: kubectl exec -n default app2-58757b7dd5-wkq9n -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://app1-service/public -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Err: exit status 28
Stdout:
time-> DNS: '0.000000()', Connect: '0.000000',Transfer '0.000000', total '5.004598'
Stderr:
command terminated with exit code 28
/home/jenkins/workspace/Cilium-PR-K8s-GKE@3/src/github.com/cilium/cilium/test/k8sT/Updates.go:370
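The failing check is the L7 GET from the app2 pod to app1-service through the loaded l7-policy. A minimal sketch for re-running that probe by hand, using the pod name from this run (substitute the current app2 pod on another cluster); curl exit code 28 means the request hit its timeout:
# Re-run the probe the test performs.
kubectl exec -n default app2-58757b7dd5-wkq9n -- \
  curl --path-as-is -s -D /dev/stderr --fail \
  --connect-timeout 5 --max-time 20 \
  http://app1-service/public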
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
⚠️ Number of "level=warning" in logs: 22
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Unable to update ipcache map entry on pod add
Cilium pods: [cilium-4256m cilium-wm6sx]
Netpols loaded:
CiliumNetworkPolicies loaded: default::l7-policy
Endpoint Policy Enforcement:
Pod Ingress Egress
migrate-svc-client-82x8l
migrate-svc-server-ptn9m
migrate-svc-client-bxnlz
migrate-svc-client-gxf7l
migrate-svc-client-r7rx5
migrate-svc-server-4j65s
app1-786c6d794d-g9c74
app1-786c6d794d-mgqwn
app3-5d69599cdd-7zb62
migrate-svc-client-9sgbc
migrate-svc-server-qzn8b
kube-dns-b4f5c58c7-lwgzj
kube-dns-b4f5c58c7-c2m4r
app2-58757b7dd5-wkq9n
Cilium agent 'cilium-4256m': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 51 Failed 0
Cilium agent 'cilium-wm6sx': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 43 Failed 0
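The 22 warnings all reduce to "Unable to update ipcache map entry on pod add". A hedged sketch for checking whether that warning fired for the app1 pods and whether their IPs landed in the BPF ipcache, using the agent pod names from this run (the app1 IPs 10.84.2.240 and 10.84.2.78 appear in the endpoint list below):
# Grep the agent logs for the ipcache warning.
kubectl -n kube-system logs cilium-4256m | grep "Unable to update ipcache"
kubectl -n kube-system logs cilium-wm6sx | grep "Unable to update ipcache"
# Compare the BPF ipcache contents against the app1 pod IPs.
kubectl -n kube-system exec cilium-wm6sx -- cilium bpf ipcache list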
Standard Error
18:30:12 STEP: Running BeforeAll block for EntireTestsuite K8sUpdates
18:30:12 STEP: Ensuring the namespace kube-system exists
18:30:13 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
18:30:13 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
18:30:58 STEP: Deleting Cilium and CoreDNS...
18:31:04 STEP: Waiting for pods to be terminated..
18:31:05 STEP: Cleaning Cilium state (238e1e326ccb2b76456d7365692660eb06ba4bfb)
18:31:05 STEP: Cleaning up Cilium components
18:31:15 STEP: Waiting for Cilium to become ready
18:31:31 STEP: Cleaning Cilium state (v1.9)
18:31:31 STEP: Cleaning up Cilium components
18:31:43 STEP: Waiting for Cilium to become ready
18:32:11 STEP: Deploying Cilium 1.9-dev
18:32:17 STEP: Waiting for Cilium to become ready
18:32:54 STEP: Validating Cilium Installation
18:32:54 STEP: Performing Cilium controllers preflight check
18:32:54 STEP: Performing Cilium status preflight check
18:32:54 STEP: Performing Cilium health check
18:32:58 STEP: Performing Cilium service preflight check
18:32:58 STEP: Performing K8s service preflight check
18:32:58 STEP: Waiting for cilium-operator to be ready
18:32:58 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
18:32:58 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
18:32:58 STEP: Cilium "1.9-dev" is installed and running
18:32:58 STEP: Restarting DNS Pods
18:33:37 STEP: Waiting for kube-dns to be ready
18:33:37 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns")
18:33:37 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns") => <nil>
18:33:37 STEP: Running kube-dns preflight check
18:33:41 STEP: Performing K8s service preflight check
18:33:41 STEP: Creating some endpoints and L7 policy
18:33:43 STEP: WaitforPods(namespace="default", filter="-l zgroup=testapp")
18:33:54 STEP: WaitforPods(namespace="default", filter="-l zgroup=testapp") => <nil>
18:34:01 STEP: Creating service and clients for migration
18:34:02 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-server")
18:34:09 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-server") => <nil>
18:34:10 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-client")
18:34:14 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-client") => <nil>
18:34:14 STEP: Validate that endpoints are ready before making any connection
18:34:16 STEP: Waiting for kube-dns to be ready
18:34:16 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns")
18:34:16 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns") => <nil>
18:34:16 STEP: Running kube-dns preflight check
18:34:20 STEP: Performing K8s service preflight check
18:34:23 STEP: Making L7 requests between endpoints
FAIL: Cannot curl app1-service
Expected command: kubectl exec -n default app2-58757b7dd5-wkq9n -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://app1-service/public -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Err: exit status 28
Stdout:
time-> DNS: '0.000000()', Connect: '0.000000',Transfer '0.000000', total '5.004598'
Stderr:
command terminated with exit code 28
=== Test Finished at 2021-10-07T18:34:29Z====
18:34:29 STEP: Running JustAfterEach block for EntireTestsuite K8sUpdates
===================== TEST FAILED =====================
18:34:30 STEP: Running AfterFailed block for EntireTestsuite K8sUpdates
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
202110071758k8sclicliidentityclitestingtestciliumbpfmetricslist app1-5c856d5c47-ctctm 2/2 Running 0 35m 10.84.2.15 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
202110071758k8sclicliidentityclitestingtestciliumbpfmetricslist app1-5c856d5c47-xp28r 2/2 Running 0 35m 10.84.2.81 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
202110071758k8sclicliidentityclitestingtestciliumbpfmetricslist app2-58757b7dd5-97zvb 1/1 Running 0 35m 10.84.2.26 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
202110071758k8sclicliidentityclitestingtestciliumbpfmetricslist app3-5d69599cdd-vsj4h 1/1 Running 0 35m 10.84.2.2 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
cilium-monitoring grafana-d69c97b9b-dhvgj 1/1 Running 1 39m 10.84.2.8 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
cilium-monitoring prometheus-655fb888d7-pqwsb 1/1 Running 1 39m 10.84.2.9 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
default app1-786c6d794d-g9c74 2/2 Running 0 52s 10.84.2.240 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
default app1-786c6d794d-mgqwn 2/2 Running 0 52s 10.84.2.78 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
default app2-58757b7dd5-wkq9n 1/1 Running 0 52s 10.84.2.1 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
default app3-5d69599cdd-7zb62 1/1 Running 0 51s 10.84.2.87 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
default migrate-svc-client-82x8l 1/1 Running 0 24s 10.84.1.94 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
default migrate-svc-client-9sgbc 1/1 Running 0 24s 10.84.1.246 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
default migrate-svc-client-bxnlz 1/1 Running 0 24s 10.84.1.253 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
default migrate-svc-client-gxf7l 1/1 Running 0 24s 10.84.1.13 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
default migrate-svc-client-r7rx5 1/1 Running 0 24s 10.84.2.76 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
default migrate-svc-server-4j65s 1/1 Running 0 32s 10.84.1.211 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
default migrate-svc-server-ptn9m 1/1 Running 0 32s 10.84.1.221 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
default migrate-svc-server-qzn8b 1/1 Running 0 32s 10.84.1.129 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
kube-system cilium-4256m 1/1 Running 0 2m18s 10.128.0.35 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
kube-system cilium-node-init-47g28 1/1 Running 0 2m18s 10.128.0.34 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
kube-system cilium-node-init-7bqrs 1/1 Running 0 2m18s 10.128.0.35 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
kube-system cilium-operator-d5db99668-hckff 1/1 Running 0 2m18s 10.128.0.34 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
kube-system cilium-operator-d5db99668-hncph 1/1 Running 0 2m18s 10.128.0.35 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
kube-system cilium-wm6sx 1/1 Running 0 2m18s 10.128.0.34 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
kube-system event-exporter-gke-67986489c8-tt5js 2/2 Running 0 38m 10.84.1.6 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
kube-system fluentbit-gke-95v85 2/2 Running 0 40m 10.128.0.35 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
kube-system fluentbit-gke-h8b7f 2/2 Running 0 40m 10.128.0.34 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
kube-system gke-metrics-agent-c9xlj 1/1 Running 0 40m 10.128.0.34 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
kube-system gke-metrics-agent-d5d4c 1/1 Running 0 40m 10.128.0.35 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
kube-system kube-dns-autoscaler-58cbd4f75c-qn9kv 1/1 Running 0 38m 10.84.2.151 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
kube-system kube-dns-b4f5c58c7-c2m4r 4/4 Running 0 96s 10.84.1.70 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
kube-system kube-dns-b4f5c58c7-lwgzj 4/4 Running 0 96s 10.84.2.106 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
kube-system kube-proxy-gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr 1/1 Running 0 40m 10.128.0.34 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
kube-system kube-proxy-gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 1/1 Running 0 40m 10.128.0.35 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
kube-system l7-default-backend-66579f5d7-4mcmv 1/1 Running 0 38m 10.84.2.47 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
kube-system log-gatherer-g78j8 1/1 Running 0 40m 10.128.0.34 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
kube-system log-gatherer-xdxxw 1/1 Running 0 40m 10.128.0.35 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
kube-system metrics-server-v0.3.6-6c47ffd7d7-tnr5f 2/2 Running 0 38m 10.84.2.174 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
kube-system pdcsi-node-jxgv7 2/2 Running 0 40m 10.128.0.35 gke-cilium-ci-4-cilium-ci-4-bd03f53f-xf17 <none> <none>
kube-system pdcsi-node-jzspf 2/2 Running 0 40m 10.128.0.34 gke-cilium-ci-4-cilium-ci-4-bd03f53f-ctsr <none> <none>
Stderr:
Fetching command output from pods [cilium-4256m cilium-wm6sx]
cmd: kubectl exec -n kube-system cilium-4256m -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
958 Disabled Disabled 52561 k8s:app=migrate-svc-client 10.84.1.253 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
975 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
k8s:cloud.google.com/gke-boot-disk=pd-standard
k8s:cloud.google.com/gke-container-runtime=containerd
k8s:cloud.google.com/gke-nodepool=cilium-ci-4
k8s:cloud.google.com/gke-os-distribution=cos
k8s:cloud.google.com/machine-family=n1
k8s:node.kubernetes.io/instance-type=n1-standard-4
k8s:topology.gke.io/zone=us-west1-b
k8s:topology.kubernetes.io/region=us-west1
k8s:topology.kubernetes.io/zone=us-west1-b
reserved:host
1395 Disabled Disabled 37267 k8s:app=migrate-svc-server 10.84.1.129 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
1536 Disabled Disabled 52561 k8s:app=migrate-svc-client 10.84.1.246 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
1851 Disabled Disabled 37267 k8s:app=migrate-svc-server 10.84.1.221 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
2181 Disabled Disabled 4 reserved:health 10.84.1.217 ready
2194 Disabled Disabled 52561 k8s:app=migrate-svc-client 10.84.1.13 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
3091 Disabled Disabled 52561 k8s:app=migrate-svc-client 10.84.1.94 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
3762 Disabled Disabled 24698 k8s:io.cilium.k8s.policy.cluster=default 10.84.1.70 ready
k8s:io.cilium.k8s.policy.serviceaccount=kube-dns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
3951 Disabled Disabled 37267 k8s:app=migrate-svc-server 10.84.1.211 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
Stderr:
cmd: kubectl exec -n kube-system cilium-wm6sx -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
286 Disabled Disabled 63923 k8s:appSecond=true 10.84.2.1 ready
k8s:id=app2
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app2-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
339 Enabled Disabled 38330 k8s:id=app1 10.84.2.240 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
390 Disabled Disabled 3060 k8s:id=app3 10.84.2.87 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
1114 Enabled Disabled 38330 k8s:id=app1 10.84.2.78 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
1522 Disabled Disabled 24698 k8s:io.cilium.k8s.policy.cluster=default 10.84.2.106 ready
k8s:io.cilium.k8s.policy.serviceaccount=kube-dns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1996 Disabled Disabled 52561 k8s:app=migrate-svc-client 10.84.2.76 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
2352 Disabled Disabled 4 reserved:health 10.84.2.229 ready
3202 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:cloud.google.com/gke-boot-disk=pd-standard
k8s:cloud.google.com/gke-container-runtime=containerd
k8s:cloud.google.com/gke-nodepool=cilium-ci-4
k8s:cloud.google.com/gke-os-distribution=cos
k8s:cloud.google.com/machine-family=n1
k8s:node.kubernetes.io/instance-type=n1-standard-4
k8s:topology.gke.io/zone=us-west1-b
k8s:topology.kubernetes.io/region=us-west1
k8s:topology.kubernetes.io/zone=us-west1-b
reserved:host
Stderr:
===================== Exiting AfterFailed =====================
18:35:48 STEP: Running AfterEach for block EntireTestsuite K8sUpdates
18:36:26 STEP: Cleaning up Cilium components
18:36:43 STEP: Waiting for Cilium to become ready
18:37:08 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|524dd04c_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip]]
18:37:08 STEP: Running AfterAll block for EntireTestsuite K8sUpdates
18:37:09 STEP: Cleaning up Cilium components
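A possible follow-up for triage (not part of the captured output) is to confirm that the l7-policy redirect was realized on an app1 endpoint and to watch for drops while the curl is repeated; a rough sketch, assuming the standard cilium CLI inside the agent pod and the endpoint ID from this run:
# Dump the realized configuration and policy of one app1 endpoint (ID 339 on cilium-wm6sx in this run).
kubectl -n kube-system exec cilium-wm6sx -- cilium endpoint get 339
# Stream drop notifications on the node hosting app1 while re-running the request.
kubectl -n kube-system exec cilium-wm6sx -- cilium monitor --type drop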
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6593/artifact/src/github.com/cilium/cilium/524dd04c_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6593/artifact/src/github.com/cilium/cilium/test_results_Cilium-PR-K8s-GKE_6593_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE/6593/
If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
524dd04c_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip