Labels: ci/flake (This is a known failure that occurs in the tree. Please investigate me!)
Description
Test Name
K8sUpdates Tests upgrade and downgrade from a Cilium stable image to master
Failure Output
FAIL: migrate-svc restart count values do not match
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
migrate-svc restart count values do not match
Expected
<int>: 0
to be identical to
<int>: 2
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-5.4/src/github.com/cilium/cilium/test/k8s/updates.go:321
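For context when triaging: the failing check lives at test/k8s/updates.go:321 and, judging by the Gomega wording above ("to be identical to"), it compares an integer restart count captured at one point in the upgrade/downgrade cycle against the value captured later. A minimal sketch of that shape is below; the function and variable names are hypothetical, not the actual code in updates.go.

```go
// Hypothetical sketch of the failing comparison, not the code at
// test/k8s/updates.go:321. Gomega's BeIdenticalTo produces exactly the
// "Expected <int>: ... to be identical to <int>: ..." output shown above.
package k8s

import . "github.com/onsi/gomega"

// checkMigrateSvcRestarts is assumed to run inside a Ginkgo spec, where a
// Gomega fail handler is already registered.
func checkMigrateSvcRestarts(before, after int) {
	// Which value is "before" and which is "after" cannot be read from the
	// failure output alone (it reports 0 vs 2).
	Expect(after).To(BeIdenticalTo(before),
		"migrate-svc restart count values do not match")
}
```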
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
⚠️ Number of "context deadline exceeded" in logs: 6
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Cilium pods: [cilium-qbk8r cilium-qhp7x]
Netpols loaded:
CiliumNetworkPolicies loaded: default::l7-policy
Endpoint Policy Enforcement:
Pod Ingress Egress
grafana-5747bcc8f9-jzkkn
app1-6bf9bf9bd5-5qgzl
app2-58757b7dd5-x9lsp
migrate-svc-client-ksdlk
migrate-svc-client-dr865
migrate-svc-client-rmzs2
migrate-svc-server-wm8hg
coredns-69b675786c-2xk47
prometheus-655fb888d7-x9ll6
app1-6bf9bf9bd5-jcccm
app3-5d69599cdd-sqlbn
migrate-svc-client-4gbc7
migrate-svc-client-l4g9t
migrate-svc-server-tlqpc
migrate-svc-server-vqq49
Cilium agent 'cilium-qbk8r': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 52 Failed 0
Cilium agent 'cilium-qhp7x': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 67 Failed 0
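When investigating this flake it can help to tally the migrate-svc restart counts directly and compare them with the RESTARTS column in the pod listing further down. A hedged client-go sketch follows; the "default" namespace and the zgroup=migrate-svc label are assumptions based on the pod and endpoint listings in this report, and this is a triage aid rather than part of the test suite.

```go
// Sums container restart counts over the migrate-svc pods. Namespace and
// label selector are assumptions taken from the listings in this report.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("default").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "zgroup=migrate-svc"})
	if err != nil {
		panic(err)
	}

	total := 0
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			total += int(cs.RestartCount)
		}
	}
	fmt.Printf("migrate-svc pods: %d, total restarts: %d\n", len(pods.Items), total)
}
```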
Standard Error
19:32:48 STEP: Running BeforeAll block for EntireTestsuite K8sUpdates
19:32:49 STEP: Ensuring the namespace kube-system exists
19:32:49 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
19:32:50 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
19:32:54 STEP: Waiting for pods to be terminated
19:32:54 STEP: Deleting Cilium and CoreDNS
19:32:54 STEP: Waiting for pods to be terminated
19:32:55 STEP: Cleaning Cilium state (daa801c83132e55e36d10b136a0ec2affd4c8523)
19:32:55 STEP: Cleaning up Cilium components
19:33:02 STEP: Waiting for Cilium to become ready
19:34:01 STEP: Cleaning Cilium state (v1.11)
19:34:01 STEP: Cleaning up Cilium components
19:34:08 STEP: Waiting for Cilium to become ready
19:35:11 STEP: Deploying Cilium 1.11
19:35:17 STEP: Waiting for Cilium to become ready
19:36:48 STEP: Validating Cilium Installation
19:36:48 STEP: Performing Cilium controllers preflight check
19:36:48 STEP: Performing Cilium status preflight check
19:36:48 STEP: Performing Cilium health check
19:36:57 STEP: Performing Cilium service preflight check
19:36:57 STEP: Performing K8s service preflight check
19:36:57 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-hswrz': Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), clean-cilium-state (init)
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
command terminated with exit code 1
19:36:57 STEP: Performing Cilium controllers preflight check
19:36:57 STEP: Performing Cilium health check
19:36:57 STEP: Performing Cilium status preflight check
19:37:06 STEP: Performing Cilium service preflight check
19:37:06 STEP: Performing K8s service preflight check
19:37:06 STEP: Performing Cilium status preflight check
19:37:06 STEP: Performing Cilium controllers preflight check
19:37:06 STEP: Performing Cilium health check
19:37:14 STEP: Performing Cilium service preflight check
19:37:14 STEP: Performing K8s service preflight check
19:37:14 STEP: Performing Cilium controllers preflight check
19:37:14 STEP: Performing Cilium health check
19:37:14 STEP: Performing Cilium status preflight check
19:37:23 STEP: Performing Cilium service preflight check
19:37:23 STEP: Performing K8s service preflight check
19:37:23 STEP: Performing Cilium controllers preflight check
19:37:23 STEP: Performing Cilium health check
19:37:23 STEP: Performing Cilium status preflight check
19:37:31 STEP: Performing Cilium service preflight check
19:37:31 STEP: Performing K8s service preflight check
19:37:31 STEP: Performing Cilium status preflight check
19:37:31 STEP: Performing Cilium controllers preflight check
19:37:31 STEP: Performing Cilium health check
19:37:40 STEP: Performing Cilium service preflight check
19:37:40 STEP: Performing K8s service preflight check
19:37:40 STEP: Performing Cilium controllers preflight check
19:37:40 STEP: Performing Cilium health check
19:37:40 STEP: Performing Cilium status preflight check
19:37:49 STEP: Performing Cilium service preflight check
19:37:49 STEP: Performing K8s service preflight check
19:37:49 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-hswrz': Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), clean-cilium-state (init)
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
command terminated with exit code 1
19:37:49 STEP: Performing Cilium controllers preflight check
19:37:49 STEP: Performing Cilium status preflight check
19:37:49 STEP: Performing Cilium health check
19:37:58 STEP: Performing Cilium service preflight check
19:37:58 STEP: Performing K8s service preflight check
19:37:58 STEP: Performing Cilium controllers preflight check
19:37:58 STEP: Performing Cilium status preflight check
19:37:58 STEP: Performing Cilium health check
19:38:07 STEP: Performing Cilium service preflight check
19:38:07 STEP: Performing K8s service preflight check
19:38:07 STEP: Performing Cilium controllers preflight check
19:38:07 STEP: Performing Cilium status preflight check
19:38:07 STEP: Performing Cilium health check
19:38:14 STEP: Performing Cilium service preflight check
19:38:14 STEP: Performing K8s service preflight check
19:38:14 STEP: Performing Cilium controllers preflight check
19:38:14 STEP: Performing Cilium health check
19:38:14 STEP: Performing Cilium status preflight check
19:38:24 STEP: Performing Cilium service preflight check
19:38:24 STEP: Performing K8s service preflight check
19:38:24 STEP: Performing Cilium controllers preflight check
19:38:24 STEP: Performing Cilium status preflight check
19:38:24 STEP: Performing Cilium health check
19:38:33 STEP: Performing Cilium service preflight check
19:38:33 STEP: Performing K8s service preflight check
19:38:33 STEP: Performing Cilium controllers preflight check
19:38:33 STEP: Performing Cilium health check
19:38:33 STEP: Performing Cilium status preflight check
19:38:43 STEP: Performing Cilium service preflight check
19:38:43 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-hswrz': Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), clean-cilium-state (init)
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
command terminated with exit code 1
19:38:43 STEP: Performing Cilium controllers preflight check
19:38:43 STEP: Performing Cilium health check
19:38:43 STEP: Performing Cilium status preflight check
19:38:50 STEP: Performing Cilium service preflight check
19:38:50 STEP: Performing K8s service preflight check
19:38:50 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-hswrz': controller cilium-health-ep is failing: Exitcode: 0
Stdout:
KVStore: Ok Disabled
Kubernetes: Ok 1.21 (v1.21.9) [linux/amd64]
Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Strict [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fe26:39f, enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.56.11 fd04::11 (Direct Routing)]
Host firewall: Disabled
Cilium: Ok 1.11.1 (v1.11.1-7c4086c)
NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory
Cilium health daemon: Ok
IPAM: IPv4: 6/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120
BandwidthManager: Disabled
Host Routing: Legacy
Masquerading: BPF [enp0s16, enp0s3, enp0s8] 10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
Controller Status: 33/33 healthy
Name Last success Last error Count Message
bpf-map-sync-cilium_lxc 3s ago never 0 no error
cilium-health-ep 1s ago 4s ago 0 no error
dns-garbage-collector-job 18s ago never 0 no error
endpoint-1115-regeneration-recovery never never 0 no error
endpoint-1320-regeneration-recovery never never 0 no error
endpoint-1676-regeneration-recovery never never 0 no error
endpoint-3648-regeneration-recovery never never 0 no error
endpoint-598-regeneration-recovery never never 0 no error
endpoint-gc 3m19s ago never 0 no error
ipcache-inject-labels 3m11s ago 3m12s ago 0 no error
k8s-heartbeat 19s ago never 0 no error
mark-k8s-node-as-available 2m5s ago never 0 no error
metricsmap-bpf-prom-sync 3s ago never 0 no error
neighbor-table-refresh 5s ago never 0 no error
resolve-identity-1115 17s ago never 0 no error
resolve-identity-1320 4s ago never 0 no error
resolve-identity-1676 20s ago never 0 no error
resolve-identity-3648 2m5s ago never 0 no error
resolve-identity-598 25s ago never 0 no error
sync-endpoints-and-host-ips 5s ago never 0 no error
sync-lb-maps-with-k8s-services 2m5s ago never 0 no error
sync-policymap-1115 13s ago never 0 no error
sync-policymap-1320 1s ago never 0 no error
sync-policymap-1676 14s ago never 0 no error
sync-policymap-3648 4s ago never 0 no error
sync-policymap-598 18s ago never 0 no error
sync-to-k8s-ciliumendpoint (1115) 7s ago never 0 no error
sync-to-k8s-ciliumendpoint (1320) 4s ago never 0 no error
sync-to-k8s-ciliumendpoint (1676) 9s ago never 0 no error
sync-to-k8s-ciliumendpoint (3648) 5s ago never 0 no error
sync-to-k8s-ciliumendpoint (598) 5s ago never 0 no error
template-dir-watcher never never 0 no error
update-k8s-node-annotations 3m12s ago never 0 no error
Proxy Status: OK, ip 10.0.1.209, 0 redirects active on ports 10000-20000
Hubble: Ok Current/Max Flows: 180/65535 (0.27%), Flows/s: 0.84 Metrics: Disabled
Encryption: Disabled
Cluster health: 2/2 reachable (2022-02-17T19:38:46Z)
Stderr:
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), clean-cilium-state (init)
19:38:50 STEP: Performing Cilium controllers preflight check
19:38:50 STEP: Performing Cilium status preflight check
19:38:50 STEP: Performing Cilium health check
19:38:56 STEP: Performing Cilium service preflight check
19:38:56 STEP: Performing K8s service preflight check
19:38:56 STEP: Waiting for cilium-operator to be ready
19:38:56 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
19:38:57 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
19:38:57 STEP: Cilium "1.11" is installed and running
19:38:57 STEP: Restarting DNS Pods
19:39:10 STEP: Waiting for kube-dns to be ready
19:39:10 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns")
19:39:11 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns") => <nil>
19:39:11 STEP: Running kube-dns preflight check
19:39:16 STEP: Performing K8s service preflight check
19:39:16 STEP: Creating some endpoints and L7 policy
19:39:20 STEP: WaitforPods(namespace="default", filter="-l zgroup=testapp")
19:39:35 STEP: WaitforPods(namespace="default", filter="-l zgroup=testapp") => <nil>
19:39:42 STEP: Creating service and clients for migration
19:39:43 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-server")
19:39:50 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-server") => <nil>
19:39:51 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-client")
19:40:00 STEP: WaitforPods(namespace="default", filter="-l app=migrate-svc-client") => <nil>
19:40:00 STEP: Validate that endpoints are ready before making any connection
19:40:03 STEP: Waiting for kube-dns to be ready
19:40:03 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns")
19:40:03 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns") => <nil>
19:40:03 STEP: Running kube-dns preflight check
19:40:08 STEP: Performing K8s service preflight check
19:40:12 STEP: Making L7 requests between endpoints
19:40:13 STEP: No interrupts in migrated svc flows
19:40:13 STEP: Install Cilium pre-flight check DaemonSet
19:40:19 STEP: Waiting for all cilium pre-flight pods to be ready
19:40:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-pre-flight-check")
19:40:34 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-pre-flight-check") => <nil>
19:40:34 STEP: Removing Cilium pre-flight check DaemonSet
19:40:35 STEP: Waiting for Cilium to become ready
19:40:36 STEP: Upgrading Cilium to 1.11.90
19:40:45 STEP: Validating pods have the right image version upgraded
19:40:51 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
19:42:21 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
19:42:21 STEP: Checking that installed image is "daa801c83132e55e36d10b136a0ec2affd4c8523"
19:42:21 STEP: Waiting for Cilium to become ready
19:42:21 STEP: Validating Cilium Installation
19:42:21 STEP: Performing Cilium status preflight check
19:42:21 STEP: Performing Cilium controllers preflight check
19:42:21 STEP: Performing Cilium health check
19:42:28 STEP: Performing Cilium service preflight check
19:42:28 STEP: Performing K8s service preflight check
19:42:28 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-52cxb': Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), clean-cilium-state (init)
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
command terminated with exit code 1
19:42:28 STEP: Performing Cilium controllers preflight check
19:42:28 STEP: Performing Cilium health check
19:42:28 STEP: Performing Cilium status preflight check
19:42:35 STEP: Performing Cilium service preflight check
19:42:35 STEP: Performing K8s service preflight check
19:42:35 STEP: Performing Cilium controllers preflight check
19:42:35 STEP: Performing Cilium health check
19:42:35 STEP: Performing Cilium status preflight check
19:42:42 STEP: Performing Cilium service preflight check
19:42:42 STEP: Performing K8s service preflight check
19:42:42 STEP: Performing Cilium controllers preflight check
19:42:42 STEP: Performing Cilium health check
19:42:42 STEP: Performing Cilium status preflight check
19:42:49 STEP: Performing Cilium service preflight check
19:42:49 STEP: Performing K8s service preflight check
19:42:49 STEP: Performing Cilium controllers preflight check
19:42:49 STEP: Performing Cilium status preflight check
19:42:49 STEP: Performing Cilium health check
19:42:57 STEP: Performing Cilium service preflight check
19:42:57 STEP: Performing K8s service preflight check
19:42:57 STEP: Performing Cilium status preflight check
19:42:57 STEP: Performing Cilium controllers preflight check
19:42:57 STEP: Performing Cilium health check
19:43:03 STEP: Performing Cilium service preflight check
19:43:03 STEP: Performing K8s service preflight check
19:43:29 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-52cxb': Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), clean-cilium-state (init)
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": context deadline exceeded
command terminated with exit code 1
19:43:29 STEP: Performing Cilium controllers preflight check
19:43:29 STEP: Performing Cilium status preflight check
19:43:29 STEP: Performing Cilium health check
19:43:38 STEP: Performing Cilium service preflight check
19:43:38 STEP: Performing K8s service preflight check
19:44:01 STEP: Performing Cilium status preflight check
19:44:01 STEP: Performing Cilium controllers preflight check
19:44:01 STEP: Performing Cilium health check
19:44:09 STEP: Performing Cilium service preflight check
19:44:09 STEP: Performing K8s service preflight check
19:44:33 STEP: Performing Cilium controllers preflight check
19:44:33 STEP: Performing Cilium health check
19:44:33 STEP: Performing Cilium status preflight check
19:44:41 STEP: Performing Cilium service preflight check
19:44:41 STEP: Performing K8s service preflight check
19:45:02 STEP: Cilium is not ready yet: connectivity health is failing: cilium-agent 'cilium-52cxb': connectivity to path 'k8s2.health-endpoint.primary-address.icmp.status' is unhealthy: 'Connection timed out'
19:45:02 STEP: Performing Cilium controllers preflight check
19:45:02 STEP: Performing Cilium status preflight check
19:45:02 STEP: Performing Cilium health check
19:45:08 STEP: Performing Cilium service preflight check
19:45:08 STEP: Performing K8s service preflight check
19:45:08 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-dmw75': controller cilium-health-ep is failing: Exitcode: 0
Stdout:
KVStore: Ok Disabled
Kubernetes: Ok 1.21 (v1.21.9) [linux/amd64]
Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Strict [enp0s16 192.168.59.15 fd17:625c:f037:2:a00:27ff:fe26:39f, enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.56.11 fd04::11 (Direct Routing)]
Host firewall: Disabled
CNI Chaining: none
Cilium: Ok 1.11.90 (v1.11.90-daa801c)
NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory
Cilium health daemon: Ok
IPAM: IPv4: 11/254 allocated from 10.0.1.0/24, IPv6: 11/254 allocated from fd02::100/120
BandwidthManager: Disabled
Host Routing: Legacy
Masquerading: BPF [enp0s16, enp0s3, enp0s8] 10.0.0.0/8 [IPv4: Enabled, IPv6: Disabled]
Controller Status: 66/67 healthy
Name Last success Last error Count Message
bpf-map-sync-cilium_lxc 4s ago never 0 no error
cilium-health-ep 2m53s ago 6s ago 1 Get "http://10.0.1.254:4240/hello": dial tcp 10.0.1.254:4240: connect: connection timed out
dns-garbage-collector-job 1m4s ago never 0 no error
endpoint-104-regeneration-recovery never never 0 no error
endpoint-1115-regeneration-recovery never never 0 no error
endpoint-1118-regeneration-recovery never never 0 no error
endpoint-1145-regeneration-recovery never never 0 no error
endpoint-1662-regeneration-recovery never never 0 no error
endpoint-2620-regeneration-recovery never never 0 no error
endpoint-2857-regeneration-recovery never never 0 no error
endpoint-338-regeneration-recovery never never 0 no error
endpoint-3648-regeneration-recovery never never 0 no error
endpoint-598-regeneration-recovery never never 0 no error
endpoint-629-regeneration-recovery never never 0 no error
endpoint-gc 4m5s ago never 0 no error
ipcache-inject-labels 4m0s ago 4m3s ago 0 no error
k8s-heartbeat 4s ago never 0 no error
link-cache 8s ago never 0 no error
mark-k8s-node-as-available 2m53s ago never 0 no error
metricsmap-bpf-prom-sync 9s ago never 0 no error
neighbor-table-refresh 23s ago never 0 no error
resolve-identity-629 6s ago never 0 no error
restoring-ep-identity (104) 2m54s ago never 0 no error
restoring-ep-identity (1115) 2m54s ago never 0 no error
restoring-ep-identity (1118) 2m54s ago never 0 no error
restoring-ep-identity (1145) 2m54s ago never 0 no error
restoring-ep-identity (1662) 2m54s ago never 0 no error
restoring-ep-identity (2620) 2m54s ago never 0 no error
restoring-ep-identity (2857) 2m54s ago never 0 no error
restoring-ep-identity (338) 2m54s ago never 0 no error
restoring-ep-identity (3648) 2m54s ago never 0 no error
restoring-ep-identity (598) 2m54s ago never 0 no error
sync-cnp-policy-status (v2 default/l7-policy) 4m3s ago never 0 no error
sync-endpoints-and-host-ips 54s ago never 0 no error
sync-lb-maps-with-k8s-services 2m54s ago never 0 no error
sync-policymap-104 26s ago never 0 no error
sync-policymap-1115 38s ago never 0 no error
sync-policymap-1118 34s ago never 0 no error
sync-policymap-1145 30s ago never 0 no error
sync-policymap-1662 38s ago never 0 no error
sync-policymap-2620 27s ago never 0 no error
sync-policymap-2857 34s ago never 0 no error
sync-policymap-338 30s ago never 0 no error
sync-policymap-3648 6s ago never 0 no error
sync-policymap-598 38s ago never 0 no error
sync-to-k8s-ciliumendpoint (104) 4s ago never 0 no error
sync-to-k8s-ciliumendpoint (1115) 4s ago never 0 no error
sync-to-k8s-ciliumendpoint (1118) 4s ago never 0 no error
sync-to-k8s-ciliumendpoint (1145) 4s ago never 0 no error
sync-to-k8s-ciliumendpoint (1662) 4s ago never 0 no error
sync-to-k8s-ciliumendpoint (2620) 4s ago never 0 no error
sync-to-k8s-ciliumendpoint (2857) 4s ago never 0 no error
sync-to-k8s-ciliumendpoint (338) 4s ago never 0 no error
sync-to-k8s-ciliumendpoint (3648) 4s ago never 0 no error
sync-to-k8s-ciliumendpoint (598) 4s ago never 0 no error
sync-to-k8s-ciliumendpoint (629) 6s ago never 0 no error
template-dir-watcher never never 0 no error
update-k8s-node-annotations 4m2s ago never 0 no error
waiting-initial-global-identities-ep (104) 2m54s ago never 0 no error
waiting-initial-global-identities-ep (1115) 2m54s ago never 0 no error
waiting-initial-global-identities-ep (1118) 2m54s ago never 0 no error
waiting-initial-global-identities-ep (1145) 2m54s ago never 0 no error
waiting-initial-global-identities-ep (1662) 2m54s ago never 0 no error
waiting-initial-global-identities-ep (2620) 2m54s ago never 0 no error
waiting-initial-global-identities-ep (2857) 2m54s ago never 0 no error
waiting-initial-global-identities-ep (338) 2m54s ago never 0 no error
waiting-initial-global-identities-ep (598) 2m54s ago never 0 no error
Proxy Status: OK, ip 10.0.1.209, 2 redirects active on ports 10000-20000
Hubble: Ok Current/Max Flows: 1201/65535 (1.83%), Flows/s: 6.85 Metrics: Disabled
Encryption: Disabled
Cluster health: 0/2 reachable (2022-02-17T19:42:13Z)
Name IP Node Endpoints
k8s1 (localhost) 192.168.56.11 reachable unreachable
k8s2 192.168.56.12 reachable unreachable
Stderr:
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), clean-cilium-state (init)
19:45:08 STEP: Performing Cilium controllers preflight check
19:45:08 STEP: Performing Cilium status preflight check
19:45:08 STEP: Performing Cilium health check
19:45:14 STEP: Performing Cilium service preflight check
19:45:14 STEP: Performing K8s service preflight check
19:45:14 STEP: Waiting for cilium-operator to be ready
19:45:14 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
19:45:14 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
19:45:14 STEP: Validate that endpoints are ready before making any connection
19:45:17 STEP: Waiting for kube-dns to be ready
19:45:17 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns")
19:45:17 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns") => <nil>
19:45:17 STEP: Running kube-dns preflight check
19:45:22 STEP: Performing K8s service preflight check
19:45:25 STEP: Making L7 requests between endpoints
19:45:26 STEP: No interrupts in migrated svc flows
19:45:28 STEP: Downgrading cilium to 1.11 image
19:45:33 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium")
19:47:04 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium") => <nil>
19:47:04 STEP: Checking that installed image is "v1.11"
19:47:04 STEP: Waiting for cilium-operator to be ready
19:47:04 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
19:47:05 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
19:47:05 STEP: Validate that endpoints are ready before making any connection
19:48:57 STEP: Waiting for kube-dns to be ready
19:48:57 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns")
19:48:57 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=kube-dns") => <nil>
19:48:57 STEP: Running kube-dns preflight check
19:49:03 STEP: Performing K8s service preflight check
19:49:06 STEP: Making L7 requests between endpoints
19:49:07 STEP: No interrupts in migrated svc flows
FAIL: migrate-svc restart count values do not match
Expected
<int>: 0
to be identical to
<int>: 2
=== Test Finished at 2022-02-17T19:49:08Z====
19:49:08 STEP: Running JustAfterEach block for EntireTestsuite K8sUpdates
===================== TEST FAILED =====================
19:49:10 STEP: Running AfterFailed block for EntireTestsuite K8sUpdates
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-monitoring grafana-5747bcc8f9-jzkkn 1/1 Running 0 16m 10.0.1.29 k8s1 <none> <none>
cilium-monitoring prometheus-655fb888d7-x9ll6 1/1 Running 0 16m 10.0.1.26 k8s1 <none> <none>
default app1-6bf9bf9bd5-5qgzl 2/2 Running 0 10m 10.0.1.214 k8s1 <none> <none>
default app1-6bf9bf9bd5-jcccm 2/2 Running 0 10m 10.0.1.88 k8s1 <none> <none>
default app2-58757b7dd5-x9lsp 1/1 Running 0 10m 10.0.1.116 k8s1 <none> <none>
default app3-5d69599cdd-sqlbn 1/1 Running 0 10m 10.0.1.173 k8s1 <none> <none>
default migrate-svc-client-4gbc7 1/1 Running 1 9m36s 10.0.0.3 k8s2 <none> <none>
default migrate-svc-client-dr865 0/1 Error 0 9m36s 10.0.1.123 k8s1 <none> <none>
default migrate-svc-client-ksdlk 1/1 Running 1 9m36s 10.0.1.121 k8s1 <none> <none>
default migrate-svc-client-l4g9t 1/1 Running 1 9m36s 10.0.0.86 k8s2 <none> <none>
default migrate-svc-client-rmzs2 1/1 Running 1 9m36s 10.0.0.156 k8s2 <none> <none>
default migrate-svc-server-tlqpc 1/1 Running 1 9m44s 10.0.0.37 k8s2 <none> <none>
default migrate-svc-server-vqq49 1/1 Running 1 9m44s 10.0.0.180 k8s2 <none> <none>
default migrate-svc-server-wm8hg 0/1 Error 0 9m44s 10.0.1.39 k8s1 <none> <none>
kube-system cilium-operator-c86cccb89-mgb4z 1/1 Running 0 3m57s 192.168.56.12 k8s2 <none> <none>
kube-system cilium-operator-c86cccb89-mlkpm 1/1 Running 0 3m57s 192.168.56.11 k8s1 <none> <none>
kube-system cilium-qbk8r 1/1 Running 0 3m55s 192.168.56.12 k8s2 <none> <none>
kube-system cilium-qhp7x 1/1 Running 0 3m55s 192.168.56.11 k8s1 <none> <none>
kube-system coredns-69b675786c-2xk47 1/1 Running 0 10m 10.0.0.134 k8s2 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 25m 192.168.56.11 k8s1 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 25m 192.168.56.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 25m 192.168.56.11 k8s1 <none> <none>
kube-system kube-proxy-mbp7m 1/1 Running 0 19m 192.168.56.12 k8s2 <none> <none>
kube-system kube-proxy-xjwlx 1/1 Running 0 25m 192.168.56.11 k8s1 <none> <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 25m 192.168.56.11 k8s1 <none> <none>
kube-system log-gatherer-kmf76 1/1 Running 0 16m 192.168.56.12 k8s2 <none> <none>
kube-system log-gatherer-sggvn 1/1 Running 0 16m 192.168.56.11 k8s1 <none> <none>
kube-system registry-adder-5mkhf 1/1 Running 0 19m 192.168.56.11 k8s1 <none> <none>
kube-system registry-adder-l8c7s 1/1 Running 0 19m 192.168.56.12 k8s2 <none> <none>
Stderr:
Fetching command output from pods [cilium-qbk8r cilium-qhp7x]
cmd: kubectl exec -n kube-system cilium-qbk8r -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
153 Disabled Disabled 15110 k8s:app=migrate-svc-client fd02::c 10.0.0.156 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
286 Disabled Disabled 4 reserved:health fd02::56 10.0.0.153 ready
332 Disabled Disabled 15353 k8s:app=migrate-svc-server fd02::6b 10.0.0.180 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
1000 Disabled Disabled 15110 k8s:app=migrate-svc-client fd02::2c 10.0.0.3 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
2022 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
reserved:host
2942 Disabled Disabled 15353 k8s:app=migrate-svc-server fd02::80 10.0.0.37 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
3173 Disabled Disabled 15110 k8s:app=migrate-svc-client fd02::30 10.0.0.86 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
3236 Disabled Disabled 9510 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system fd02::26 10.0.0.134 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
Stderr:
cmd: kubectl exec -n kube-system cilium-qhp7x -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
104 Disabled Disabled 15353 k8s:app=migrate-svc-server fd02::1e6 10.0.1.39 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
338 Disabled Disabled 15110 k8s:app=migrate-svc-client fd02::17d 10.0.1.123 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
493 Disabled Disabled 4 reserved:health fd02::111 10.0.1.85 ready
598 Disabled Disabled 14498 k8s:app=prometheus fd02::163 10.0.1.26 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s
k8s:io.kubernetes.pod.namespace=cilium-monitoring
1115 Disabled Disabled 13169 k8s:app=grafana fd02::1cc 10.0.1.29 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=cilium-monitoring
1118 Disabled Disabled 41123 k8s:appSecond=true fd02::125 10.0.1.116 ready
k8s:id=app2
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app2-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
1145 Disabled Disabled 56047 k8s:id=app3 fd02::10f 10.0.1.173 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
1662 Enabled Disabled 16788 k8s:id=app1 fd02::162 10.0.1.214 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
2620 Enabled Disabled 16788 k8s:id=app1 fd02::1f4 10.0.1.88 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
2857 Disabled Disabled 15110 k8s:app=migrate-svc-client fd02::1e2 10.0.1.121 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=migrate-svc
3648 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:node-role.kubernetes.io/control-plane
k8s:node-role.kubernetes.io/master
k8s:node.kubernetes.io/exclude-from-external-load-balancers
reserved:host
Stderr:
===================== Exiting AfterFailed =====================
19:50:53 STEP: Running AfterEach for block EntireTestsuite K8sUpdates
19:51:12 STEP: Cleaning up Cilium components
19:51:28 STEP: Waiting for Cilium to become ready
19:51:51 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|8873b7af_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip]]
19:52:18 STEP: Running AfterAll block for EntireTestsuite K8sUpdates
19:52:18 STEP: Cleaning up Cilium components
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-5.4//552/artifact/8873b7af_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-5.4//552/artifact/test_results_Cilium-PR-K8s-1.21-kernel-5.4_552_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-5.4/552/
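For reference while triaging: the repeated "Cannot get status/probe ... dial unix /var/run/cilium/health.sock" lines in the Standard Error log come from a PUT to the Cilium health API over a unix socket, so "connect: no such file or directory" indicates the socket does not exist yet at that point of the rollout. The sketch below only illustrates the mechanics of such a probe; the socket path and endpoint are taken from the log, while the client code itself is an illustration, not Cilium's implementation.

```go
// Illustrative only: a PUT to /v1beta/status/probe over the health unix
// socket, mirroring the request seen in the preflight error messages.
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Send every request over the unix socket instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/cilium/health.sock")
			},
		},
	}

	// The host portion of the URL is ignored once DialContext is overridden.
	req, err := http.NewRequest(http.MethodPut, "http://localhost/v1beta/status/probe", nil)
	if err != nil {
		panic(err)
	}
	resp, err := client.Do(req)
	if err != nil {
		// Before the socket exists this fails with
		// "connect: no such file or directory", as in the log above.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}
```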
If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.