
v1.9: CI: K8sServicesTest Checks local redirect policy LRP connectivity: cilium pre-flight checks failed: JoinEP: Failed to load program: Failed to replace Qdisc for lxcXXXXXXXXXXXX: Link not found #16928

Description

@joestringer

Seen in v1.9 backports PR #16910.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.19/213/testReport/junit/Suite-k8s-1/18/K8sServicesTest_Checks_local_redirect_policy_LRP_connectivity/
3047b711_K8sServicesTest_Checks_local_redirect_policy_LRP_connectivity.zip

Test: Suite-k8s-1.18.K8sServicesTest Checks local redirect policy LRP connectivity

Summary

The core failure seems to be that pods are removed while Cilium is down; when Cilium restarts and attempts to restore the corresponding endpoint, the restore fails because the link for the pod is no longer available. We have seen similar behaviour in the past and were able to mitigate it, so it's possible that something has changed in the endpoint store / restore logic that brought this back.

Here are some sample logs:

2021-07-19T04:44:27.009240146Z level=warning msg="JoinEP: Failed to load program" containerID=5360c3e7f9 datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=160 error="Failed to replace Qdisc for lxc314ae629618e: Link not found" file-path=160_next/bpf_lxc.o identity=41050 ipv4=10.0.0.125 ipv6="fd00::2c" k8sPodName=default/echo-748bf97b8f-7zrnj subsys=datapath-loader veth=lxc314ae629618e
2021-07-19T04:44:27.009267275Z level=error msg="Error while rewriting endpoint BPF program" containerID=5360c3e7f9 datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=160 error="Failed to replace Qdisc for lxc314ae629618e: Link not found" identity=41050 ipv4=10.0.0.125 ipv6="fd00::2c" k8sPodName=default/echo-748bf97b8f-7zrnj subsys=endpoint
2021-07-19T04:44:27.013782637Z level=warning msg="generating BPF for endpoint failed, keeping stale directory." containerID=5360c3e7f9 datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=160 file-path=160_next_fail identity=41050 ipv4=10.0.0.125 ipv6="fd00::2c" k8sPodName=default/echo-748bf97b8f-7zrnj subsys=endpoint

Given that we have seen similar behaviour on other tests in the past, it's possible that this is unrelated to the content of the LRP tests.
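
For reference, here is a minimal sketch of the kind of guard the restore path could apply, assuming the vishvananda/netlink package that the datapath loader already uses. vethExists and the "drop stale endpoint state" decision it implies are hypothetical and only illustrate the idea, not the actual fix:

package main

import (
	"errors"
	"fmt"

	"github.com/vishvananda/netlink"
)

// vethExists reports whether the host-side veth for an endpoint is still
// present. A restore path could consult this before trying to replace the
// qdisc, treating a missing link as "pod deleted while the agent was down"
// rather than as a regeneration error to retry forever.
func vethExists(name string) (bool, error) {
	if _, err := netlink.LinkByName(name); err != nil {
		var notFound netlink.LinkNotFoundError
		if errors.As(err, &notFound) {
			return false, nil // link is gone, endpoint state is stale
		}
		return false, err // genuine netlink failure
	}
	return true, nil
}

func main() {
	// Interface name taken from the logs above.
	ok, err := vethExists("lxc314ae629618e")
	fmt.Printf("link present: %v, err: %v\n", ok, err)
}

If the lookup returns LinkNotFoundError, the pod behind the endpoint was most likely deleted while the agent was down, so regeneration recovery can never succeed and the restored state should be removed instead of retried.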

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.18-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:461
cilium pre-flight checks failed
Expected
    <*errors.errorString | 0xc000713740>: {
        s: "Cilium validation failed: 4m0s timeout expired: Last polled error: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 \nStdout:\n \t KVStore:                Ok   Disabled\n\t Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]\n\t Kubernetes APIs:        [\"cilium/v2::CiliumClusterwideNetworkPolicy\", \"cilium/v2::CiliumEndpoint\", \"cilium/v2::CiliumLocalRedirectPolicy\", \"cilium/v2::CiliumNetworkPolicy\", \"cilium/v2::CiliumNode\", \"core/v1::Namespace\", \"core/v1::Node\", \"core/v1::Pods\", \"core/v1::Service\", \"discovery/v1beta1::EndpointSlice\", \"networking.k8s.io/v1::NetworkPolicy\"]\n\t KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]\n\t Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)\n\t NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory\n\t Cilium health daemon:   Ok   \n\t IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120\n\t BandwidthManager:       Disabled\n\t Host Routing:           Legacy\n\t Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8\n\t Controller Status:      26/27 healthy\n\t   Name                                          Last success   Last error   Count   Message\n\t   cilium-health-ep                              29s ago        never        0       no error                       \n\t   dns-garbage-collector-job                     13s ago        never        0       no error                       \n\t   endpoint-160-regeneration-recovery            never          10s ago      16      regeneration recovery failed   \n\t   endpoint-2455-regeneration-recovery           never          never        0       no error                       \n\t   endpoint-3206-regeneration-recovery           never          never        0       no error                       \n\t   endpoint-3506-regeneration-recovery           never          never        0       no error                       \n\t   k8s-heartbeat                                 13s ago        never        0       no error                       \n\t   mark-k8s-node-as-available                    4m30s ago      never        0       no error                       \n\t   metricsmap-bpf-prom-sync                      3s ago         never        0       no error                       \n\t   neighbor-table-refresh                        4m30s ago      never        0       no error                       \n\t   resolve-identity-2455                         4m29s ago      never        0       no error                       \n\t   restoring-ep-identity (160)                   4m31s ago      never        0       no error                       \n\t   restoring-ep-identity (3206)                  4m31s ago      never        0       no error                       \n\t   restoring-ep-identity (3506)                  4m31s ago      never        0       no error                       \n\t   sync-endpoints-and-host-ips                   31s ago        never        0       no error                       \n\t   sync-lb-maps-with-k8s-services                4m31s ago      never        0       no error                       \n\t   sync-policymap-2455                           10s ago        never        0       no error                       \n\t   sync-policymap-3206                           1m0s ago       never        0       no error                       \n\t   sync-policymap-3506                           11s ago        
never        0       no error                       \n\t   sync-to-k8s-ciliumendpoint (160)              10s ago        4m1s ago     0       no error                       \n\t   sync-to-k8s-ciliumendpoint (2455)             9s ago         never        0       no error                       \n\t   sync-to-k8s-ciliumendpoint (3206)             1s ago         never        0       no error                       \n\t   sync-to-k8s-ciliumendpoint (3506)             1s ago         never        0       no error                       \n\t   template-dir-watcher                          never          never        0       no error                       \n\t   update-k8s-node-annotations                   5m11s ago      never        0       no error                       \n\t   waiting-initial-global-identities-ep (160)    4m31s ago      never        0       no error                       \n\t   waiting-initial-global-identities-ep (3506)   4m31s ago      never        0       no error                       \n\t Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000\n\t Hubble:           Ok              Current/Max Flows: 1994/4096 (48.68%), Flows/s: 7.40   Metrics: Disabled\n\t Cluster health:   2/2 reachable   (2021-07-19T04:48:42Z)\n\t \nStderr:\n \t \n",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.18-kernel-4.19/src/github.com/cilium/cilium/test/k8sT/Services.go:466
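
For context, the assertion that trips here follows the suite's usual preflight pattern, roughly as below (a sketch assuming the kubectl helper instance used throughout test/k8sT; the exact code at Services.go:466 may differ):

// Validation step run by the test after (re)installing Cilium; kubectl is
// the suite's *helpers.Kubectl instance.
err := kubectl.CiliumPreFlightCheck()
Expect(err).To(BeNil(), "cilium pre-flight checks failed")

The preflight helper polls cilium status until all controllers are healthy or the 4m0s timeout expires, which is why the failing endpoint-160-regeneration-recovery controller above surfaces as this timeout.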

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 1
No errors/warnings found in logs
⚠️  Found "JoinEP: " in logs 34 times
Number of "context deadline exceeded" in logs: 2
⚠️  Number of "level=error" in logs: 70
⚠️  Number of "level=warning" in logs: 112
Number of "Cilium API handler panicked" in logs: 0
⚠️  Number of "Goroutine took lock for more than" in logs: 16
Top 5 errors/warnings:
Error while rewriting endpoint BPF program
endpoint regeneration failed
Cannot update CEP
Regeneration of endpoint failed
JoinEP: Failed to load program
Cilium pods: [cilium-m6s2p cilium-v2nvw]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                        Ingress   Egress
coredns-7964865f77-pctpv             
⚠️  Cilium agent 'cilium-m6s2p': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 27 Failed 1
Failed controllers:
 controller endpoint-160-regeneration-recovery failure 'regeneration recovery failed'
⚠️  Cilium agent 'cilium-v2nvw': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 22 Failed 1
Failed controllers:
 controller endpoint-160-regeneration-recovery failure 'regeneration recovery failed'
controller endpoint-326-regeneration-recovery failure 'regeneration recovery failed'

Standard Error

04:43:12 STEP: Running BeforeAll block for EntireTestsuite K8sServicesTest Checks local redirect policy
04:43:12 STEP: Installing Cilium
04:43:22 STEP: Waiting for Cilium to become ready
04:43:22 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:23 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:25 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:26 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:27 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:29 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:30 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:31 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:32 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:34 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:35 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:36 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:38 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:39 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:40 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:41 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:43 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:44 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:45 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:47 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:48 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:49 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:51 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:52 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:53 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:54 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:56 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:57 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:58 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:43:59 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:01 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:02 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:03 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:05 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:06 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:07 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:08 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:10 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:11 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:12 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:14 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:15 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:16 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:18 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:19 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:20 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:22 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:23 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:24 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:26 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:27 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:28 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:29 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:31 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:32 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:33 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:35 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:36 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:37 STEP: Cilium DaemonSet not ready yet: only 0 of 2 desired pods are ready
04:44:38 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
04:44:40 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
04:44:41 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
04:44:42 STEP: Cilium DaemonSet not ready yet: only 1 of 2 desired pods are ready
04:44:44 STEP: Number of ready Cilium pods: 2
04:44:44 STEP: Validating if Kubernetes DNS is deployed
04:44:44 STEP: Checking if deployment is ready
04:44:45 STEP: Checking if kube-dns service is plumbed correctly
04:44:45 STEP: Checking if DNS can resolve
04:44:45 STEP: Checking if pods have identity
04:44:49 STEP: Kubernetes DNS is up and operational
04:44:49 STEP: Validating Cilium Installation
04:44:49 STEP: Performing Cilium status preflight check
04:44:49 STEP: Performing Cilium health check
04:44:49 STEP: Performing Cilium controllers preflight check
04:44:57 STEP: Performing Cilium service preflight check
04:44:57 STEP: Performing K8s service preflight check
04:44:57 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              36s ago        never        0       no error                       
	   dns-garbage-collector-job                     20s ago        never        0       no error                       
	   endpoint-160-regeneration-recovery            never          6s ago       4       regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 20s ago        never        0       no error                       
	   mark-k8s-node-as-available                    37s ago        never        0       no error                       
	   metricsmap-bpf-prom-sync                      5s ago         never        0       no error                       
	   neighbor-table-refresh                        37s ago        never        0       no error                       
	   resolve-identity-2455                         36s ago        never        0       no error                       
	   restoring-ep-identity (160)                   38s ago        never        0       no error                       
	   restoring-ep-identity (3206)                  38s ago        never        0       no error                       
	   restoring-ep-identity (3506)                  38s ago        never        0       no error                       
	   sync-endpoints-and-host-ips                   38s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                38s ago        never        0       no error                       
	   sync-policymap-2455                           18s ago        never        0       no error                       
	   sync-policymap-3206                           7s ago         never        0       no error                       
	   sync-policymap-3506                           18s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              7s ago         8s ago       0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             6s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             8s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             8s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   1m19s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (160)    38s ago        never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   38s ago        never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 168/4096 (4.10%), Flows/s: 4.91   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:44:51Z)
	 
Stderr:
 	 

04:44:57 STEP: Performing Cilium status preflight check
04:44:57 STEP: Performing Cilium controllers preflight check
04:44:57 STEP: Performing Cilium health check
04:45:04 STEP: Performing Cilium service preflight check
04:45:04 STEP: Performing K8s service preflight check
04:45:04 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              44s ago        never        0       no error                       
	   dns-garbage-collector-job                     28s ago        never        0       no error                       
	   endpoint-160-regeneration-recovery            never          5s ago       5       regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 28s ago        never        0       no error                       
	   mark-k8s-node-as-available                    45s ago        never        0       no error                       
	   metricsmap-bpf-prom-sync                      3s ago         never        0       no error                       
	   neighbor-table-refresh                        45s ago        never        0       no error                       
	   resolve-identity-2455                         44s ago        never        0       no error                       
	   restoring-ep-identity (160)                   46s ago        never        0       no error                       
	   restoring-ep-identity (3206)                  46s ago        never        0       no error                       
	   restoring-ep-identity (3506)                  46s ago        never        0       no error                       
	   sync-endpoints-and-host-ips                   46s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                46s ago        never        0       no error                       
	   sync-policymap-2455                           25s ago        never        0       no error                       
	   sync-policymap-3206                           15s ago        never        0       no error                       
	   sync-policymap-3506                           26s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              5s ago         16s ago      0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             4s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             6s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             6s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   1m26s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (160)    46s ago        never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   46s ago        never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 217/4096 (5.30%), Flows/s: 4.91   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:44:59Z)
	 
Stderr:
 	 

04:45:04 STEP: Performing Cilium controllers preflight check
04:45:04 STEP: Performing Cilium status preflight check
04:45:04 STEP: Performing Cilium health check
04:45:11 STEP: Performing Cilium service preflight check
04:45:11 STEP: Performing K8s service preflight check
04:45:11 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              52s ago        never        0       no error                       
	   dns-garbage-collector-job                     35s ago        never        0       no error                       
	   endpoint-160-regeneration-recovery            never          13s ago      5       regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 6s ago         never        0       no error                       
	   mark-k8s-node-as-available                    52s ago        never        0       no error                       
	   metricsmap-bpf-prom-sync                      5s ago         never        0       no error                       
	   neighbor-table-refresh                        52s ago        never        0       no error                       
	   resolve-identity-2455                         52s ago        never        0       no error                       
	   restoring-ep-identity (160)                   53s ago        never        0       no error                       
	   restoring-ep-identity (3206)                  53s ago        never        0       no error                       
	   restoring-ep-identity (3506)                  53s ago        never        0       no error                       
	   sync-endpoints-and-host-ips                   54s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                53s ago        never        0       no error                       
	   sync-policymap-2455                           33s ago        never        0       no error                       
	   sync-policymap-3206                           23s ago        never        0       no error                       
	   sync-policymap-3506                           33s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              12s ago        23s ago      0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             12s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             3s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             3s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   1m34s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (160)    53s ago        never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   53s ago        never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 243/4096 (5.93%), Flows/s: 4.94   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:45:07Z)
	 
Stderr:
 	 

04:45:11 STEP: Performing Cilium status preflight check
04:45:11 STEP: Performing Cilium health check
04:45:11 STEP: Performing Cilium controllers preflight check
04:45:19 STEP: Performing Cilium service preflight check
04:45:19 STEP: Performing K8s service preflight check
04:45:19 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              1m0s ago       never        0       no error                       
	   dns-garbage-collector-job                     43s ago        never        0       no error                       
	   endpoint-160-regeneration-recovery            never          11s ago      6       regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 13s ago        never        0       no error                       
	   mark-k8s-node-as-available                    1m0s ago       never        0       no error                       
	   metricsmap-bpf-prom-sync                      3s ago         never        0       no error                       
	   neighbor-table-refresh                        1m0s ago       never        0       no error                       
	   resolve-identity-2455                         1m0s ago       never        0       no error                       
	   restoring-ep-identity (160)                   1m1s ago       never        0       no error                       
	   restoring-ep-identity (3206)                  1m1s ago       never        0       no error                       
	   restoring-ep-identity (3506)                  1m1s ago       never        0       no error                       
	   sync-endpoints-and-host-ips                   1s ago         never        0       no error                       
	   sync-lb-maps-with-k8s-services                1m1s ago       never        0       no error                       
	   sync-policymap-2455                           41s ago        never        0       no error                       
	   sync-policymap-3206                           31s ago        never        0       no error                       
	   sync-policymap-3506                           41s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              10s ago        31s ago      0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             10s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             1s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             1s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   1m42s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (160)    1m1s ago       never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   1m1s ago       never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 293/4096 (7.15%), Flows/s: 4.95   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:45:14Z)
	 
Stderr:
 	 

04:45:19 STEP: Performing Cilium controllers preflight check
04:45:19 STEP: Performing Cilium status preflight check
04:45:19 STEP: Performing Cilium health check
04:45:27 STEP: Performing Cilium service preflight check
04:45:27 STEP: Performing K8s service preflight check
04:45:27 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              7s ago         never        0       no error                       
	   dns-garbage-collector-job                     51s ago        never        0       no error                       
	   endpoint-160-regeneration-recovery            never          6s ago       7       regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 21s ago        never        0       no error                       
	   mark-k8s-node-as-available                    1m7s ago       never        0       no error                       
	   metricsmap-bpf-prom-sync                      6s ago         never        0       no error                       
	   neighbor-table-refresh                        1m7s ago       never        0       no error                       
	   resolve-identity-2455                         1m7s ago       never        0       no error                       
	   restoring-ep-identity (160)                   1m8s ago       never        0       no error                       
	   restoring-ep-identity (3206)                  1m8s ago       never        0       no error                       
	   restoring-ep-identity (3506)                  1m8s ago       never        0       no error                       
	   sync-endpoints-and-host-ips                   9s ago         never        0       no error                       
	   sync-lb-maps-with-k8s-services                1m8s ago       never        0       no error                       
	   sync-policymap-2455                           48s ago        never        0       no error                       
	   sync-policymap-3206                           38s ago        never        0       no error                       
	   sync-policymap-3506                           48s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              7s ago         38s ago      0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             7s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             8s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             8s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   1m49s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (160)    1m8s ago       never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   1m8s ago       never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 319/4096 (7.79%), Flows/s: 4.97   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:45:22Z)
	 
Stderr:
 	 

04:45:27 STEP: Performing Cilium controllers preflight check
04:45:27 STEP: Performing Cilium health check
04:45:27 STEP: Performing Cilium status preflight check
04:45:30 STEP: Performing Cilium service preflight check
04:45:30 STEP: Performing K8s service preflight check
04:45:31 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              13s ago        never        0       no error                       
	   dns-garbage-collector-job                     57s ago        never        0       no error                       
	   endpoint-160-regeneration-recovery            never          12s ago      7       regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 27s ago        never        0       no error                       
	   mark-k8s-node-as-available                    1m14s ago      never        0       no error                       
	   metricsmap-bpf-prom-sync                      2s ago         never        0       no error                       
	   neighbor-table-refresh                        1m14s ago      never        0       no error                       
	   resolve-identity-2455                         1m13s ago      never        0       no error                       
	   restoring-ep-identity (160)                   1m15s ago      never        0       no error                       
	   restoring-ep-identity (3206)                  1m15s ago      never        0       no error                       
	   restoring-ep-identity (3506)                  1m15s ago      never        0       no error                       
	   sync-endpoints-and-host-ips                   15s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                1m15s ago      never        0       no error                       
	   sync-policymap-2455                           54s ago        never        0       no error                       
	   sync-policymap-3206                           44s ago        never        0       no error                       
	   sync-policymap-3506                           55s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              4s ago         45s ago      0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             3s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             5s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             5s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   1m55s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (160)    1m15s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   1m15s ago      never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 368/4096 (8.98%), Flows/s: 4.96   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:45:22Z)
	 
Stderr:
 	 

04:45:31 STEP: Performing Cilium controllers preflight check
04:45:31 STEP: Performing Cilium health check
04:45:31 STEP: Performing Cilium status preflight check
04:45:32 STEP: Performing Cilium service preflight check
04:45:32 STEP: Performing K8s service preflight check
04:45:33 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              16s ago        never        0       no error                       
	   dns-garbage-collector-job                     1m0s ago       never        0       no error                       
	   endpoint-160-regeneration-recovery            never          15s ago      7       regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 30s ago        never        0       no error                       
	   mark-k8s-node-as-available                    1m17s ago      never        0       no error                       
	   metricsmap-bpf-prom-sync                      5s ago         never        0       no error                       
	   neighbor-table-refresh                        1m17s ago      never        0       no error                       
	   resolve-identity-2455                         1m16s ago      never        0       no error                       
	   restoring-ep-identity (160)                   1m18s ago      never        0       no error                       
	   restoring-ep-identity (3206)                  1m18s ago      never        0       no error                       
	   restoring-ep-identity (3506)                  1m18s ago      never        0       no error                       
	   sync-endpoints-and-host-ips                   18s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                1m18s ago      never        0       no error                       
	   sync-policymap-2455                           57s ago        never        0       no error                       
	   sync-policymap-3206                           47s ago        never        0       no error                       
	   sync-policymap-3506                           57s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              6s ago         47s ago      0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             6s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             8s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             7s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   1m58s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (160)    1m18s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   1m18s ago      never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 368/4096 (8.98%), Flows/s: 4.96   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:45:30Z)
	 
Stderr:
 	 

04:45:33 STEP: Performing Cilium controllers preflight check
04:45:33 STEP: Performing Cilium status preflight check
04:45:33 STEP: Performing Cilium health check
04:45:36 STEP: Performing Cilium service preflight check
04:45:36 STEP: Performing K8s service preflight check
04:45:37 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              19s ago        never        0       no error                       
	   dns-garbage-collector-job                     2s ago         never        0       no error                       
	   endpoint-160-regeneration-recovery            never          4s ago       8       regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 2s ago         never        0       no error                       
	   mark-k8s-node-as-available                    1m19s ago      never        0       no error                       
	   metricsmap-bpf-prom-sync                      2s ago         never        0       no error                       
	   neighbor-table-refresh                        1m19s ago      never        0       no error                       
	   resolve-identity-2455                         1m19s ago      never        0       no error                       
	   restoring-ep-identity (160)                   1m20s ago      never        0       no error                       
	   restoring-ep-identity (3206)                  1m20s ago      never        0       no error                       
	   restoring-ep-identity (3506)                  1m20s ago      never        0       no error                       
	   sync-endpoints-and-host-ips                   20s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                1m20s ago      never        0       no error                       
	   sync-policymap-2455                           1m0s ago       never        0       no error                       
	   sync-policymap-3206                           50s ago        never        0       no error                       
	   sync-policymap-3506                           0s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              9s ago         50s ago      0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             9s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             0s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             0s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   2m1s ago       never        0       no error                       
	   waiting-initial-global-identities-ep (160)    1m20s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   1m20s ago      never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 409/4096 (9.99%), Flows/s: 5.16   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:45:32Z)
	 
Stderr:
 	 

04:45:37 STEP: Performing Cilium controllers preflight check
04:45:37 STEP: Performing Cilium health check
04:45:37 STEP: Performing Cilium status preflight check
04:45:40 STEP: Performing Cilium service preflight check
04:45:40 STEP: Performing K8s service preflight check
04:45:41 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              22s ago        never        0       no error                       
	   dns-garbage-collector-job                     6s ago         never        0       no error                       
	   endpoint-160-regeneration-recovery            never          7s ago       8       regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 6s ago         never        0       no error                       
	   mark-k8s-node-as-available                    1m23s ago      never        0       no error                       
	   metricsmap-bpf-prom-sync                      6s ago         never        0       no error                       
	   neighbor-table-refresh                        1m23s ago      never        0       no error                       
	   resolve-identity-2455                         1m22s ago      never        0       no error                       
	   restoring-ep-identity (160)                   1m24s ago      never        0       no error                       
	   restoring-ep-identity (3206)                  1m24s ago      never        0       no error                       
	   restoring-ep-identity (3506)                  1m24s ago      never        0       no error                       
	   sync-endpoints-and-host-ips                   24s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                1m24s ago      never        0       no error                       
	   sync-policymap-2455                           1m3s ago       never        0       no error                       
	   sync-policymap-3206                           53s ago        never        0       no error                       
	   sync-policymap-3506                           4s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              13s ago        54s ago      0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             12s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             4s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             4s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   2m4s ago       never        0       no error                       
	   waiting-initial-global-identities-ep (160)    1m24s ago      never        0       no error                  
...[truncated 285429 chars]...
        0       no error                       
	   mark-k8s-node-as-available                    4m12s ago      never        0       no error                       
	   metricsmap-bpf-prom-sync                      5s ago         never        0       no error                       
	   neighbor-table-refresh                        4m12s ago      never        0       no error                       
	   resolve-identity-2455                         4m11s ago      never        0       no error                       
	   restoring-ep-identity (160)                   4m13s ago      never        0       no error                       
	   restoring-ep-identity (3206)                  4m13s ago      never        0       no error                       
	   restoring-ep-identity (3506)                  4m13s ago      never        0       no error                       
	   sync-endpoints-and-host-ips                   13s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                4m13s ago      never        0       no error                       
	   sync-policymap-2455                           53s ago        never        0       no error                       
	   sync-policymap-3206                           43s ago        never        0       no error                       
	   sync-policymap-3506                           53s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              12s ago        3m43s ago    0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             11s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             3s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             3s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   4m54s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (160)    4m13s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   4m13s ago      never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 1832/4096 (44.73%), Flows/s: 7.35   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:48:24Z)
	 
Stderr:
 	 

04:48:29 STEP: Performing Cilium controllers preflight check
04:48:29 STEP: Performing Cilium health check
04:48:29 STEP: Performing Cilium status preflight check
04:48:31 STEP: Performing Cilium service preflight check
04:48:31 STEP: Performing K8s service preflight check
04:48:32 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              15s ago        never        0       no error                       
	   dns-garbage-collector-job                     58s ago        never        0       no error                       
	   endpoint-160-regeneration-recovery            never          26s ago      15      regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 28s ago        never        0       no error                       
	   mark-k8s-node-as-available                    4m15s ago      never        0       no error                       
	   metricsmap-bpf-prom-sync                      3s ago         never        0       no error                       
	   neighbor-table-refresh                        4m15s ago      never        0       no error                       
	   resolve-identity-2455                         4m15s ago      never        0       no error                       
	   restoring-ep-identity (160)                   4m16s ago      never        0       no error                       
	   restoring-ep-identity (3206)                  4m16s ago      never        0       no error                       
	   restoring-ep-identity (3506)                  4m16s ago      never        0       no error                       
	   sync-endpoints-and-host-ips                   16s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                4m16s ago      never        0       no error                       
	   sync-policymap-2455                           56s ago        never        0       no error                       
	   sync-policymap-3206                           46s ago        never        0       no error                       
	   sync-policymap-3506                           56s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              5s ago         3m46s ago    0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             5s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             6s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             6s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   4m57s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (160)    4m16s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   4m16s ago      never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 1868/4096 (45.61%), Flows/s: 7.35   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:48:28Z)
	 
Stderr:
 	 

04:48:32 STEP: Performing Cilium controllers preflight check
04:48:32 STEP: Performing Cilium status preflight check
04:48:32 STEP: Performing Cilium health check
04:48:34 STEP: Performing Cilium service preflight check
04:48:34 STEP: Performing K8s service preflight check
04:48:35 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              18s ago        never        0       no error                       
	   dns-garbage-collector-job                     1m1s ago       never        0       no error                       
	   endpoint-160-regeneration-recovery            never          29s ago      15      regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 31s ago        never        0       no error                       
	   mark-k8s-node-as-available                    4m18s ago      never        0       no error                       
	   metricsmap-bpf-prom-sync                      6s ago         never        0       no error                       
	   neighbor-table-refresh                        4m18s ago      never        0       no error                       
	   resolve-identity-2455                         4m18s ago      never        0       no error                       
	   restoring-ep-identity (160)                   4m19s ago      never        0       no error                       
	   restoring-ep-identity (3206)                  4m19s ago      never        0       no error                       
	   restoring-ep-identity (3506)                  4m19s ago      never        0       no error                       
	   sync-endpoints-and-host-ips                   19s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                4m19s ago      never        0       no error                       
	   sync-policymap-2455                           59s ago        never        0       no error                       
	   sync-policymap-3206                           49s ago        never        0       no error                       
	   sync-policymap-3506                           59s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              8s ago         3m49s ago    0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             8s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             9s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             9s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   5m0s ago       never        0       no error                       
	   waiting-initial-global-identities-ep (160)    4m19s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   4m19s ago      never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 1868/4096 (45.61%), Flows/s: 7.35   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:48:31Z)
	 
Stderr:
 	 

04:48:35 STEP: Performing Cilium controllers preflight check
04:48:35 STEP: Performing Cilium health check
04:48:35 STEP: Performing Cilium status preflight check
04:48:37 STEP: Performing Cilium service preflight check
04:48:37 STEP: Performing K8s service preflight check
04:48:38 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              20s ago        never        0       no error                       
	   dns-garbage-collector-job                     4s ago         never        0       no error                       
	   endpoint-160-regeneration-recovery            never          31s ago      15      regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 4s ago         never        0       no error                       
	   mark-k8s-node-as-available                    4m21s ago      never        0       no error                       
	   metricsmap-bpf-prom-sync                      4s ago         never        0       no error                       
	   neighbor-table-refresh                        4m21s ago      never        0       no error                       
	   resolve-identity-2455                         4m20s ago      never        0       no error                       
	   restoring-ep-identity (160)                   4m22s ago      never        0       no error                       
	   restoring-ep-identity (3206)                  4m22s ago      never        0       no error                       
	   restoring-ep-identity (3506)                  4m22s ago      never        0       no error                       
	   sync-endpoints-and-host-ips                   22s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                4m22s ago      never        0       no error                       
	   sync-policymap-2455                           1m1s ago       never        0       no error                       
	   sync-policymap-3206                           51s ago        never        0       no error                       
	   sync-policymap-3506                           2s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              11s ago        3m52s ago    0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             10s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             2s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             2s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   5m2s ago       never        0       no error                       
	   waiting-initial-global-identities-ep (160)    4m22s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   4m22s ago      never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 1907/4096 (46.56%), Flows/s: 7.35   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:48:34Z)
	 
Stderr:
 	 

04:48:38 STEP: Performing Cilium controllers preflight check
04:48:38 STEP: Performing Cilium health check
04:48:38 STEP: Performing Cilium status preflight check
04:48:39 STEP: Performing Cilium service preflight check
04:48:39 STEP: Performing K8s service preflight check
04:48:41 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              23s ago        never        0       no error                       
	   dns-garbage-collector-job                     6s ago         never        0       no error                       
	   endpoint-160-regeneration-recovery            never          34s ago      15      regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 7s ago         never        0       no error                       
	   mark-k8s-node-as-available                    4m23s ago      never        0       no error                       
	   metricsmap-bpf-prom-sync                      6s ago         never        0       no error                       
	   neighbor-table-refresh                        4m23s ago      never        0       no error                       
	   resolve-identity-2455                         4m23s ago      never        0       no error                       
	   restoring-ep-identity (160)                   4m24s ago      never        0       no error                       
	   restoring-ep-identity (3206)                  4m24s ago      never        0       no error                       
	   restoring-ep-identity (3506)                  4m24s ago      never        0       no error                       
	   sync-endpoints-and-host-ips                   25s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                4m24s ago      never        0       no error                       
	   sync-policymap-2455                           1m4s ago       never        0       no error                       
	   sync-policymap-3206                           54s ago        never        0       no error                       
	   sync-policymap-3506                           4s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              13s ago        3m54s ago    0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             13s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             4s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             4s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   5m5s ago       never        0       no error                       
	   waiting-initial-global-identities-ep (160)    4m24s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   4m24s ago      never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 1907/4096 (46.56%), Flows/s: 7.35   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:48:37Z)
	 
Stderr:
 	 

04:48:41 STEP: Performing Cilium controllers preflight check
04:48:41 STEP: Performing Cilium health check
04:48:41 STEP: Performing Cilium status preflight check
04:48:43 STEP: Performing Cilium service preflight check
04:48:43 STEP: Performing K8s service preflight check
04:48:44 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              26s ago        never        0       no error                       
	   dns-garbage-collector-job                     9s ago         never        0       no error                       
	   endpoint-160-regeneration-recovery            never          7s ago       16      regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 9s ago         never        0       no error                       
	   mark-k8s-node-as-available                    4m26s ago      never        0       no error                       
	   metricsmap-bpf-prom-sync                      4s ago         never        0       no error                       
	   neighbor-table-refresh                        4m26s ago      never        0       no error                       
	   resolve-identity-2455                         4m26s ago      never        0       no error                       
	   restoring-ep-identity (160)                   4m27s ago      never        0       no error                       
	   restoring-ep-identity (3206)                  4m27s ago      never        0       no error                       
	   restoring-ep-identity (3506)                  4m27s ago      never        0       no error                       
	   sync-endpoints-and-host-ips                   27s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                4m27s ago      never        0       no error                       
	   sync-policymap-2455                           7s ago         never        0       no error                       
	   sync-policymap-3206                           57s ago        never        0       no error                       
	   sync-policymap-3506                           7s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              6s ago         3m57s ago    0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             6s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             7s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             7s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   5m8s ago       never        0       no error                       
	   waiting-initial-global-identities-ep (160)    4m27s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   4m27s ago      never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 1948/4096 (47.56%), Flows/s: 7.37   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:48:39Z)
	 
Stderr:
 	 

04:48:44 STEP: Performing Cilium controllers preflight check
04:48:44 STEP: Performing Cilium status preflight check
04:48:44 STEP: Performing Cilium health check
04:48:46 STEP: Performing Cilium service preflight check
04:48:46 STEP: Performing K8s service preflight check
04:48:47 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              29s ago        never        0       no error                       
	   dns-garbage-collector-job                     13s ago        never        0       no error                       
	   endpoint-160-regeneration-recovery            never          10s ago      16      regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 13s ago        never        0       no error                       
	   mark-k8s-node-as-available                    4m30s ago      never        0       no error                       
	   metricsmap-bpf-prom-sync                      3s ago         never        0       no error                       
	   neighbor-table-refresh                        4m30s ago      never        0       no error                       
	   resolve-identity-2455                         4m29s ago      never        0       no error                       
	   restoring-ep-identity (160)                   4m31s ago      never        0       no error                       
	   restoring-ep-identity (3206)                  4m31s ago      never        0       no error                       
	   restoring-ep-identity (3506)                  4m31s ago      never        0       no error                       
	   sync-endpoints-and-host-ips                   31s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                4m31s ago      never        0       no error                       
	   sync-policymap-2455                           10s ago        never        0       no error                       
	   sync-policymap-3206                           1m0s ago       never        0       no error                       
	   sync-policymap-3506                           11s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              10s ago        4m1s ago     0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             9s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             1s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             1s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   5m11s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (160)    4m31s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   4m31s ago      never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 1994/4096 (48.68%), Flows/s: 7.40   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:48:42Z)
	 
Stderr:
 	 

04:48:47 STEP: Performing Cilium controllers preflight check
04:48:47 STEP: Performing Cilium status preflight check
04:48:47 STEP: Performing Cilium health check
04:48:48 STEP: Performing Cilium service preflight check
04:48:48 STEP: Performing K8s service preflight check
FAIL: cilium pre-flight checks failed
Expected
    <*errors.errorString | 0xc000713740>: {
        s: "Cilium validation failed: 4m0s timeout expired: Last polled error: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 \nStdout:\n \t KVStore:                Ok   Disabled\n\t Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]\n\t Kubernetes APIs:        [\"cilium/v2::CiliumClusterwideNetworkPolicy\", \"cilium/v2::CiliumEndpoint\", \"cilium/v2::CiliumLocalRedirectPolicy\", \"cilium/v2::CiliumNetworkPolicy\", \"cilium/v2::CiliumNode\", \"core/v1::Namespace\", \"core/v1::Node\", \"core/v1::Pods\", \"core/v1::Service\", \"discovery/v1beta1::EndpointSlice\", \"networking.k8s.io/v1::NetworkPolicy\"]\n\t KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]\n\t Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)\n\t NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory\n\t Cilium health daemon:   Ok   \n\t IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120\n\t BandwidthManager:       Disabled\n\t Host Routing:           Legacy\n\t Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8\n\t Controller Status:      26/27 healthy\n\t   Name                                          Last success   Last error   Count   Message\n\t   cilium-health-ep                              29s ago        never        0       no error                       \n\t   dns-garbage-collector-job                     13s ago        never        0       no error                       \n\t   endpoint-160-regeneration-recovery            never          10s ago      16      regeneration recovery failed   \n\t   endpoint-2455-regeneration-recovery           never          never        0       no error                       \n\t   endpoint-3206-regeneration-recovery           never          never        0       no error                       \n\t   endpoint-3506-regeneration-recovery           never          never        0       no error                       \n\t   k8s-heartbeat                                 13s ago        never        0       no error                       \n\t   mark-k8s-node-as-available                    4m30s ago      never        0       no error                       \n\t   metricsmap-bpf-prom-sync                      3s ago         never        0       no error                       \n\t   neighbor-table-refresh                        4m30s ago      never        0       no error                       \n\t   resolve-identity-2455                         4m29s ago      never        0       no error                       \n\t   restoring-ep-identity (160)                   4m31s ago      never        0       no error                       \n\t   restoring-ep-identity (3206)                  4m31s ago      never        0       no error                       \n\t   restoring-ep-identity (3506)                  4m31s ago      never        0       no error                       \n\t   sync-endpoints-and-host-ips                   31s ago        never        0       no error                       \n\t   sync-lb-maps-with-k8s-services                4m31s ago      never        0       no error                       \n\t   sync-policymap-2455                           10s ago        never        0       no error                       \n\t   sync-policymap-3206                           1m0s ago       never        0       no error                       \n\t   sync-policymap-3506                           11s ago        
never        0       no error                       \n\t   sync-to-k8s-ciliumendpoint (160)              10s ago        4m1s ago     0       no error                       \n\t   sync-to-k8s-ciliumendpoint (2455)             9s ago         never        0       no error                       \n\t   sync-to-k8s-ciliumendpoint (3206)             1s ago         never        0       no error                       \n\t   sync-to-k8s-ciliumendpoint (3506)             1s ago         never        0       no error                       \n\t   template-dir-watcher                          never          never        0       no error                       \n\t   update-k8s-node-annotations                   5m11s ago      never        0       no error                       \n\t   waiting-initial-global-identities-ep (160)    4m31s ago      never        0       no error                       \n\t   waiting-initial-global-identities-ep (3506)   4m31s ago      never        0       no error                       \n\t Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000\n\t Hubble:           Ok              Current/Max Flows: 1994/4096 (48.68%), Flows/s: 7.40   Metrics: Disabled\n\t Cluster health:   2/2 reachable   (2021-07-19T04:48:42Z)\n\t \nStderr:\n \t \n",
    }
to be nil
04:48:49 STEP: Running JustAfterEach block for EntireTestsuite K8sServicesTest
04:48:50 STEP: Cilium is not ready yet: controllers are failing: cilium-agent 'cilium-m6s2p': controller endpoint-160-regeneration-recovery is failing: Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.18 (v1.18.20) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumLocalRedirectPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s8 (Direct Routing), enp0s3]
	 Cilium:                 Ok   1.9.8 (v1.9.8-770f20068)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/255 allocated from 10.0.0.0/24, IPv6: 4/255 allocated from fd00::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s8, enp0s3]   10.0.0.0/8
	 Controller Status:      26/27 healthy
	   Name                                          Last success   Last error   Count   Message
	   cilium-health-ep                              32s ago        never        0       no error                       
	   dns-garbage-collector-job                     16s ago        never        0       no error                       
	   endpoint-160-regeneration-recovery            never          13s ago      16      regeneration recovery failed   
	   endpoint-2455-regeneration-recovery           never          never        0       no error                       
	   endpoint-3206-regeneration-recovery           never          never        0       no error                       
	   endpoint-3506-regeneration-recovery           never          never        0       no error                       
	   k8s-heartbeat                                 16s ago        never        0       no error                       
	   mark-k8s-node-as-available                    4m33s ago      never        0       no error                       
	   metricsmap-bpf-prom-sync                      6s ago         never        0       no error                       
	   neighbor-table-refresh                        4m33s ago      never        0       no error                       
	   resolve-identity-2455                         4m32s ago      never        0       no error                       
	   restoring-ep-identity (160)                   4m34s ago      never        0       no error                       
	   restoring-ep-identity (3206)                  4m34s ago      never        0       no error                       
	   restoring-ep-identity (3506)                  4m34s ago      never        0       no error                       
	   sync-endpoints-and-host-ips                   34s ago        never        0       no error                       
	   sync-lb-maps-with-k8s-services                4m34s ago      never        0       no error                       
	   sync-policymap-2455                           13s ago        never        0       no error                       
	   sync-policymap-3206                           1m3s ago       never        0       no error                       
	   sync-policymap-3506                           13s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (160)              12s ago        4m3s ago     0       no error                       
	   sync-to-k8s-ciliumendpoint (2455)             12s ago        never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3206)             4s ago         never        0       no error                       
	   sync-to-k8s-ciliumendpoint (3506)             3s ago         never        0       no error                       
	   template-dir-watcher                          never          never        0       no error                       
	   update-k8s-node-annotations                   5m14s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (160)    4m34s ago      never        0       no error                       
	   waiting-initial-global-identities-ep (3506)   4m34s ago      never        0       no error                       
	 Proxy Status:     OK, ip 10.0.0.230, 0 redirects active on ports 10000-20000
	 Hubble:           Ok              Current/Max Flows: 1994/4096 (48.68%), Flows/s: 7.40   Metrics: Disabled
	 Cluster health:   2/2 reachable   (2021-07-19T04:48:46Z)
	 
Stderr:
 	 

FAIL: Found 1 k8s-app=cilium logs matching list of errors that must be investigated:
JoinEP: 
===================== TEST FAILED =====================
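(The FAIL above comes from the suite's log-scanning pass: "JoinEP:" is one of the error substrings that must never appear in agent logs. To pull the offending lines out of a similar run by hand, something like the following works; the pod name is taken from this run and the grep pattern is illustrative, not the suite's exact error list:

cmd: kubectl -n kube-system logs cilium-m6s2p | grep -E 'JoinEP|regeneration recovery failed'
)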
04:48:50 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-54dbdc987-ltbr7            0/1     Running   0          53m     10.0.0.119      k8s1   <none>           <none>
	 cilium-monitoring   prometheus-6ff848df8b-jjsbc        1/1     Running   0          53m     10.0.0.243      k8s1   <none>           <none>
	 kube-system         cilium-m6s2p                       1/1     Running   0          5m32s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-6bbb776b6d-npb5f   1/1     Running   0          5m31s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-6bbb776b6d-tqs5j   1/1     Running   0          5m31s   192.168.36.11   k8s1   <none>           <none>
	 kube-system         cilium-v2nvw                       1/1     Running   0          5m32s   192.168.36.11   k8s1   <none>           <none>
	 kube-system         coredns-7964865f77-pctpv           1/1     Running   0          8m32s   10.0.0.148      k8s2   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running   0          63m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running   0          63m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running   0          63m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running   0          63m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-csddr                 1/1     Running   0          53m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-zvnwl                 1/1     Running   0          53m     192.168.36.12   k8s2   <none>           <none>
	 kube-system         registry-adder-8xn69               1/1     Running   0          54m     192.168.36.12   k8s2   <none>           <none>
	 kube-system         registry-adder-w2qjq               1/1     Running   0          54m     192.168.36.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-m6s2p cilium-v2nvw]
cmd: kubectl exec -n kube-system cilium-m6s2p -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                   
	 1    10.108.13.232:3000   ClusterIP                                
	 2    10.111.39.104:9090   ClusterIP      1 => 10.0.0.243:9090      
	 3    10.96.0.1:443        ClusterIP      1 => 192.168.36.11:6443   
	 4    10.96.0.10:53        ClusterIP      1 => 10.0.0.148:53        
	 5    10.96.0.10:9153      ClusterIP      1 => 10.0.0.148:9153      
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-m6s2p -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                            
	 160        Disabled           Disabled          41050      k8s:io.cilium.k8s.policy.cluster=default          fd00::2c   10.0.0.125   not-ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                       
	                                                            k8s:io.kubernetes.pod.namespace=default                                               
	                                                            k8s:name=echo                                                                         
	 2455       Disabled           Disabled          4          reserved:health                                   fd00::3e   10.0.0.41    ready       
	 3206       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                ready       
	                                                            reserved:host                                                                         
	 3506       Disabled           Disabled          21111      k8s:io.cilium.k8s.policy.cluster=default          fd00::b8   10.0.0.148   ready       
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                       
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                           
	                                                            k8s:k8s-app=kube-dns                                                                  
	 
Stderr:
 	 

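(Endpoint 160 is the only endpoint on cilium-m6s2p stuck in not-ready, matching the failing endpoint-160-regeneration-recovery controller in the dumps above. When triaging a run like this by hand, the endpoint's detailed state and per-endpoint status log expose the regeneration error directly; these are standard cilium CLI subcommands, and the node-side link check assumes the usual lxc* veth naming:

cmd: kubectl exec -n kube-system cilium-m6s2p -- cilium endpoint get 160
cmd: kubectl exec -n kube-system cilium-m6s2p -- cilium endpoint log 160

On the k8s2 host itself, `ip -o link | grep lxc` would then show whether a veth for the endpoint still exists.)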
cmd: kubectl exec -n kube-system cilium-v2nvw -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                   
	 1    10.108.13.232:3000   ClusterIP                                
	 2    10.96.0.10:53        ClusterIP      1 => 10.0.0.148:53        
	 3    10.96.0.10:9153      ClusterIP      1 => 10.0.0.148:9153      
	 4    10.111.39.104:9090   ClusterIP      1 => 10.0.0.243:9090      
	 5    10.96.0.1:443        ClusterIP      1 => 192.168.36.11:6443   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-v2nvw -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6        IPv4        STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                            
	 326        Disabled           Disabled          41050      k8s:io.cilium.k8s.policy.cluster=default          fd00::1e5   10.0.1.93   not-ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                       
	                                                            k8s:io.kubernetes.pod.namespace=default                                               
	                                                            k8s:name=echo                                                                         
	 1024       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                ready       
	                                                            k8s:node-role.kubernetes.io/master                                                    
	                                                            reserved:host                                                                         
	 3715       Disabled           Disabled          4          reserved:health                                   fd00::11f   10.0.1.99   ready       
	 
Stderr:
 	 

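(Note that the other agent, cilium-v2nvw, also reports an echo endpoint, 326 with the same identity 41050 and k8s:name=echo label, in not-ready, even though no controller failure for that node appears in the dumps above. Checking its agent status directly would confirm whether a corresponding regeneration-recovery controller is failing there as well; `--all-controllers` is a standard `cilium status` flag that lists every controller rather than a summary:

cmd: kubectl exec -n kube-system cilium-v2nvw -- cilium status --all-controllers
)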
===================== Exiting AfterFailed =====================
04:49:08 STEP: Running AfterEach for block EntireTestsuite K8sServicesTest
04:49:09 STEP: Running AfterEach for block EntireTestsuite

Metadata

    Labels

    area/CI: Continuous Integration testing issue or flake.
    area/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.
    ci/flake: This is a known failure that occurs in the tree. Please investigate me!
    stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
