Rancher 2 on Raspberry Pi 4 with RancherOS ARM64 fails to run #21534

@onedr0p

Description

What kind of request is this (question/bug/enhancement/feature request):
bug

Steps to reproduce (fewest steps possible):

  • Raspberry Pi 4 (4GB) model
  • Flash RancherOS 1.5.3 (rancheros-raspberry-pi64.zip)
  • Run Rancher 2 with the following Docker command:

docker run -d --privileged --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:v2.2.5-linux-arm64

Has anyone been able to deploy Rancher 2 on ARM64, specifically Raspberry Pis?
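For anyone reproducing this, a quick sanity check (a hedged sketch, not part of the original report) is to confirm that the pulled image really is the linux/arm64 build before starting it:

# Hypothetical check, not in the original report: ask the local Docker
# engine which OS/architecture the pulled image was built for.
docker image inspect rancher/rancher:v2.2.5-linux-arm64 \
  --format '{{.Os}}/{{.Architecture}}'
# This tag should report linux/arm64.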

Result:

[FATAL] error running the jail command: cp: cannot stat '/lib64': No such file or directory
: exit status 1

I can confirm rancher/rancher:v2.2.5-linux-arm64 has no /lib64 directory inside the container.
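A minimal way to double-check this (a hedged sketch, assuming the image ships a POSIX shell at /bin/sh) is to list the lib directories in a throwaway container:

# Hypothetical verification, not part of the original report.
# On this arm64 image only /lib is expected to exist; the missing /lib64
# is the path the jail command's cp step fails on in the log above.
docker run --rm --entrypoint /bin/sh rancher/rancher:v2.2.5-linux-arm64 \
  -c 'ls -ld /lib /lib64; uname -m'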

Full output from docker logs:

2019/07/15 22:33:33 [INFO] Rancher version v2.2.5 is starting
2019/07/15 22:33:33 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Embedded:false KubeConfig: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false NoCACerts:false ListenConfig:<nil> AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0}
2019/07/15 22:33:33 [INFO] Listening on /tmp/log.sock
2019/07/15 22:33:33 [INFO] Running etcd --data-dir=management-state/etcd
2019-07-15 22:33:33.049761 W | etcdmain: running etcd on unsupported architecture "arm64" since ETCD_UNSUPPORTED_ARCH is set
2019-07-15 22:33:33.054484 W | pkg/flags: unrecognized environment variable ETCD_UNSUPPORTED_ARCH=arm64
2019-07-15 22:33:33.055177 I | etcdmain: etcd Version: 3.2.13
2019-07-15 22:33:33.055618 I | etcdmain: Git SHA: Not provided (use ./build instead of go build)
2019-07-15 22:33:33.055751 I | etcdmain: Go Version: go1.11
2019-07-15 22:33:33.055845 I | etcdmain: Go OS/Arch: linux/arm64
2019-07-15 22:33:33.056597 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2019-07-15 22:33:33.057250 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2019-07-15 22:33:33.058829 I | embed: listening for peers on http://localhost:2380
2019-07-15 22:33:33.059986 I | embed: listening for client requests on localhost:2379
2019-07-15 22:33:33.074439 I | etcdserver: name = default
2019-07-15 22:33:33.074604 I | etcdserver: data dir = management-state/etcd
2019-07-15 22:33:33.074711 I | etcdserver: member dir = management-state/etcd/member
2019-07-15 22:33:33.074800 I | etcdserver: heartbeat = 100ms
2019-07-15 22:33:33.074910 I | etcdserver: election = 1000ms
2019-07-15 22:33:33.075001 I | etcdserver: snapshot count = 100000
2019-07-15 22:33:33.075215 I | etcdserver: advertise client URLs = http://localhost:2379
2019-07-15 22:33:33.093126 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 524
2019-07-15 22:33:33.093605 I | raft: 8e9e05c52164694d became follower at term 17
2019-07-15 22:33:33.093882 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 17, commit: 524, applied: 0, lastindex: 524, lastterm: 17]
2019-07-15 22:33:33.109352 W | auth: simple token is not cryptographically signed
2019-07-15 22:33:33.114353 I | etcdserver: starting server... [version: 3.2.13, cluster version: to_be_decided]
2019-07-15 22:33:33.118197 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
2019-07-15 22:33:33.119008 N | etcdserver/membership: set the initial cluster version to 3.2
2019-07-15 22:33:33.119375 I | etcdserver/api: enabled capabilities for version 3.2
2019-07-15 22:33:34.095212 I | raft: 8e9e05c52164694d is starting a new election at term 17
2019-07-15 22:33:34.095591 I | raft: 8e9e05c52164694d became candidate at term 18
2019-07-15 22:33:34.095706 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 18
2019-07-15 22:33:34.095805 I | raft: 8e9e05c52164694d became leader at term 18
2019-07-15 22:33:34.095875 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 18
2019-07-15 22:33:34.097203 I | embed: ready to serve client requests
2019-07-15 22:33:34.097918 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
2019-07-15 22:33:34.098759 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
I0715 22:33:34.103493       8 server.go:525] external host was not specified, using 127.0.0.1
I0715 22:33:37.973413       8 http.go:110] HTTP2 has been explicitly disabled
I0715 22:33:37.983143       8 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,ServiceAccount,DefaultStorageClass.
I0715 22:33:37.983255       8 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ServiceAccount.
I0715 22:33:37.987046       8 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,ServiceAccount,DefaultStorageClass.
I0715 22:33:37.987195       8 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ServiceAccount.
I0715 22:33:38.036479       8 master.go:215] Using reconciler: lease
W0715 22:33:38.655525       8 genericapiserver.go:306] Skipping API batch/v2alpha1 because it has no resources.
I0715 22:33:38.931169       8 secure_serving.go:116] Serving securely on 127.0.0.1:6443
I0715 22:33:38.945662       8 crd_finalizer.go:242] Starting CRDFinalizer
I0715 22:33:38.953982       8 http.go:110] HTTP2 has been explicitly disabled
I0715 22:33:38.975708       8 customresource_discovery_controller.go:199] Starting DiscoveryController
I0715 22:33:38.976961       8 naming_controller.go:284] Starting NamingConditionController
I0715 22:33:38.977674       8 establishing_controller.go:73] Starting EstablishingController
I0715 22:33:38.995576       8 controllermanager.go:135] Version: v1.12.2-lite5
I0715 22:33:39.000708       8 deprecated_insecure_serving.go:50] Serving insecurely on [::]:10252
I0715 22:33:39.056375       8 server.go:128] Version: v1.12.2-lite5
W0715 22:33:39.074936       8 defaults.go:210] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0715 22:33:39.085032       8 authorization.go:47] Authorization is disabled
W0715 22:33:39.085874       8 authentication.go:55] Authentication is disabled
I0715 22:33:39.086440       8 deprecated_insecure_serving.go:48] Serving healthz insecurely on [::]:10251
2019/07/15 22:33:39 [INFO] Running in single server mode, will not peer connections
I0715 22:33:39.971510       8 storage_scheduling.go:96] all system priority classes are created successfully or already exist.
I0715 22:33:40.566031       8 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0715 22:33:40.666610       8 controller_utils.go:1034] Caches are synced for scheduler controller
I0715 22:33:40.666943       8 leaderelection.go:187] attempting to acquire leader lease  kube-system/kube-scheduler...
I0715 22:33:42.147318       8 controllermanager.go:455] Started "daemonset"
I0715 22:33:42.148798       8 daemon_controller.go:270] Starting daemon sets controller
I0715 22:33:42.149782       8 controller_utils.go:1027] Waiting for caches to sync for daemon sets controller
I0715 22:33:42.150758       8 controller_utils.go:1027] Waiting for caches to sync for tokens controller
I0715 22:33:42.152276       8 controllermanager.go:455] Started "replicaset"
I0715 22:33:42.154565       8 controllermanager.go:455] Started "disruption"
I0715 22:33:42.156679       8 taint_manager.go:190] Sending events to api server.
I0715 22:33:42.157112       8 node_lifecycle_controller.go:324] Controller will taint node by condition.
I0715 22:33:42.157348       8 controllermanager.go:455] Started "nodelifecycle"
I0715 22:33:42.160002       8 controllermanager.go:455] Started "persistentvolume-binder"
I0715 22:33:42.162467       8 controllermanager.go:455] Started "endpoint"
W0715 22:33:42.165406       8 garbagecollector.go:649] failed to discover preferred resources: the cache has not been filled yet
I0715 22:33:42.166218       8 controllermanager.go:455] Started "garbagecollector"
I0715 22:33:42.168910       8 controllermanager.go:455] Started "cronjob"
I0715 22:33:42.171928       8 controllermanager.go:455] Started "persistentvolume-expander"
I0715 22:33:42.174941       8 controllermanager.go:455] Started "replicationcontroller"
I0715 22:33:42.178741       8 replica_set.go:182] Starting replicaset controller
I0715 22:33:42.179291       8 controller_utils.go:1027] Waiting for caches to sync for ReplicaSet controller
I0715 22:33:42.179771       8 disruption.go:288] Starting disruption controller
I0715 22:33:42.180181       8 controller_utils.go:1027] Waiting for caches to sync for disruption controller
I0715 22:33:42.180624       8 node_lifecycle_controller.go:361] Starting node controller
I0715 22:33:42.180939       8 controller_utils.go:1027] Waiting for caches to sync for taint controller
I0715 22:33:42.181416       8 pv_controller_base.go:268] Starting persistent volume controller
I0715 22:33:42.181688       8 controller_utils.go:1027] Waiting for caches to sync for persistent volume controller
I0715 22:33:42.182197       8 endpoints_controller.go:149] Starting endpoint controller
I0715 22:33:42.182441       8 controller_utils.go:1027] Waiting for caches to sync for endpoint controller
I0715 22:33:42.182886       8 garbagecollector.go:133] Starting garbage collector controller
I0715 22:33:42.183213       8 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
I0715 22:33:42.184609       8 cronjob_controller.go:94] Starting CronJob Manager
I0715 22:33:42.186180       8 expand_controller.go:147] Starting expand controller
I0715 22:33:42.186657       8 controller_utils.go:1027] Waiting for caches to sync for expand controller
I0715 22:33:42.187191       8 replica_set.go:182] Starting replicationcontroller controller
I0715 22:33:42.187539       8 controller_utils.go:1027] Waiting for caches to sync for ReplicationController controller
I0715 22:33:42.188194       8 graph_builder.go:308] GraphBuilder running
I0715 22:33:42.281398       8 controller_utils.go:1034] Caches are synced for tokens controller
2019-07-15 22:33:42.282359 I | http: TLS handshake error from 127.0.0.1:51574: EOF
2019-07-15 22:33:42.451229 I | http: TLS handshake error from 127.0.0.1:51560: EOF
I0715 22:33:42.595750       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for { podtemplates}
I0715 22:33:42.596247       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {policy poddisruptionbudgets}
I0715 22:33:42.596508       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {rbac.authorization.k8s.io rolebindings}
I0715 22:33:42.596960       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {apps daemonsets}
I0715 22:33:42.597483       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {autoscaling horizontalpodautoscalers}
I0715 22:33:42.597692       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {rbac.authorization.k8s.io roles}
I0715 22:33:42.598201       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {extensions deployments}
I0715 22:33:42.598581       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {apps controllerrevisions}
I0715 22:33:42.599042       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for { endpoints}
I0715 22:33:42.599671       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {batch cronjobs}
I0715 22:33:42.600305       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {networking.k8s.io networkpolicies}
W0715 22:33:42.600544       8 shared_informer.go:312] resyncPeriod 69355973484695 is smaller than resyncCheckPeriod 84858032273712 and the informer has already started. Changing it to 84858032273712
I0715 22:33:42.600893       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {extensions daemonsets}
I0715 22:33:42.601117       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {apps replicasets}
I0715 22:33:42.601364       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {apps statefulsets}
W0715 22:33:42.601979       8 shared_informer.go:312] resyncPeriod 63949317566585 is smaller than resyncCheckPeriod 84858032273712 and the informer has already started. Changing it to 84858032273712
I0715 22:33:42.602323       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for { serviceaccounts}
I0715 22:33:42.602883       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {extensions replicasets}
I0715 22:33:42.603557       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for { limitranges}
I0715 22:33:42.603943       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {apps deployments}
I0715 22:33:42.604480       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {extensions ingresses}
I0715 22:33:42.605531       8 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for {batch jobs}
E0715 22:33:42.606992       8 resource_quota_controller.go:173] initial monitor sync has error: [couldn't start monitor for resource {"project.cattle.io" "v3" "pipelinesettings"}: unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelinesettings", couldn't start monitor for resource {"management.cattle.io" "v3" "globaldnsproviders"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=globaldnsproviders", couldn't start monitor for resource {"management.cattle.io" "v3" "projectmonitorgraphs"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=projectmonitorgraphs", couldn't start monitor for resource {"management.cattle.io" "v3" "projectroletemplatebindings"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=projectroletemplatebindings", couldn't start monitor for resource {"project.cattle.io" "v3" "apps"}: unable to monitor quota for resource "project.cattle.io/v3, Resource=apps", couldn't start monitor for resource {"management.cattle.io" "v3" "projectnetworkpolicies"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=projectnetworkpolicies", couldn't start monitor for resource {"management.cattle.io" "v3" "preferences"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=preferences", couldn't start monitor for resource {"management.cattle.io" "v3" "clusteralertrules"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralertrules", couldn't start monitor for resource {"management.cattle.io" "v3" "clusterloggings"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterloggings", couldn't start monitor for resource {"project.cattle.io" "v3" "sourcecodecredentials"}: unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecodecredentials", couldn't start monitor for resource {"management.cattle.io" "v3" "projectalertgroups"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=projectalertgroups", couldn't start monitor for resource {"management.cattle.io" "v3" "projectcatalogs"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=projectcatalogs", couldn't start monitor for resource {"management.cattle.io" "v3" "monitormetrics"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=monitormetrics", couldn't start monitor for resource {"management.cattle.io" "v3" "clusteralerts"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralerts", couldn't start monitor for resource {"extensions" "v1beta1" "networkpolicies"}: unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource {"management.cattle.io" "v3" "clustermonitorgraphs"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=clustermonitorgraphs", couldn't start monitor for resource {"management.cattle.io" "v3" "multiclusterapprevisions"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=multiclusterapprevisions", couldn't start monitor for resource {"management.cattle.io" "v3" "nodetemplates"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=nodetemplates", couldn't start monitor for resource {"management.cattle.io" "v3" "nodes"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=nodes", couldn't start monitor for resource {"project.cattle.io" "v3" "pipelineexecutions"}: unable to monitor quota for 
resource "project.cattle.io/v3, Resource=pipelineexecutions", couldn't start monitor for resource {"project.cattle.io" "v3" "pipelines"}: unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelines", couldn't start monitor for resource {"management.cattle.io" "v3" "projectalertrules"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=projectalertrules", couldn't start monitor for resource {"management.cattle.io" "v3" "projectloggings"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=projectloggings", couldn't start monitor for resource {"project.cattle.io" "v3" "apprevisions"}: unable to monitor quota for resource "project.cattle.io/v3, Resource=apprevisions", couldn't start monitor for resource {"management.cattle.io" "v3" "nodepools"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=nodepools", couldn't start monitor for resource {"management.cattle.io" "v3" "catalogtemplateversions"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=catalogtemplateversions", couldn't start monitor for resource {"management.cattle.io" "v3" "notifiers"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=notifiers", couldn't start monitor for resource {"project.cattle.io" "v3" "sourcecodeproviderconfigs"}: unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecodeproviderconfigs", couldn't start monitor for resource {"management.cattle.io" "v3" "projects"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=projects", couldn't start monitor for resource {"management.cattle.io" "v3" "clustercatalogs"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=clustercatalogs", couldn't start monitor for resource {"management.cattle.io" "v3" "catalogtemplates"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=catalogtemplates", couldn't start monitor for resource {"project.cattle.io" "v3" "sourcecoderepositories"}: unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecoderepositories", couldn't start monitor for resource {"management.cattle.io" "v3" "podsecuritypolicytemplateprojectbindings"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=podsecuritypolicytemplateprojectbindings", couldn't start monitor for resource {"management.cattle.io" "v3" "clusterroletemplatebindings"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterroletemplatebindings", couldn't start monitor for resource {"management.cattle.io" "v3" "globaldnses"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=globaldnses", couldn't start monitor for resource {"management.cattle.io" "v3" "multiclusterapps"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=multiclusterapps", couldn't start monitor for resource {"management.cattle.io" "v3" "etcdbackups"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=etcdbackups", couldn't start monitor for resource {"management.cattle.io" "v3" "clusteralertgroups"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralertgroups", couldn't start monitor for resource {"management.cattle.io" "v3" "clusterregistrationtokens"}: unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterregistrationtokens", couldn't start monitor for resource {"management.cattle.io" "v3" "projectalerts"}: unable to monitor 
quota for resource "management.cattle.io/v3, Resource=projectalerts"]
I0715 22:33:42.619611       8 controllermanager.go:455] Started "resourcequota"
I0715 22:33:42.622470       8 controllermanager.go:455] Started "job"
I0715 22:33:42.625526       8 controllermanager.go:455] Started "service"
I0715 22:33:42.627673       8 controllermanager.go:455] Started "clusterrole-aggregation"
I0715 22:33:42.630568       8 controllermanager.go:455] Started "pvc-protection"
I0715 22:33:42.633390       8 resource_quota_controller.go:278] Starting resource quota controller
I0715 22:33:42.633497       8 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
I0715 22:33:42.635544       8 service_controller.go:175] Starting service controller
I0715 22:33:42.636271       8 controller_utils.go:1027] Waiting for caches to sync for service controller
I0715 22:33:42.636594       8 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
I0715 22:33:42.636639       8 controller_utils.go:1027] Waiting for caches to sync for ClusterRoleAggregator controller
I0715 22:33:42.636764       8 pvc_protection_controller.go:99] Starting PVC protection controller
I0715 22:33:42.636816       8 controller_utils.go:1027] Waiting for caches to sync for PVC protection controller
I0715 22:33:42.640482       8 job_controller.go:143] Starting job controller
I0715 22:33:42.641618       8 controller_utils.go:1027] Waiting for caches to sync for job controller
I0715 22:33:42.642660       8 resource_quota_monitor.go:301] QuotaMonitor running
I0715 22:33:42.698395       8 controllermanager.go:455] Started "namespace"
I0715 22:33:42.700582       8 controllermanager.go:455] Started "serviceaccount"
I0715 22:33:42.705021       8 controllermanager.go:455] Started "horizontalpodautoscaling"
I0715 22:33:42.707709       8 controllermanager.go:455] Started "statefulset"
I0715 22:33:42.709753       8 controllermanager.go:455] Started "ttl"
I0715 22:33:42.711758       8 node_ipam_controller.go:95] Sending events to api server.
I0715 22:33:42.712793       8 namespace_controller.go:186] Starting namespace controller
I0715 22:33:42.713192       8 controller_utils.go:1027] Waiting for caches to sync for namespace controller
I0715 22:33:42.713627       8 serviceaccounts_controller.go:115] Starting service account controller
I0715 22:33:42.713890       8 controller_utils.go:1027] Waiting for caches to sync for service account controller
I0715 22:33:42.714274       8 horizontal.go:156] Starting HPA controller
I0715 22:33:42.714474       8 controller_utils.go:1027] Waiting for caches to sync for HPA controller
I0715 22:33:42.714863       8 stateful_set.go:151] Starting stateful set controller
I0715 22:33:42.715048       8 controller_utils.go:1027] Waiting for caches to sync for stateful set controller
I0715 22:33:42.715467       8 ttl_controller.go:116] Starting TTL controller
I0715 22:33:42.715715       8 controller_utils.go:1027] Waiting for caches to sync for TTL controller
2019/07/15 22:33:44 [INFO] password fields {"elasticsearchConfig":{"authPassword":".."},"fluentForwarderConfig":{"fluentServers":{"password":"..","sharedKey":".."}},"kafkaConfig":{"saslPassword":".."},"splunkConfig":{"token":".."},"syslogConfig":{"token":".."}}
2019/07/15 22:33:44 [INFO] password fields {"elasticsearchConfig":{"authPassword":".."},"fluentForwarderConfig":{"fluentServers":{"password":"..","sharedKey":".."}},"kafkaConfig":{"saslPassword":".."},"splunkConfig":{"token":".."},"syslogConfig":{"token":".."}}
2019/07/15 22:33:45 [INFO] Starting API controllers
2019/07/15 22:33:47 [FATAL] error running the jail command: cp: cannot stat '/lib64': No such file or directory
: exit status 1
2019/07/15 22:33:51 [INFO] Rancher version v2.2.5 is starting
2019/07/15 22:33:51 [INFO] Listening on /tmp/log.sock
2019/07/15 22:33:51 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Embedded:false KubeConfig: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false NoCACerts:false ListenConfig:<nil> AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0}
2019/07/15 22:33:51 [INFO] Running etcd --data-dir=management-state/etcd
2019-07-15 22:33:51.967524 W | etcdmain: running etcd on unsupported architecture "arm64" since ETCD_UNSUPPORTED_ARCH is set
2019-07-15 22:33:51.969180 W | pkg/flags: unrecognized environment variable ETCD_UNSUPPORTED_ARCH=arm64
2019-07-15 22:33:51.969382 I | etcdmain: etcd Version: 3.2.13
2019-07-15 22:33:51.969424 I | etcdmain: Git SHA: Not provided (use ./build instead of go build)
2019-07-15 22:33:51.969460 I | etcdmain: Go Version: go1.11
2019-07-15 22:33:51.969493 I | etcdmain: Go OS/Arch: linux/arm64
2019-07-15 22:33:51.969548 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2019-07-15 22:33:51.970269 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2019-07-15 22:33:51.971274 I | embed: listening for peers on http://localhost:2380
2019-07-15 22:33:51.972119 I | embed: listening for client requests on localhost:2379
2019-07-15 22:33:51.985520 I | etcdserver: name = default
2019-07-15 22:33:51.985615 I | etcdserver: data dir = management-state/etcd
2019-07-15 22:33:51.985657 I | etcdserver: member dir = management-state/etcd/member
2019-07-15 22:33:51.985693 I | etcdserver: heartbeat = 100ms
2019-07-15 22:33:51.985726 I | etcdserver: election = 1000ms
2019-07-15 22:33:51.985763 I | etcdserver: snapshot count = 100000
2019-07-15 22:33:51.985853 I | etcdserver: advertise client URLs = http://localhost:2379
2019-07-15 22:33:52.033294 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 531
2019-07-15 22:33:52.034316 I | raft: 8e9e05c52164694d became follower at term 18
2019-07-15 22:33:52.034472 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 18, commit: 531, applied: 0, lastindex: 531, lastterm: 18]
2019-07-15 22:33:52.053646 W | auth: simple token is not cryptographically signed
2019-07-15 22:33:52.060607 I | etcdserver: starting server... [version: 3.2.13, cluster version: to_be_decided]
2019-07-15 22:33:52.064521 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
2019-07-15 22:33:52.065304 N | etcdserver/membership: set the initial cluster version to 3.2
2019-07-15 22:33:52.065606 I | etcdserver/api: enabled capabilities for version 3.2
2019-07-15 22:33:52.936269 I | raft: 8e9e05c52164694d is starting a new election at term 18
2019-07-15 22:33:52.936623 I | raft: 8e9e05c52164694d became candidate at term 19
2019-07-15 22:33:52.936726 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 19
2019-07-15 22:33:52.936831 I | raft: 8e9e05c52164694d became leader at term 19
2019-07-15 22:33:52.936902 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 19
2019-07-15 22:33:52.941427 I | embed: ready to serve client requests
2019-07-15 22:33:52.941977 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
2019-07-15 22:33:52.942916 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
I0715 22:33:52.976270       8 server.go:525] external host was not specified, using 127.0.0.1
I0715 22:34:04.527962       8 http.go:110] HTTP2 has been explicitly disabled
I0715 22:34:04.544197       8 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,ServiceAccount,DefaultStorageClass.
I0715 22:34:04.544375       8 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ServiceAccount.
I0715 22:34:04.549977       8 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,ServiceAccount,DefaultStorageClass.
I0715 22:34:04.550161       8 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ServiceAccount.
I0715 22:34:04.619785       8 master.go:215] Using reconciler: lease
W0715 22:34:05.214639       8 genericapiserver.go:306] Skipping API batch/v2alpha1 because it has no resources.
I0715 22:34:05.539045       8 secure_serving.go:116] Serving securely on 127.0.0.1:6443
I0715 22:34:05.547803       8 crd_finalizer.go:242] Starting CRDFinalizer
I0715 22:34:05.549358       8 customresource_discovery_controller.go:199] Starting DiscoveryController
I0715 22:34:05.550368       8 naming_controller.go:284] Starting NamingConditionController
I0715 22:34:05.551462       8 establishing_controller.go:73] Starting EstablishingController
I0715 22:34:05.557957       8 http.go:110] HTTP2 has been explicitly disabled
I0715 22:34:05.586300       8 server.go:128] Version: v1.12.2-lite5
W0715 22:34:05.586586       8 defaults.go:210] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0715 22:34:05.593512       8 authorization.go:47] Authorization is disabled
W0715 22:34:05.594505       8 authentication.go:55] Authentication is disabled
I0715 22:34:05.595208       8 deprecated_insecure_serving.go:48] Serving healthz insecurely on [::]:10251
I0715 22:34:05.595727       8 controllermanager.go:135] Version: v1.12.2-lite5
I0715 22:34:05.600722       8 deprecated_insecure_serving.go:50] Serving insecurely on [::]:10252
2019/07/15 22:34:06 [INFO] Running in single server mode, will not peer connections
I0715 22:34:06.597215       8 storage_scheduling.go:96] all system priority classes are created successfully or already exist.
I0715 22:34:07.076650       8 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0715 22:34:07.177169       8 controller_utils.go:1034] Caches are synced for scheduler controller
I0715 22:34:07.200260       8 leaderelection.go:187] attempting to acquire leader lease  kube-system/kube-scheduler...
I0715 22:34:08.426697       8 controllermanager.go:455] Started "horizontalpodautoscaling"
I0715 22:34:08.431735       8 node_ipam_controller.go:95] Sending events to api server.
I0715 22:34:08.433017       8 controller_utils.go:1027] Waiting for caches to sync for tokens controller
I0715 22:34:08.438368       8 horizontal.go:156] Starting HPA controller
I0715 22:34:08.438604       8 controller_utils.go:1027] Waiting for caches to sync for HPA controller
I0715 22:34:08.575396       8 controller_utils.go:1034] Caches are synced for tokens controller
2019/07/15 22:34:10 [INFO] password fields {"elasticsearchConfig":{"authPassword":".."},"fluentForwarderConfig":{"fluentServers":{"password":"..","sharedKey":".."}},"kafkaConfig":{"saslPassword":".."},"splunkConfig":{"token":".."},"syslogConfig":{"token":".."}}
2019/07/15 22:34:10 [INFO] password fields {"elasticsearchConfig":{"authPassword":".."},"fluentForwarderConfig":{"fluentServers":{"password":"..","sharedKey":".."}},"kafkaConfig":{"saslPassword":".."},"splunkConfig":{"token":".."},"syslogConfig":{"token":".."}}
2019/07/15 22:34:11 [INFO] Starting API controllers
2019/07/15 22:34:13 [FATAL] error running the jail command: cp: cannot stat '/lib64': No such file or directory
: exit status 1
$ docker version
Client:
 Version:           18.06.3-ce
 API version:       1.38
 Go version:        go1.10.4
 Git commit:        d7080c1
 Built:             Wed Feb 20 02:24:33 2019
 OS/Arch:           linux/arm64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.3-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       d7080c1
  Built:            Wed Feb 20 02:26:52 2019
  OS/Arch:          linux/arm64
  Experimental:     false

Labels

area/arm, area/server, kind/bug
