* 
* ==> Audit <==
* 
|---------|---------------------------|----------|---------|--------------|---------------------|---------------------|
| Command |           Args            | Profile  |  User   |   Version    |     Start Time      |      End Time       |
|---------|---------------------------|----------|---------|--------------|---------------------|---------------------|
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 14:44 -04 | 01 May 23 14:44 -04 |
| start   | --driver=docker --nodes 2 | minikube | shotler | v1.30.1      | 01 May 23 14:44 -04 |                     |
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 14:46 -04 | 01 May 23 14:46 -04 |
| start   | --driver=docker --nodes 2 | minikube | shotler | v1.30.1      | 01 May 23 14:47 -04 |                     |
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 14:49 -04 | 01 May 23 14:49 -04 |
| start   | --driver=docker --nodes 2 | minikube | shotler | v1.30.1      | 01 May 23 14:50 -04 |                     |
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 14:55 -04 | 01 May 23 14:55 -04 |
| start   | --driver=docker --nodes 2 | minikube | shotler | v1.30.1      | 01 May 23 15:03 -04 | 01 May 23 15:04 -04 |
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 15:11 -04 | 01 May 23 15:11 -04 |
| start   | --driver=docker --nodes 2 | minikube | shotler | v1.30.1      | 01 May 23 15:11 -04 | 01 May 23 15:11 -04 |
| node    | delete minikube-m02       | minikube | shotler | v1.30.1      | 01 May 23 15:13 -04 | 01 May 23 15:13 -04 |
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 15:13 -04 | 01 May 23 15:13 -04 |
| start   | --driver=docker --nodes 2 | minikube | shotler | v1.30.0      | 01 May 23 15:13 -04 | 01 May 23 15:14 -04 |
| delete  |                           | minikube | shotler | v1.30.0      | 01 May 23 15:17 -04 | 01 May 23 15:17 -04 |
| start   | --driver=docker --nodes 2 | minikube | shotler | v1.30.1      | 01 May 23 15:17 -04 |                     |
| delete  |                           | minikube | shotler | v1.30.0      | 01 May 23 15:26 -04 | 01 May 23 15:27 -04 |
| start   | --driver=docker           | minikube | shotler | v1.30.1      | 01 May 23 15:27 -04 | 01 May 23 15:27 -04 |
| node    | add                       | minikube | shotler | v1.30.1      | 01 May 23 15:28 -04 |                     |
| node    | delete minikube-m02       | minikube | shotler | v1.30.1      | 01 May 23 15:30 -04 | 01 May 23 15:30 -04 |
| node    | add                       | minikube | shotler | v1.30.0      | 01 May 23 15:30 -04 | 01 May 23 15:30 -04 |
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 15:37 -04 | 01 May 23 15:37 -04 |
| node    |                           | minikube | shotler | v0.0.0-unset | 01 May 23 16:02 -04 |                     |
| node    | list                      | minikube | shotler | v0.0.0-unset | 01 May 23 16:03 -04 |                     |
| start   | --driver=docker           | minikube | shotler | v0.0.0-unset | 01 May 23 16:03 -04 |                     |
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 16:05 -04 | 01 May 23 16:05 -04 |
| start   | --driver=docker           | minikube | shotler | v0.0.0-unset | 01 May 23 16:12 -04 |                     |
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 16:14 -04 | 01 May 23 16:14 -04 |
| start   | --driver=docker           | minikube | shotler | v0.0.0-unset | 01 May 23 16:17 -04 |                     |
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 16:18 -04 | 01 May 23 16:18 -04 |
| node    | delete minikube-m02       | minikube | shotler | v0.0.0-unset | 01 May 23 16:18 -04 |                     |
| start   | --driver=docker           | minikube | shotler | v0.0.0-unset | 01 May 23 16:19 -04 |                     |
| start   | --driver=docker           | minikube | shotler | v0.0.0-unset | 01 May 23 16:26 -04 |                     |
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 16:29 -04 | 01 May 23 16:29 -04 |
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 16:29 -04 | 01 May 23 16:29 -04 |
| start   | --driver=docker           | minikube | shotler | v0.0.0-unset | 01 May 23 19:38 -04 | 01 May 23 19:48 -04 |
| node    | add                       | minikube | shotler | v0.0.0-unset | 01 May 23 19:49 -04 | 01 May 23 19:49 -04 |
| node    | delete minikube-m02       | minikube | shotler | v0.0.0-unset | 01 May 23 19:49 -04 | 01 May 23 19:50 -04 |
| node    | add                       | minikube | shotler | v0.0.0-unset | 01 May 23 19:50 -04 | 01 May 23 19:50 -04 |
| node    | delete minikube-m02       | minikube | shotler | v0.0.0-unset | 01 May 23 19:51 -04 | 01 May 23 19:51 -04 |
| node    | add                       | minikube | shotler | v0.0.0-unset | 01 May 23 19:52 -04 | 01 May 23 19:52 -04 |
| node    | delete minikube-m02       | minikube | shotler | v0.0.0-unset | 01 May 23 19:52 -04 | 01 May 23 19:52 -04 |
| node    | add                       | minikube | shotler | v0.0.0-unset | 01 May 23 19:53 -04 | 01 May 23 19:53 -04 |
| node    | delete minikube-m02       | minikube | shotler | v0.0.0-unset | 01 May 23 19:54 -04 | 01 May 23 19:55 -04 |
| node    | add                       | minikube | shotler | v0.0.0-unset | 01 May 23 19:55 -04 | 01 May 23 19:57 -04 |
| node    | delete minikube-m02       | minikube | shotler | v0.0.0-unset | 01 May 23 19:58 -04 | 01 May 23 19:58 -04 |
| node    | add                       | minikube | shotler | v0.0.0-unset | 01 May 23 19:58 -04 |                     |
| node    | add                       | minikube | shotler | v0.0.0-unset | 01 May 23 19:59 -04 | 01 May 23 20:05 -04 |
| node    | delete minikube-m02       | minikube | shotler | v0.0.0-unset | 01 May 23 20:06 -04 | 01 May 23 20:06 -04 |
| delete  |                           | minikube | shotler | v0.0.0-unset | 01 May 23 20:10 -04 | 01 May 23 20:10 -04 |
| start   | --driver=docker --nodes 2 | minikube | shotler | v0.0.0-unset | 01 May 23 20:10 -04 | 01 May 23 20:33 -04 |
| delete  |                           | minikube | shotler | v0.0.0-unset | 01 May 23 20:39 -04 | 01 May 23 20:39 -04 |
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 20:39 -04 | 01 May 23 20:40 -04 |
| start   | --driver=docker --nodes 2 | minikube | shotler | v0.0.0-unset | 01 May 23 20:40 -04 | 01 May 23 20:42 -04 |
| delete  |                           | minikube | shotler | v0.0.0-unset | 01 May 23 20:43 -04 | 01 May 23 20:43 -04 |
| start   | --driver=docker --nodes 4 | minikube | shotler | v1.30.1      | 01 May 23 20:46 -04 | 01 May 23 20:47 -04 |
| node    | add                       | minikube | shotler | v1.30.1      | 01 May 23 20:49 -04 | 01 May 23 20:49 -04 |
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 21:00 -04 | 01 May 23 21:00 -04 |
| start   | --driver=docker --nodes 4 | minikube | shotler | v1.30.1      | 01 May 23 21:08 -04 | 01 May 23 21:09 -04 |
| delete  |                           | minikube | shotler | v1.30.1      | 01 May 23 22:06 -04 | 01 May 23 22:06 -04 |
| start   | --driver=docker --nodes 4 | minikube | shotler | v1.30.0      | 01 May 23 22:06 -04 | 01 May 23 22:07 -04 |
|---------|---------------------------|----------|---------|--------------|---------------------|---------------------|
* 
* ==> Last Start <==
* 
Log file created at: 2023/05/01 22:06:40
Running on machine: temple
Binary: Built with gc go1.20.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0501 22:06:40.463595 325086 out.go:296] Setting OutFile to fd 1 ...
I0501 22:06:40.463921 325086 out.go:348] isatty.IsTerminal(1) = true
I0501 22:06:40.463923 325086 out.go:309] Setting ErrFile to fd 2...
I0501 22:06:40.463926 325086 out.go:348] isatty.IsTerminal(2) = true
I0501 22:06:40.463997 325086 root.go:336] Updating PATH: /home/shotler/.minikube/bin
I0501 22:06:40.464591 325086 out.go:303] Setting JSON to false
I0501 22:06:40.466509 325086 start.go:125] hostinfo: {"hostname":"temple","uptime":8934,"bootTime":1682984266,"procs":798,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"5.19.0-41-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"d6d9b5aa-c3a4-40d2-a26a-ecb80558464a"}
I0501 22:06:40.466534 325086 start.go:135] virtualization: kvm host
I0501 22:06:40.467437 325086 out.go:177] 😄  minikube v1.30.0 on Ubuntu 22.04
I0501 22:06:40.468628 325086 notify.go:220] Checking for updates...
I0501 22:06:40.469203 325086 driver.go:365] Setting default libvirt URI to qemu:///system
I0501 22:06:40.481072 325086 docker.go:121] docker version: linux-23.0.4:Docker Engine - Community
I0501 22:06:40.481125 325086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0501 22:06:40.504081 325086 info.go:266] docker info: {ID:6S7U:53FP:3OGM:6MHF:YE2L:CUYZ:OVED:SJF4:AAUR:LW6V:NFJM:EFCN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:false NGoroutines:32 SystemTime:2023-05-01 22:06:40.499658392 -0400 -04 LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.19.0-41-generic OperatingSystem:Ubuntu 22.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:32 MemTotal:33557098496 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:temple Labels:[] ExperimentalBuild:false ServerVersion:23.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2806fc1057397dbaeefbea0e4e17bddfbd388f38 Expected:2806fc1057397dbaeefbea0e4e17bddfbd388f38} RuncCommit:{ID:v1.1.5-0-gf19387a Expected:v1.1.5-0-gf19387a} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.4] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.17.2]] Warnings:}}
I0501 22:06:40.504137 325086 docker.go:294] overlay module found
I0501 22:06:40.504777 325086 out.go:177] ✨  Using the docker driver based on user configuration
I0501 22:06:40.505696 325086 start.go:295] selected driver: docker
I0501 22:06:40.505699 325086 start.go:859] validating driver "docker" against 
I0501 22:06:40.505705 325086 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0501 22:06:40.505771 325086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0501 22:06:40.527192 325086 info.go:266] docker info: {ID:6S7U:53FP:3OGM:6MHF:YE2L:CUYZ:OVED:SJF4:AAUR:LW6V:NFJM:EFCN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:false NGoroutines:32 SystemTime:2023-05-01 22:06:40.52336755 -0400 -04 LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.19.0-41-generic OperatingSystem:Ubuntu 22.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:32 MemTotal:33557098496 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:temple Labels:[] ExperimentalBuild:false ServerVersion:23.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2806fc1057397dbaeefbea0e4e17bddfbd388f38 Expected:2806fc1057397dbaeefbea0e4e17bddfbd388f38} RuncCommit:{ID:v1.1.5-0-gf19387a Expected:v1.1.5-0-gf19387a} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.4] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.17.2]] Warnings:}}
I0501 22:06:40.527251 325086 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0501 22:06:40.528296 325086 start_flags.go:386] Using suggested 2200MB memory alloc based on sys=32002MB, container=32002MB
I0501 22:06:40.528371 325086 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
I0501 22:06:40.528975 325086 out.go:177] 📌  Using Docker driver with root privileges
I0501 22:06:40.529610 325086 cni.go:84] Creating CNI manager for ""
I0501 22:06:40.529613 325086 cni.go:136] 0 nodes found, recommending kindnet
I0501 22:06:40.529617 325086 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
I0501 22:06:40.529621 325086 start_flags.go:319] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/shotler:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0501 22:06:40.530854 325086 out.go:177] 👍  Starting control plane node minikube in cluster minikube
I0501 22:06:40.531508 325086 cache.go:120] Beginning downloading kic base image for docker with docker
I0501 22:06:40.532120 325086 out.go:177] 🚜  Pulling base image ...
I0501 22:06:40.533146 325086 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0501 22:06:40.533163 325086 preload.go:148] Found local preload: /home/shotler/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
I0501 22:06:40.533167 325086 cache.go:57] Caching tarball of preloaded images
I0501 22:06:40.533204 325086 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 in local docker daemon
I0501 22:06:40.533226 325086 preload.go:174] Found /home/shotler/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0501 22:06:40.533231 325086 cache.go:60] Finished verifying existence of preloaded tar for v1.26.3 on docker
I0501 22:06:40.533425 325086 profile.go:148] Saving config to /home/shotler/.minikube/profiles/minikube/config.json ...
I0501 22:06:40.533433 325086 lock.go:35] WriteFile acquiring /home/shotler/.minikube/profiles/minikube/config.json: {Name:mkab2f6b2a62cb03ebac86327f4964effe447548 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0501 22:06:40.541141 325086 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 in local docker daemon, skipping pull
I0501 22:06:40.541145 325086 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 exists in daemon, skipping load
I0501 22:06:40.541151 325086 cache.go:193] Successfully downloaded all kic artifacts
I0501 22:06:40.541160 325086 start.go:364] acquiring machines lock for minikube: {Name:mk1600efbf43b688e3d7ed7f3390407a559b1fed Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0501 22:06:40.541178 325086 start.go:368] acquired machines lock for "minikube" in 13.36µs
I0501 22:06:40.541183 325086 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/shotler:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0501 22:06:40.541219 325086 start.go:125] createHost starting for "" (driver="docker")
I0501 22:06:40.541817 325086 out.go:204] 🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
I0501 22:06:40.542146 325086 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
I0501 22:06:40.542158 325086 client.go:168] LocalClient.Create starting
I0501 22:06:40.542342 325086 main.go:141] libmachine: Reading certificate data from /home/shotler/.minikube/certs/ca.pem
I0501 22:06:40.542356 325086 main.go:141] libmachine: Decoding PEM data...
I0501 22:06:40.542370 325086 main.go:141] libmachine: Parsing certificate...
I0501 22:06:40.542392 325086 main.go:141] libmachine: Reading certificate data from /home/shotler/.minikube/certs/cert.pem
I0501 22:06:40.542398 325086 main.go:141] libmachine: Decoding PEM data...
I0501 22:06:40.542404 325086 main.go:141] libmachine: Parsing certificate...
I0501 22:06:40.542582 325086 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0501 22:06:40.549333 325086 network_create.go:76] Found existing network {name:minikube subnet:0xc00164ea80 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
I0501 22:06:40.549347 325086 kic.go:117] calculated static IP "192.168.49.2" for the "minikube" container
I0501 22:06:40.549385 325086 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0501 22:06:40.556854 325086 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0501 22:06:40.564412 325086 oci.go:103] Successfully created a docker volume minikube
I0501 22:06:40.564446 325086 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 -d /var/lib
I0501 22:06:41.172312 325086 oci.go:107] Successfully prepared a docker volume minikube
I0501 22:06:41.172324 325086 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0501 22:06:41.172335 325086 kic.go:190] Starting extracting preloaded images to volume ...
I0501 22:06:41.172383 325086 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/shotler/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 -I lz4 -xf /preloaded.tar -C /extractDir
I0501 22:06:43.088452 325086 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/shotler/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 -I lz4 -xf /preloaded.tar -C /extractDir: (1.916037653s)
I0501 22:06:43.088475 325086 kic.go:199] duration metric: took 1.916137 seconds to extract preloaded images to volume
W0501 22:06:43.088538 325086 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0501 22:06:43.088553 325086 oci.go:240] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0501 22:06:43.088589 325086 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0501 22:06:43.111131 325086 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=2200mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93
I0501 22:06:43.415983 325086 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I0501 22:06:43.424740 325086 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0501 22:06:43.433915 325086 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0501 22:06:43.486706 325086 oci.go:144] the created container "minikube" has a running status.
I0501 22:06:43.486716 325086 kic.go:221] Creating ssh key for kic: /home/shotler/.minikube/machines/minikube/id_rsa...
I0501 22:06:43.605072 325086 kic_runner.go:191] docker (temp): /home/shotler/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0501 22:06:43.650790 325086 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0501 22:06:43.660169 325086 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0501 22:06:43.660175 325086 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0501 22:06:43.735366 325086 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0501 22:06:43.744409 325086 machine.go:88] provisioning docker machine ...
I0501 22:06:43.744425 325086 ubuntu.go:169] provisioning hostname "minikube"
I0501 22:06:43.744459 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 22:06:43.752926 325086 main.go:141] libmachine: Using SSH client type: native
I0501 22:06:43.753164 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32873 }
I0501 22:06:43.753170 325086 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0501 22:06:43.753536 325086 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35884->127.0.0.1:32873: read: connection reset by peer
I0501 22:06:46.856718 325086 main.go:141] libmachine: SSH cmd err, output: : minikube
I0501 22:06:46.856768 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 22:06:46.865635 325086 main.go:141] libmachine: Using SSH client type: native
I0501 22:06:46.865903 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32873 }
I0501 22:06:46.865912 325086 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0501 22:06:46.956051 325086 main.go:141] libmachine: SSH cmd err, output: : 
I0501 22:06:46.956063 325086 ubuntu.go:175] set auth options {CertDir:/home/shotler/.minikube CaCertPath:/home/shotler/.minikube/certs/ca.pem CaPrivateKeyPath:/home/shotler/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/shotler/.minikube/machines/server.pem ServerKeyPath:/home/shotler/.minikube/machines/server-key.pem ClientKeyPath:/home/shotler/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/shotler/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/shotler/.minikube}
I0501 22:06:46.956076 325086 ubuntu.go:177] setting up certificates
I0501 22:06:46.956086 325086 provision.go:83] configureAuth start
I0501 22:06:46.956134 325086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0501 22:06:46.964630 325086 provision.go:138] copyHostCerts
I0501 22:06:46.964652 325086 exec_runner.go:144] found /home/shotler/.minikube/ca.pem, removing ...
I0501 22:06:46.964655 325086 exec_runner.go:207] rm: /home/shotler/.minikube/ca.pem
I0501 22:06:46.964700 325086 exec_runner.go:151] cp: /home/shotler/.minikube/certs/ca.pem --> /home/shotler/.minikube/ca.pem (1078 bytes)
I0501 22:06:46.964751 325086 exec_runner.go:144] found /home/shotler/.minikube/cert.pem, removing ...
I0501 22:06:46.964753 325086 exec_runner.go:207] rm: /home/shotler/.minikube/cert.pem
I0501 22:06:46.964771 325086 exec_runner.go:151] cp: /home/shotler/.minikube/certs/cert.pem --> /home/shotler/.minikube/cert.pem (1123 bytes)
I0501 22:06:46.964808 325086 exec_runner.go:144] found /home/shotler/.minikube/key.pem, removing ...
I0501 22:06:46.964810 325086 exec_runner.go:207] rm: /home/shotler/.minikube/key.pem
I0501 22:06:46.964826 325086 exec_runner.go:151] cp: /home/shotler/.minikube/certs/key.pem --> /home/shotler/.minikube/key.pem (1679 bytes)
I0501 22:06:46.964858 325086 provision.go:112] generating server cert: /home/shotler/.minikube/machines/server.pem ca-key=/home/shotler/.minikube/certs/ca.pem private-key=/home/shotler/.minikube/certs/ca-key.pem org=shotler.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0501 22:06:47.001661 325086 provision.go:172] copyRemoteCerts
I0501 22:06:47.001697 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0501 22:06:47.001720 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 22:06:47.009969 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32873 SSHKeyPath:/home/shotler/.minikube/machines/minikube/id_rsa Username:docker}
I0501 22:06:47.081873 325086 ssh_runner.go:362] scp /home/shotler/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0501 22:06:47.091863 325086 ssh_runner.go:362] scp /home/shotler/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
I0501 22:06:47.100781 325086 ssh_runner.go:362] scp /home/shotler/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0501 22:06:47.109416 325086 provision.go:86] duration metric: configureAuth took 153.326246ms
I0501 22:06:47.109422 325086 ubuntu.go:193] setting minikube options for container-runtime
I0501 22:06:47.109513 325086 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0501 22:06:47.109548 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 22:06:47.118321 325086 main.go:141] libmachine: Using SSH client type: native
I0501 22:06:47.118569 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32873 }
I0501 22:06:47.118573 325086 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0501 22:06:47.216173 325086 main.go:141] libmachine: SSH cmd err, output: : overlay
I0501 22:06:47.216183 325086 ubuntu.go:71] root file system type: overlay
I0501 22:06:47.216248 325086 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0501 22:06:47.216297 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 22:06:47.225338 325086 main.go:141] libmachine: Using SSH client type: native
I0501 22:06:47.225609 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32873 }
I0501 22:06:47.225654 325086 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0501 22:06:47.324439 325086 main.go:141] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0501 22:06:47.324484 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 22:06:47.332637 325086 main.go:141] libmachine: Using SSH client type: native
I0501 22:06:47.332900 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32873 }
I0501 22:06:47.332909 325086 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0501 22:06:48.404371 325086 main.go:141] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service	2023-03-27 16:16:18.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2023-05-02 02:06:47.322008981 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
 Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
 
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0501 22:06:48.404389 325086 machine.go:91] provisioned docker machine in 4.659973007s
I0501 22:06:48.404397 325086 client.go:171] LocalClient.Create took 7.86223524s
I0501 22:06:48.404406 325086 start.go:167] duration metric: libmachine.API.Create for "minikube" took 7.862259161s
I0501 22:06:48.404410 325086 start.go:300] post-start starting for "minikube" (driver="docker")
I0501 22:06:48.404416 325086 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0501 22:06:48.404466 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0501 22:06:48.404500 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 22:06:48.414486 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32873 SSHKeyPath:/home/shotler/.minikube/machines/minikube/id_rsa Username:docker}
I0501 22:06:48.492302 325086 ssh_runner.go:195] Run: cat /etc/os-release
I0501 22:06:48.493815 325086 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0501 22:06:48.493825 325086 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0501 22:06:48.493831 325086 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0501 22:06:48.493834 325086 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0501 22:06:48.493839 325086 filesync.go:126] Scanning /home/shotler/.minikube/addons for local assets ...
I0501 22:06:48.493863 325086 filesync.go:126] Scanning /home/shotler/.minikube/files for local assets ...
I0501 22:06:48.493872 325086 start.go:303] post-start completed in 89.459178ms
I0501 22:06:48.494072 325086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0501 22:06:48.502427 325086 profile.go:148] Saving config to /home/shotler/.minikube/profiles/minikube/config.json ...
I0501 22:06:48.502544 325086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0501 22:06:48.502568 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 22:06:48.510828 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32873 SSHKeyPath:/home/shotler/.minikube/machines/minikube/id_rsa Username:docker}
I0501 22:06:48.580182 325086 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0501 22:06:48.582109 325086 start.go:128] duration metric: createHost completed in 8.040886363s
I0501 22:06:48.582114 325086 start.go:83] releasing machines lock for "minikube", held for 8.040932503s
I0501 22:06:48.582145 325086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0501 22:06:48.590697 325086 ssh_runner.go:195] Run: cat /version.json
I0501 22:06:48.590717 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 22:06:48.590767 325086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0501 22:06:48.590804 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 22:06:48.598715 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32873 SSHKeyPath:/home/shotler/.minikube/machines/minikube/id_rsa Username:docker}
I0501 22:06:48.598981 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32873 SSHKeyPath:/home/shotler/.minikube/machines/minikube/id_rsa Username:docker}
W0501 22:06:48.667914 325086 out.go:239] ❗  Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.30.0
I0501 22:06:48.667976 325086 ssh_runner.go:195] Run: systemctl --version
I0501 22:06:49.111988 325086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0501 22:06:49.114651 325086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0501 22:06:49.126243 325086 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0501 22:06:49.126282 325086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0501 22:06:49.133834 325086 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0501 22:06:49.133841 325086 start.go:481] detecting cgroup driver to use...
I0501 22:06:49.133858 325086 detect.go:199] detected "systemd" cgroup driver on host os
I0501 22:06:49.133924 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0501 22:06:49.140489 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0501 22:06:49.144430 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0501 22:06:49.148381 325086 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
I0501 22:06:49.148402 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I0501 22:06:49.152515 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0501 22:06:49.156498 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0501 22:06:49.160453 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0501 22:06:49.164457 325086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0501 22:06:49.168215 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0501 22:06:49.172223 325086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0501 22:06:49.175315 325086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0501 22:06:49.178500 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0501 22:06:49.266087 325086 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0501 22:06:49.302189 325086 start.go:481] detecting cgroup driver to use...
I0501 22:06:49.302210 325086 detect.go:199] detected "systemd" cgroup driver on host os
I0501 22:06:49.302247 325086 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0501 22:06:49.307571 325086 cruntime.go:276] skipping containerd shutdown because we are bound to it
I0501 22:06:49.307605 325086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0501 22:06:49.312309 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0501 22:06:49.318541 325086 ssh_runner.go:195] Run: which cri-dockerd
I0501 22:06:49.320005 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0501 22:06:49.323006 325086 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0501 22:06:49.329755 325086 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0501 22:06:49.372901 325086 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0501 22:06:49.420582 325086 docker.go:538] configuring docker to use "systemd" as cgroup driver...
I0501 22:06:49.420592 325086 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes)
I0501 22:06:49.427378 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0501 22:06:49.472735 325086 ssh_runner.go:195] Run: sudo systemctl restart docker
I0501 22:06:50.746167 325086 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.273415736s)
I0501 22:06:50.746232 325086 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0501 22:06:50.826208 325086 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0501 22:06:50.868983 325086 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0501 22:06:50.920560 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0501 22:06:50.964990 325086 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0501 22:06:50.970992 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0501 22:06:51.000362 325086 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0501 22:06:51.086458 325086 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0501 22:06:51.086516 325086 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0501 22:06:51.088141 325086 start.go:549] Will wait 60s for crictl version
I0501 22:06:51.088169 325086 ssh_runner.go:195] Run: which crictl
I0501 22:06:51.089591 325086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0501 22:06:51.145158 325086 start.go:565] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 23.0.2
RuntimeApiVersion: v1alpha2
I0501 22:06:51.145191 325086 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0501 22:06:51.182034 325086 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0501 22:06:51.193861 325086 out.go:204] 🐳  Preparing Kubernetes v1.26.3 on Docker 23.0.2 ...
I0501 22:06:51.193902 325086 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0501 22:06:51.201996 325086 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0501 22:06:51.203514 325086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0501 22:06:51.208466 325086 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0501 22:06:51.208496 325086 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0501 22:06:51.217100 325086 docker.go:639] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0501 22:06:51.217106 325086 docker.go:569] Images already preloaded, skipping extraction
I0501 22:06:51.217130 325086 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0501 22:06:51.225558 325086 docker.go:639] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0501 22:06:51.225564 325086 cache_images.go:84] Images are preloaded, skipping loading
I0501 22:06:51.225590 325086 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0501 22:06:51.237107 325086 cni.go:84] Creating CNI manager for ""
I0501 22:06:51.237112 325086 cni.go:136] 1 nodes found, recommending kindnet
I0501 22:06:51.237257 325086 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0501 22:06:51.237268 325086 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0501 22:06:51.237344 325086 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0501 22:06:51.237383 325086 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config: {KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0501 22:06:51.237416 325086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
I0501 22:06:51.240834 325086 binaries.go:44] Found k8s binaries, skipping transfer
I0501 22:06:51.240862 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0501 22:06:51.244527 325086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
I0501 22:06:51.251233 325086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0501 22:06:51.257864 325086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2083 bytes)
I0501 22:06:51.264404 325086 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0501 22:06:51.265725 325086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0501 22:06:51.270357 325086 certs.go:56] Setting up /home/shotler/.minikube/profiles/minikube for IP: 192.168.49.2
I0501 22:06:51.270365 325086 certs.go:186] acquiring lock for shared ca certs: {Name:mk43a023f6ece43e69e883f266f2820beecb179f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 22:06:51.270443 325086 certs.go:195] skipping minikubeCA CA generation: /home/shotler/.minikube/ca.key
I0501 22:06:51.270477 325086 certs.go:195] skipping proxyClientCA CA generation: /home/shotler/.minikube/proxy-client-ca.key
I0501 22:06:51.270507 325086 certs.go:315] generating minikube-user signed cert: /home/shotler/.minikube/profiles/minikube/client.key
I0501 22:06:51.270514 325086 crypto.go:68] Generating cert /home/shotler/.minikube/profiles/minikube/client.crt with IP's: []
I0501 22:06:51.303034 325086 crypto.go:156] Writing cert to /home/shotler/.minikube/profiles/minikube/client.crt ...
I0501 22:06:51.303038 325086 lock.go:35] WriteFile acquiring /home/shotler/.minikube/profiles/minikube/client.crt: {Name:mk30b10e21ade04533b6a02ee41a8ffc3af80ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 22:06:51.303083 325086 crypto.go:164] Writing key to /home/shotler/.minikube/profiles/minikube/client.key ...
I0501 22:06:51.303086 325086 lock.go:35] WriteFile acquiring /home/shotler/.minikube/profiles/minikube/client.key: {Name:mk27a04827019485f90240c2d20fc2b6a9b0f96f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 22:06:51.303115 325086 certs.go:315] generating minikube signed cert: /home/shotler/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0501 22:06:51.303121 325086 crypto.go:68] Generating cert /home/shotler/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0501 22:06:51.351205 325086 crypto.go:156] Writing cert to /home/shotler/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0501 22:06:51.351208 325086 lock.go:35] WriteFile acquiring /home/shotler/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mka23cd624068f102323896965a40c0e31446b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 22:06:51.351246 325086 crypto.go:164] Writing key to /home/shotler/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
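The apiserver certificate being generated here is signed for the node IP (192.168.49.2), the in-cluster service VIP (10.96.0.1), and the loopback/control-plane addresses. For reference, the SANs baked into the finished certificate can be checked from the host with a stock openssl invocation (paths taken from this log):

    openssl x509 -noout -text \
      -in /home/shotler/.minikube/profiles/minikube/apiserver.crt \
      | grep -A1 'Subject Alternative Name'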
I0501 22:06:51.351248 325086 lock.go:35] WriteFile acquiring /home/shotler/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk6d41ee1c94b4414e3431cc29d7a432ef7b027f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 22:06:51.351272 325086 certs.go:333] copying /home/shotler/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/shotler/.minikube/profiles/minikube/apiserver.crt
I0501 22:06:51.351310 325086 certs.go:337] copying /home/shotler/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/shotler/.minikube/profiles/minikube/apiserver.key
I0501 22:06:51.351333 325086 certs.go:315] generating aggregator signed cert: /home/shotler/.minikube/profiles/minikube/proxy-client.key
I0501 22:06:51.351337 325086 crypto.go:68] Generating cert /home/shotler/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0501 22:06:51.409792 325086 crypto.go:156] Writing cert to /home/shotler/.minikube/profiles/minikube/proxy-client.crt ...
I0501 22:06:51.409795 325086 lock.go:35] WriteFile acquiring /home/shotler/.minikube/profiles/minikube/proxy-client.crt: {Name:mk90806919a638735219b777a81fcb8868997e7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 22:06:51.409832 325086 crypto.go:164] Writing key to /home/shotler/.minikube/profiles/minikube/proxy-client.key ...
I0501 22:06:51.409834 325086 lock.go:35] WriteFile acquiring /home/shotler/.minikube/profiles/minikube/proxy-client.key: {Name:mke180aaa93db140b3cb96ed2e6e0d0ab06d96f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 22:06:51.409908 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/ca-key.pem (1679 bytes)
I0501 22:06:51.409920 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/ca.pem (1078 bytes)
I0501 22:06:51.409929 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/cert.pem (1123 bytes)
I0501 22:06:51.409938 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/key.pem (1679 bytes)
I0501 22:06:51.410221 325086 ssh_runner.go:362] scp /home/shotler/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0501 22:06:51.420009 325086 ssh_runner.go:362] scp /home/shotler/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0501 22:06:51.428626 325086 ssh_runner.go:362] scp /home/shotler/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0501 22:06:51.437170 325086 ssh_runner.go:362] scp /home/shotler/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0501 22:06:51.446284 325086 ssh_runner.go:362] scp /home/shotler/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0501 22:06:51.455043 325086 ssh_runner.go:362] scp /home/shotler/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0501 22:06:51.463712 325086 ssh_runner.go:362] scp /home/shotler/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0501 22:06:51.472162 325086 ssh_runner.go:362] scp /home/shotler/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0501 22:06:51.480803 325086 ssh_runner.go:362] scp /home/shotler/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0501 22:06:51.489451 325086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0501 22:06:51.495775 325086 ssh_runner.go:195] Run: openssl version
I0501 22:06:51.499113 325086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0501 22:06:51.502791 325086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0501 22:06:51.504288 325086 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jun 6 2022 /usr/share/ca-certificates/minikubeCA.pem
I0501 22:06:51.504310 325086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0501 22:06:51.506434 325086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0501 22:06:51.510399 325086 kubeadm.go:401] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/shotler:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0501 22:06:51.510464 325086 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0501 22:06:51.518665 325086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0501 22:06:51.521822 325086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0501 22:06:51.524913 325086 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0501 22:06:51.524947 325086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0501 22:06:51.527948 325086 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0501 22:06:51.527959 325086 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0501 22:06:51.551722 325086 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3
I0501 22:06:51.551752 325086 kubeadm.go:322] [preflight] Running pre-flight checks
I0501 22:06:51.571295 325086 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
I0501 22:06:51.571338 325086 kubeadm.go:322] KERNEL_VERSION: 5.19.0-41-generic
I0501 22:06:51.571367 325086 kubeadm.go:322] OS: Linux
I0501 22:06:51.571401 325086 kubeadm.go:322] CGROUPS_CPU: enabled
I0501 22:06:51.571427 325086 kubeadm.go:322] CGROUPS_CPUSET: enabled
I0501 22:06:51.571455 325086 kubeadm.go:322] CGROUPS_DEVICES: enabled
I0501 22:06:51.571481 325086 kubeadm.go:322] CGROUPS_FREEZER: enabled
I0501 22:06:51.571508 325086 kubeadm.go:322] CGROUPS_MEMORY: enabled
I0501 22:06:51.571534 325086 kubeadm.go:322] CGROUPS_PIDS: enabled
I0501 22:06:51.571566 325086 kubeadm.go:322] CGROUPS_HUGETLB: enabled
I0501 22:06:51.571590 325086 kubeadm.go:322] CGROUPS_IO: enabled
I0501 22:06:51.604417 325086 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0501 22:06:51.604468 325086 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0501 22:06:51.604524 325086 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0501 22:06:51.666241 325086 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0501 22:06:51.667036 325086 out.go:204] ▪ Generating certificates and keys ...
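The b5213941.0 symlink created just above follows OpenSSL's subject-hash convention: TLS libraries look up a CA in /etc/ssl/certs by the hash of its subject name, so minikube computes the hash and links it to the installed minikubeCA.pem. The equivalent done by hand (same paths as in this log) would be roughly:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"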
I0501 22:06:51.667086 325086 kubeadm.go:322] [certs] Using existing ca certificate authority
I0501 22:06:51.667132 325086 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0501 22:06:51.776868 325086 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0501 22:06:51.857165 325086 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0501 22:06:51.922542 325086 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0501 22:06:51.999477 325086 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0501 22:06:52.117227 325086 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0501 22:06:52.117324 325086 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I0501 22:06:52.171516 325086 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0501 22:06:52.171594 325086 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I0501 22:06:52.387220 325086 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0501 22:06:52.426721 325086 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0501 22:06:52.524088 325086 kubeadm.go:322] [certs] Generating "sa" key and public key
I0501 22:06:52.524157 325086 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0501 22:06:52.643946 325086 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0501 22:06:52.777753 325086 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0501 22:06:52.874612 325086 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0501 22:06:53.027539 325086 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0501 22:06:53.035460 325086 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0501 22:06:53.036006 325086 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0501 22:06:53.036037 325086 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0501 22:06:53.140176 325086 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0501 22:06:53.140959 325086 out.go:204] ▪ Booting up control plane ...
I0501 22:06:53.141013 325086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0501 22:06:53.141734 325086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0501 22:06:53.142176 325086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0501 22:06:53.142477 325086 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0501 22:06:53.143348 325086 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0501 22:06:57.144999 325086 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001574 seconds
I0501 22:06:57.145074 325086 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0501 22:06:57.151398 325086 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0501 22:06:57.660968 325086 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0501 22:06:57.661081 325086 kubeadm.go:322] [mark-control-plane] Marking the node minikube as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0501 22:06:58.165740 325086 kubeadm.go:322] [bootstrap-token] Using token: 82prwx.7a9f1zwjfpf7o47q
I0501 22:06:58.166401 325086 out.go:204] ▪ Configuring RBAC rules ...
I0501 22:06:58.166464 325086 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0501 22:06:58.168261 325086 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0501 22:06:58.171062 325086 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0501 22:06:58.172147 325086 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0501 22:06:58.173315 325086 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0501 22:06:58.174567 325086 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0501 22:06:58.179837 325086 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0501 22:06:58.356486 325086 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0501 22:06:58.570168 325086 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0501 22:06:58.570696 325086 kubeadm.go:322] 
I0501 22:06:58.570738 325086 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0501 22:06:58.570741 325086 kubeadm.go:322] 
I0501 22:06:58.570797 325086 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0501 22:06:58.570801 325086 kubeadm.go:322] 
I0501 22:06:58.570824 325086 kubeadm.go:322] mkdir -p $HOME/.kube
I0501 22:06:58.570870 325086 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0501 22:06:58.570906 325086 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0501 22:06:58.570910 325086 kubeadm.go:322] 
I0501 22:06:58.570940 325086 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0501 22:06:58.570943 325086 kubeadm.go:322] 
I0501 22:06:58.570967 325086 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0501 22:06:58.570969 325086 kubeadm.go:322] 
I0501 22:06:58.571004 325086 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0501 22:06:58.571047 325086 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0501 22:06:58.571085 325086 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0501 22:06:58.571087 325086 kubeadm.go:322] 
I0501 22:06:58.571135 325086 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0501 22:06:58.571177 325086 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0501 22:06:58.571179 325086 kubeadm.go:322] 
I0501 22:06:58.571225 325086 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 82prwx.7a9f1zwjfpf7o47q \
I0501 22:06:58.571281 325086 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:81728b40cb9e144c74fd202995a7b968e8e8d9466836d0b2a2055572f24b52ed \
I0501 22:06:58.571292 325086 kubeadm.go:322] --control-plane
I0501 22:06:58.571294 325086 kubeadm.go:322] 
I0501 22:06:58.571341 325086 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0501 22:06:58.571343 325086 kubeadm.go:322] 
I0501 22:06:58.571388 325086 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 82prwx.7a9f1zwjfpf7o47q \
I0501 22:06:58.571444 325086 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:81728b40cb9e144c74fd202995a7b968e8e8d9466836d0b2a2055572f24b52ed
I0501 22:06:58.573155 325086 kubeadm.go:322] W0502 02:06:51.547042 1415 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0501 22:06:58.573228 325086 kubeadm.go:322] [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
I0501 22:06:58.573343 325086 kubeadm.go:322] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.19.0-41-generic\n", err: exit status 1
I0501 22:06:58.573403 325086 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0501 22:06:58.573415 325086 cni.go:84] Creating CNI manager for ""
I0501 22:06:58.573421 325086 cni.go:136] 1 nodes found, recommending kindnet
I0501 22:06:58.574084 325086 out.go:177] 🔗 Configuring CNI (Container Networking Interface) ...
I0501 22:06:58.575032 325086 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0501 22:06:58.577044 325086 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.3/kubectl ...
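The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 digest of the cluster CA's public key. If it ever needs to be re-derived (for example to join another worker later), the standard kubeadm recipe works against the same CA that minikube keeps on the host (the copy scp'd into the node earlier in this log):

    openssl x509 -pubkey -in /home/shotler/.minikube/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'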
I0501 22:06:58.577049 325086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0501 22:06:58.583894 325086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0501 22:06:58.925046 325086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0501 22:06:58.925084 325086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0501 22:06:58.925084 325086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl label nodes minikube.k8s.io/version=v1.30.0 minikube.k8s.io/commit=ba4594e7b78814fd52a9376decb9c3d59c133712 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2023_05_01T22_06_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0501 22:06:58.960957 325086 kubeadm.go:1073] duration metric: took 35.908182ms to wait for elevateKubeSystemPrivileges.
I0501 22:06:58.960977 325086 ops.go:34] apiserver oom_adj: -16
I0501 22:06:58.965222 325086 kubeadm.go:403] StartCluster complete in 7.454821675s
I0501 22:06:58.965233 325086 settings.go:142] acquiring lock: {Name:mk155e1454d091eab768151c5488fd7bd4f1db7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 22:06:58.965268 325086 settings.go:150] Updating kubeconfig: /home/shotler/.kube/config
I0501 22:06:58.965663 325086 lock.go:35] WriteFile acquiring /home/shotler/.kube/config: {Name:mkd01185e566d13d204e07848aaade14cba16b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 22:06:58.965757 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0501 22:06:58.965872 325086 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0501 22:06:58.965911 325086 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0501 22:06:58.965915 325086 addons.go:66] Setting default-storageclass=true in profile "minikube"
I0501 22:06:58.965914 325086 addons.go:66] Setting storage-provisioner=true in profile "minikube"
I0501 22:06:58.965924 325086 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0501 22:06:58.965929 325086 addons.go:228] Setting addon storage-provisioner=true in "minikube"
I0501 22:06:58.965960 325086 host.go:66] Checking if "minikube" exists ...
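The kubectl label invocation above stamps every node with minikube.k8s.io/* metadata (version, commit, profile name, primary flag) so later minikube commands can identify the primary control plane. Once the cluster is up, these can be inspected with plain kubectl pointed at this profile:

    kubectl get node minikube --show-labels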
I0501 22:06:58.966212 325086 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0501 22:06:58.966313 325086 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0501 22:06:58.976843 325086 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0501 22:06:58.977814 325086 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0501 22:06:58.977818 325086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0501 22:06:58.977846 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 22:06:58.978446 325086 addons.go:228] Setting addon default-storageclass=true in "minikube"
I0501 22:06:58.978466 325086 host.go:66] Checking if "minikube" exists ...
I0501 22:06:58.978705 325086 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0501 22:06:58.986951 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32873 SSHKeyPath:/home/shotler/.minikube/machines/minikube/id_rsa Username:docker}
I0501 22:06:58.987496 325086 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
I0501 22:06:58.987502 325086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0501 22:06:58.987536 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 22:06:58.997594 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32873 SSHKeyPath:/home/shotler/.minikube/machines/minikube/id_rsa Username:docker}
I0501 22:06:59.003248 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0501 22:06:59.060975 325086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0501 22:06:59.073470 325086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0501 22:06:59.091547 325086 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0501 22:06:59.167166 325086 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0501 22:06:59.168063 325086 addons.go:499] enable addons completed in 202.194761ms: enabled=[storage-provisioner default-storageclass]
I0501 22:06:59.479654 325086 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0501 22:06:59.479673 325086 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0501 22:06:59.480307 325086 out.go:177] 🔎 Verifying Kubernetes components...
I0501 22:06:59.481264 325086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0501 22:06:59.487533 325086 api_server.go:51] waiting for apiserver process to appear ...
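The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway from inside pods. Reconstructed from the sed expression (not captured from the cluster), the patched Corefile gains a block like this ahead of the forward plugin:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }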
I0501 22:06:59.487570 325086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0501 22:06:59.492245 325086 api_server.go:71] duration metric: took 12.552041ms to wait for apiserver process to appear ...
I0501 22:06:59.492251 325086 api_server.go:87] waiting for apiserver healthz status ...
I0501 22:06:59.492258 325086 api_server.go:252] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0501 22:06:59.495658 325086 api_server.go:278] https://192.168.49.2:8443/healthz returned 200: ok
I0501 22:06:59.496057 325086 api_server.go:140] control plane version: v1.26.3
I0501 22:06:59.496063 325086 api_server.go:130] duration metric: took 3.809278ms to wait for apiserver health ...
I0501 22:06:59.496071 325086 system_pods.go:43] waiting for kube-system pods to appear ...
I0501 22:06:59.499194 325086 system_pods.go:59] 5 kube-system pods found
I0501 22:06:59.499203 325086 system_pods.go:61] "etcd-minikube" [22659361-51fe-4a99-b8c0-de00b47a47f8] Pending
I0501 22:06:59.499206 325086 system_pods.go:61] "kube-apiserver-minikube" [35c6f542-2616-4fc2-852f-a9c9f8b6dac5] Pending
I0501 22:06:59.499210 325086 system_pods.go:61] "kube-controller-manager-minikube" [7960140c-7438-4327-82df-032fcb12ea4d] Pending
I0501 22:06:59.499212 325086 system_pods.go:61] "kube-scheduler-minikube" [c7d4eca0-745e-400b-9328-4a534c702a6e] Pending
I0501 22:06:59.499217 325086 system_pods.go:61] "storage-provisioner" [c9d59bc4-0a57-4dd3-8eb9-d78c85cb633a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
I0501 22:06:59.499220 325086 system_pods.go:74] duration metric: took 3.146151ms to wait for pod list to return data ...
I0501 22:06:59.499226 325086 kubeadm.go:578] duration metric: took 19.534381ms to wait for : map[apiserver:true system_pods:true] ...
I0501 22:06:59.499233 325086 node_conditions.go:102] verifying NodePressure condition ...
I0501 22:06:59.500653 325086 node_conditions.go:122] node storage ephemeral capacity is 702277920Ki
I0501 22:06:59.500662 325086 node_conditions.go:123] node cpu capacity is 32
I0501 22:06:59.500667 325086 node_conditions.go:105] duration metric: took 1.432219ms to run NodePressure ...
I0501 22:06:59.500674 325086 start.go:228] waiting for startup goroutines ...
I0501 22:06:59.500677 325086 start.go:233] waiting for cluster config update ...
I0501 22:06:59.500683 325086 start.go:242] writing updated cluster config ...
I0501 22:06:59.501369 325086 out.go:177] 
I0501 22:06:59.501921 325086 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0501 22:06:59.501952 325086 profile.go:148] Saving config to /home/shotler/.minikube/profiles/minikube/config.json ...
I0501 22:06:59.502636 325086 out.go:177] 👍 Starting worker node minikube-m02 in cluster minikube
I0501 22:06:59.503131 325086 cache.go:120] Beginning downloading kic base image for docker with docker
I0501 22:06:59.503789 325086 out.go:177] 🚜 Pulling base image ...
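The readiness poll above is just an HTTPS GET against the apiserver's /healthz endpoint, which is open to anonymous requests by default. The same check can be run by hand from the host; -k skips verification of the minikubeCA-signed serving cert, and a healthy apiserver should answer with "ok":

    curl -k https://192.168.49.2:8443/healthz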
I0501 22:06:59.504700 325086 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0501 22:06:59.504707 325086 cache.go:57] Caching tarball of preloaded images
I0501 22:06:59.504737 325086 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 in local docker daemon
I0501 22:06:59.504750 325086 preload.go:174] Found /home/shotler/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0501 22:06:59.504755 325086 cache.go:60] Finished verifying existence of preloaded tar for v1.26.3 on docker
I0501 22:06:59.504803 325086 profile.go:148] Saving config to /home/shotler/.minikube/profiles/minikube/config.json ...
I0501 22:06:59.514149 325086 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 in local docker daemon, skipping pull
I0501 22:06:59.514158 325086 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 exists in daemon, skipping load
I0501 22:06:59.514170 325086 cache.go:193] Successfully downloaded all kic artifacts
I0501 22:06:59.514188 325086 start.go:364] acquiring machines lock for minikube-m02: {Name:mk525c09eea4c47e51a3fa5d6c2ba158b7748e6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0501 22:06:59.514222 325086 start.go:368] acquired machines lock for "minikube-m02" in 25.31µs
I0501 22:06:59.514230 325086 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/shotler:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true}
I0501 22:06:59.514273 325086 start.go:125] createHost starting for "m02" (driver="docker")
I0501 22:06:59.515408 325086 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
I0501 22:06:59.515459 325086 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
I0501 22:06:59.515467 325086 client.go:168] LocalClient.Create starting
I0501 22:06:59.515489 325086 main.go:141] libmachine: Reading certificate data from /home/shotler/.minikube/certs/ca.pem
I0501 22:06:59.515502 325086 main.go:141] libmachine: Decoding PEM data...
I0501 22:06:59.515510 325086 main.go:141] libmachine: Parsing certificate...
I0501 22:06:59.515540 325086 main.go:141] libmachine: Reading certificate data from /home/shotler/.minikube/certs/cert.pem
I0501 22:06:59.515547 325086 main.go:141] libmachine: Decoding PEM data...
I0501 22:06:59.515551 325086 main.go:141] libmachine: Parsing certificate...
I0501 22:06:59.515672 325086 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0501 22:06:59.523366 325086 network_create.go:76] Found existing network {name:minikube subnet:0xc0010c8f60 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
I0501 22:06:59.523377 325086 kic.go:117] calculated static IP "192.168.49.3" for the "minikube-m02" container
I0501 22:06:59.523423 325086 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0501 22:06:59.531664 325086 cli_runner.go:164] Run: docker volume create minikube-m02 --label name.minikube.sigs.k8s.io=minikube-m02 --label created_by.minikube.sigs.k8s.io=true
I0501 22:06:59.538877 325086 oci.go:103] Successfully created a docker volume minikube-m02
I0501 22:06:59.538909 325086 cli_runner.go:164] Run: docker run --rm --name minikube-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube-m02 --entrypoint /usr/bin/test -v minikube-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 -d /var/lib
I0501 22:07:00.008349 325086 oci.go:107] Successfully prepared a docker volume minikube-m02
I0501 22:07:00.008369 325086 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
I0501 22:07:00.008381 325086 kic.go:190] Starting extracting preloaded images to volume ...
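The static IP 192.168.49.3 calculated for minikube-m02 above is simply the next free address on the existing `minikube` Docker network (the control plane holds .2). The allocation can be confirmed from the host with a Go-template query; `println` is a standard text/template function:

    docker network inspect minikube \
      --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}'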
I0501 22:07:00.008430 325086 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/shotler/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 -I lz4 -xf /preloaded.tar -C /extractDir
I0501 22:07:01.983512 325086 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/shotler/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 -I lz4 -xf /preloaded.tar -C /extractDir: (1.975059816s)
I0501 22:07:01.983529 325086 kic.go:199] duration metric: took 1.975145 seconds to extract preloaded images to volume
W0501 22:07:01.983582 325086 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0501 22:07:01.983598 325086 oci.go:240] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0501 22:07:01.983634 325086 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0501 22:07:02.006545 325086 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube-m02 --name minikube-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube-m02 --network minikube --ip 192.168.49.3 --volume minikube-m02:/var --security-opt apparmor=unconfined --memory=2200mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93
I0501 22:07:02.303363 325086 cli_runner.go:164] Run: docker container inspect minikube-m02 --format={{.State.Running}}
I0501 22:07:02.311811 325086 cli_runner.go:164] Run: docker container inspect minikube-m02 --format={{.State.Status}}
I0501 22:07:02.319144 325086 cli_runner.go:164] Run: docker exec minikube-m02 stat /var/lib/dpkg/alternatives/iptables
I0501 22:07:02.379068 325086 oci.go:144] the created container "minikube-m02" has a running status.
I0501 22:07:02.379080 325086 kic.go:221] Creating ssh key for kic: /home/shotler/.minikube/machines/minikube-m02/id_rsa...
I0501 22:07:02.461617 325086 kic_runner.go:191] docker (temp): /home/shotler/.minikube/machines/minikube-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0501 22:07:02.496577 325086 cli_runner.go:164] Run: docker container inspect minikube-m02 --format={{.State.Status}}
I0501 22:07:02.506286 325086 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0501 22:07:02.506301 325086 kic_runner.go:114] Args: [docker exec --privileged minikube-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
I0501 22:07:02.571309 325086 cli_runner.go:164] Run: docker container inspect minikube-m02 --format={{.State.Status}}
I0501 22:07:02.579879 325086 machine.go:88] provisioning docker machine ...
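The docker run above publishes the node's SSH, Docker, and apiserver ports on ephemeral localhost ports (--publish=127.0.0.1::22 and friends); minikube discovers the bound ports afterwards via docker container inspect. The mapping can also be listed directly; the SSH port that appears later in this log (32878) comes from exactly this table:

    docker port minikube-m02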
I0501 22:07:02.579893 325086 ubuntu.go:169] provisioning hostname "minikube-m02"
I0501 22:07:02.579932 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0501 22:07:02.588586 325086 main.go:141] libmachine: Using SSH client type: native
I0501 22:07:02.589011 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32878 }
I0501 22:07:02.589019 325086 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube-m02 && echo "minikube-m02" | sudo tee /etc/hostname
I0501 22:07:02.589426 325086 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58118->127.0.0.1:32878: read: connection reset by peer
I0501 22:07:05.688131 325086 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube-m02
I0501 22:07:05.688182 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0501 22:07:05.696605 325086 main.go:141] libmachine: Using SSH client type: native
I0501 22:07:05.696865 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32878 }
I0501 22:07:05.696873 325086 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sminikube-m02' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube-m02/g' /etc/hosts;
	else
		echo '127.0.1.1 minikube-m02' | sudo tee -a /etc/hosts;
	fi
fi
I0501 22:07:05.791726 325086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0501 22:07:05.791737 325086 ubuntu.go:175] set auth options {CertDir:/home/shotler/.minikube CaCertPath:/home/shotler/.minikube/certs/ca.pem CaPrivateKeyPath:/home/shotler/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/shotler/.minikube/machines/server.pem ServerKeyPath:/home/shotler/.minikube/machines/server-key.pem ClientKeyPath:/home/shotler/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/shotler/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/shotler/.minikube}
I0501 22:07:05.791745 325086 ubuntu.go:177] setting up certificates
I0501 22:07:05.791750 325086 provision.go:83] configureAuth start
I0501 22:07:05.791797 325086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m02
I0501 22:07:05.799726 325086 provision.go:138] copyHostCerts
I0501 22:07:05.799743 325086 exec_runner.go:144] found /home/shotler/.minikube/key.pem, removing ...
I0501 22:07:05.799745 325086 exec_runner.go:207] rm: /home/shotler/.minikube/key.pem
I0501 22:07:05.799790 325086 exec_runner.go:151] cp: /home/shotler/.minikube/certs/key.pem --> /home/shotler/.minikube/key.pem (1679 bytes)
I0501 22:07:05.799835 325086 exec_runner.go:144] found /home/shotler/.minikube/ca.pem, removing ...
I0501 22:07:05.799837 325086 exec_runner.go:207] rm: /home/shotler/.minikube/ca.pem
I0501 22:07:05.799854 325086 exec_runner.go:151] cp: /home/shotler/.minikube/certs/ca.pem --> /home/shotler/.minikube/ca.pem (1078 bytes)
I0501 22:07:05.799883 325086 exec_runner.go:144] found /home/shotler/.minikube/cert.pem, removing ...
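Provisioning talks to the new node over that forwarded SSH port with the generated machine key. The same session can be opened by hand (the port, 32878 in this run, is ephemeral and changes every time the container is recreated):

    ssh -i /home/shotler/.minikube/machines/minikube-m02/id_rsa \
        -p 32878 docker@127.0.0.1 hostname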
I0501 22:07:05.799885 325086 exec_runner.go:207] rm: /home/shotler/.minikube/cert.pem
I0501 22:07:05.799900 325086 exec_runner.go:151] cp: /home/shotler/.minikube/certs/cert.pem --> /home/shotler/.minikube/cert.pem (1123 bytes)
I0501 22:07:05.799925 325086 provision.go:112] generating server cert: /home/shotler/.minikube/machines/server.pem ca-key=/home/shotler/.minikube/certs/ca.pem private-key=/home/shotler/.minikube/certs/ca-key.pem org=shotler.minikube-m02 san=[192.168.49.3 127.0.0.1 localhost 127.0.0.1 minikube minikube-m02]
I0501 22:07:05.898418 325086 provision.go:172] copyRemoteCerts
I0501 22:07:05.898454 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0501 22:07:05.898480 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0501 22:07:05.907241 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m02/id_rsa Username:docker}
I0501 22:07:05.981804 325086 ssh_runner.go:362] scp /home/shotler/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0501 22:07:05.991258 325086 ssh_runner.go:362] scp /home/shotler/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
I0501 22:07:06.000431 325086 ssh_runner.go:362] scp /home/shotler/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0501 22:07:06.009463 325086 provision.go:86] duration metric: configureAuth took 217.708ms
I0501 22:07:06.009469 325086 ubuntu.go:193] setting minikube options for container-runtime
I0501 22:07:06.009567 325086 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0501 22:07:06.009606 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0501 22:07:06.017997 325086 main.go:141] libmachine: Using SSH client type: native
I0501 22:07:06.018263 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32878 }
I0501 22:07:06.018268 325086 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0501 22:07:06.112239 325086 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0501 22:07:06.112247 325086 ubuntu.go:71] root file system type: overlay
I0501 22:07:06.112317 325086 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0501 22:07:06.112360 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0501 22:07:06.121491 325086 main.go:141] libmachine: Using SSH client type: native
I0501 22:07:06.121705 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32878 }
I0501 22:07:06.121744 325086 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.49.2"

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0501 22:07:06.221504 325086 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.49.2

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0501 22:07:06.221568 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0501 22:07:06.230588 325086 main.go:141] libmachine: Using SSH client type: native
I0501 22:07:06.230878 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32878 }
I0501 22:07:06.230887 325086 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0501 22:07:07.203213 325086 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-03-27 16:16:18.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2023-05-02 02:07:06.214245374 +0000
@@ -1,30 +1,33 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
 Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure

-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Environment=NO_PROXY=192.168.49.2

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +35,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500

 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0501 22:07:07.203225 325086 machine.go:91] provisioned docker machine in 4.62334016s
I0501 22:07:07.203231 325086 client.go:171] LocalClient.Create took 7.687762023s
I0501 22:07:07.203241 325086 start.go:167] duration metric: libmachine.API.Create for "minikube" took 7.687780973s
I0501 22:07:07.203245 325086 start.go:300] post-start starting for "minikube-m02" (driver="docker")
I0501 22:07:07.203248 325086 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0501 22:07:07.203294 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0501 22:07:07.203323 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02
I0501 22:07:07.212671 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m02/id_rsa Username:docker}
I0501 22:07:07.281795 325086 ssh_runner.go:195] Run: cat /etc/os-release
I0501 22:07:07.283044 325086 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0501 22:07:07.283053 325086 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0501 22:07:07.283058 325086 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0501 22:07:07.283061 325086 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0501 22:07:07.283065 325086 filesync.go:126] Scanning /home/shotler/.minikube/addons for local assets ...
I0501 22:07:07.283091 325086 filesync.go:126] Scanning /home/shotler/.minikube/files for local assets ...
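The diff-and-swap pattern above only replaces the unit and restarts docker when the rendered file actually differs, which keeps provisioning idempotent. Whether the drop-in took effect (TLS on 2376, the insecure service-CIDR registry, the nofile ulimit) can be verified after the fact through minikube's own ssh wrapper:

    minikube ssh -n minikube-m02 -- sudo systemctl cat docker
    minikube ssh -n minikube-m02 -- sudo systemctl is-active docker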
I0501 22:07:07.283101 325086 start.go:303] post-start completed in 79.853472ms I0501 22:07:07.283300 325086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m02 I0501 22:07:07.291274 325086 profile.go:148] Saving config to /home/shotler/.minikube/profiles/minikube/config.json ... I0501 22:07:07.291397 325086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0501 22:07:07.291420 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02 I0501 22:07:07.298571 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m02/id_rsa Username:docker} I0501 22:07:07.368321 325086 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'" I0501 22:07:07.370095 325086 start.go:128] duration metric: createHost completed in 7.855818294s I0501 22:07:07.370099 325086 start.go:83] releasing machines lock for "minikube-m02", held for 7.855873475s I0501 22:07:07.370129 325086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m02 I0501 22:07:07.378704 325086 out.go:177] 🌐 Found network options: I0501 22:07:07.379334 325086 out.go:177] ▪ NO_PROXY=192.168.49.2 W0501 22:07:07.379827 325086 proxy.go:119] fail to check proxy env: Error ip not in block W0501 22:07:07.379837 325086 proxy.go:119] fail to check proxy env: Error ip not in block I0501 22:07:07.379868 325086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*" I0501 22:07:07.379885 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02 I0501 22:07:07.379929 325086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/ I0501 22:07:07.379962 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m02 I0501 22:07:07.388482 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m02/id_rsa Username:docker} I0501 22:07:07.388835 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32878 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m02/id_rsa Username:docker} I0501 22:07:07.456240 325086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ; I0501 22:07:07.741214 325086 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found I0501 22:07:07.741263 325086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ; I0501 22:07:07.749221 325086 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s) I0501 22:07:07.749230 325086 start.go:481] detecting cgroup driver to use... 
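Here the loopback CNI config is patched in place (adding a "name" field and pinning cniVersion to 1.0.0) and any bridge/podman configs are disabled by renaming them with a .mk_disabled suffix; in this run that caught /etc/cni/net.d/100-crio-bridge.conf. The result is easy to eyeball from the host, again assuming the node container name from this log:

    # disabled configs keep their contents, only the filename changes
    docker exec minikube-m02 ls -la /etc/cni/net.d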
I0501 22:07:07.749252 325086 detect.go:199] detected "systemd" cgroup driver on host os I0501 22:07:07.749316 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock " | sudo tee /etc/crictl.yaml" I0501 22:07:07.756419 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml" I0501 22:07:07.760446 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml" I0501 22:07:07.764324 325086 containerd.go:145] configuring containerd to use "systemd" as cgroup driver... I0501 22:07:07.764346 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml" I0501 22:07:07.768409 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml" I0501 22:07:07.772259 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml" I0501 22:07:07.775869 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml" I0501 22:07:07.779740 325086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk" I0501 22:07:07.783369 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml" I0501 22:07:07.787104 325086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables I0501 22:07:07.790349 325086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" I0501 22:07:07.793540 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0501 22:07:07.862089 325086 ssh_runner.go:195] Run: sudo systemctl restart containerd I0501 22:07:07.896265 325086 start.go:481] detecting cgroup driver to use... I0501 22:07:07.896286 325086 detect.go:199] detected "systemd" cgroup driver on host os I0501 22:07:07.896319 325086 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0501 22:07:07.901776 325086 cruntime.go:276] skipping containerd shutdown because we are bound to it I0501 22:07:07.901806 325086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0501 22:07:07.907115 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee /etc/crictl.yaml" I0501 22:07:07.914236 325086 ssh_runner.go:195] Run: which cri-dockerd I0501 22:07:07.915526 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d I0501 22:07:07.919427 325086 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes) I0501 22:07:07.926766 325086 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0501 22:07:07.980956 325086 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0501 22:07:08.032687 325086 docker.go:538] configuring docker to use "systemd" as cgroup driver... 
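The sed edits above switch containerd to SystemdCgroup = true and the runc v2 runtime, and /etc/crictl.yaml is written twice: first pointing at containerd.sock, then at cri-dockerd.sock once minikube commits to Docker as the runtime. A quick sanity check of the final state might look like:

    docker exec minikube-m02 cat /etc/crictl.yaml
    docker exec minikube-m02 grep SystemdCgroup /etc/containerd/config.toml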
I0501 22:07:08.032698 325086 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes) I0501 22:07:08.039353 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0501 22:07:08.084223 325086 ssh_runner.go:195] Run: sudo systemctl restart docker I0501 22:07:09.417300 325086 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.333060821s) I0501 22:07:09.417348 325086 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0501 22:07:09.503051 325086 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket I0501 22:07:09.553822 325086 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0501 22:07:09.600902 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0501 22:07:09.644934 325086 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket I0501 22:07:09.650769 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0501 22:07:09.700982 325086 ssh_runner.go:195] Run: sudo systemctl restart cri-docker I0501 22:07:09.731929 325086 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock I0501 22:07:09.731974 325086 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock I0501 22:07:09.733686 325086 start.go:549] Will wait 60s for crictl version I0501 22:07:09.733720 325086 ssh_runner.go:195] Run: which crictl I0501 22:07:09.735021 325086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version I0501 22:07:09.750370 325086 start.go:565] Version: 0.1.0 RuntimeName: docker RuntimeVersion: 23.0.2 RuntimeApiVersion: v1alpha2 I0501 22:07:09.750402 325086 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0501 22:07:09.761541 325086 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0501 22:07:09.772355 325086 out.go:204] 🐳 Preparing Kubernetes v1.26.3 on Docker 23.0.2 ... 
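The 143-byte /etc/docker/daemon.json copied above is what actually carries the cgroup-driver setting for dockerd; its contents are not echoed into this log, but they can be dumped from the node if needed:

    docker exec minikube-m02 cat /etc/docker/daemon.json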
I0501 22:07:09.772869 325086 out.go:177] ▪ env NO_PROXY=192.168.49.2 I0501 22:07:09.773383 325086 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0501 22:07:09.782063 325086 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I0501 22:07:09.783739 325086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0501 22:07:09.788567 325086 certs.go:56] Setting up /home/shotler/.minikube/profiles/minikube for IP: 192.168.49.3 I0501 22:07:09.788576 325086 certs.go:186] acquiring lock for shared ca certs: {Name:mk43a023f6ece43e69e883f266f2820beecb179f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0501 22:07:09.788642 325086 certs.go:195] skipping minikubeCA CA generation: /home/shotler/.minikube/ca.key I0501 22:07:09.788662 325086 certs.go:195] skipping proxyClientCA CA generation: /home/shotler/.minikube/proxy-client-ca.key I0501 22:07:09.788703 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/ca-key.pem (1679 bytes) I0501 22:07:09.788716 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/ca.pem (1078 bytes) I0501 22:07:09.788728 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/cert.pem (1123 bytes) I0501 22:07:09.788740 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/key.pem (1679 bytes) I0501 22:07:09.788936 325086 ssh_runner.go:362] scp /home/shotler/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0501 22:07:09.798655 325086 ssh_runner.go:362] scp /home/shotler/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0501 22:07:09.807309 325086 ssh_runner.go:362] scp /home/shotler/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0501 22:07:09.815729 325086 ssh_runner.go:362] scp /home/shotler/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0501 22:07:09.824636 325086 ssh_runner.go:362] scp /home/shotler/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0501 22:07:09.833300 325086 ssh_runner.go:195] Run: openssl version I0501 22:07:09.835663 325086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0501 22:07:09.839513 325086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0501 22:07:09.841014 325086 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jun 6 2022 /usr/share/ca-certificates/minikubeCA.pem I0501 22:07:09.841041 325086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0501 22:07:09.843285 325086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0501 22:07:09.846944 325086 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0501 22:07:09.858903 
325086 cni.go:84] Creating CNI manager for "" I0501 22:07:09.858906 325086 cni.go:136] 2 nodes found, recommending kindnet I0501 22:07:09.858911 325086 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0501 22:07:09.858921 325086 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.3 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]} I0501 22:07:09.858985 325086 kubeadm.go:177] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.3 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/cri-dockerd.sock name: "minikube-m02" kubeletExtraArgs: node-ip: 192.168.49.3 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.26.3 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: systemd hairpinMode: hairpin-veth runtimeRequestTimeout: 15m clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 0s I0501 22:07:09.859012 325086 kubeadm.go:968] kubelet [Unit] Wants=docker.socket 
[Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=minikube-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3 [Install] config: {KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0501 22:07:09.859038 325086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3 I0501 22:07:09.862576 325086 binaries.go:44] Found k8s binaries, skipping transfer I0501 22:07:09.862602 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system I0501 22:07:09.866049 325086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes) I0501 22:07:09.872751 325086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0501 22:07:09.879259 325086 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0501 22:07:09.880713 325086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0501 22:07:09.885252 325086 host.go:66] Checking if "minikube" exists ... I0501 22:07:09.885380 325086 start.go:301] JoinCluster: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: 
Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/shotler:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} I0501 22:07:09.885414 325086 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3 I0501 22:07:09.885417 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm token create --print-join-command --ttl=0" I0501 22:07:09.885444 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0501 22:07:09.894195 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32873 SSHKeyPath:/home/shotler/.minikube/machines/minikube/id_rsa Username:docker} I0501 22:07:09.996826 325086 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true} I0501 22:07:09.996843 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token el8ajc.86gcrv78o0g0ohcg --discovery-token-ca-cert-hash sha256:81728b40cb9e144c74fd202995a7b968e8e8d9466836d0b2a2055572f24b52ed --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=minikube-m02" I0501 22:07:17.595991 325086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token el8ajc.86gcrv78o0g0ohcg --discovery-token-ca-cert-hash sha256:81728b40cb9e144c74fd202995a7b968e8e8d9466836d0b2a2055572f24b52ed --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=minikube-m02": (7.599130544s) I0501 22:07:17.596006 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet" I0501 22:07:17.826288 325086 start.go:303] JoinCluster complete in 7.94090359s I0501 22:07:17.826305 325086 cni.go:84] Creating CNI manager for "" I0501 22:07:17.826309 325086 cni.go:136] 2 nodes found, recommending kindnet I0501 22:07:17.826352 325086 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap I0501 22:07:17.828573 325086 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.3/kubectl ... I0501 22:07:17.828583 325086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes) I0501 22:07:17.836123 325086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml I0501 22:07:17.939662 325086 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas I0501 22:07:17.939678 325086 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true} I0501 22:07:17.940391 325086 out.go:177] 🔎 Verifying Kubernetes components... 
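With two nodes present, minikube picks kindnet as the CNI, applies its manifest with the cluster's own kubectl, and rescales coredns to a single replica. Once the join above completes, the usual client-side checks apply (context name matches the profile; kindnet runs as a DaemonSet in kube-system):

    kubectl --context minikube get nodes -o wide
    kubectl --context minikube -n kube-system get daemonset kindnet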
I0501 22:07:17.940970 325086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0501 22:07:17.948169 325086 kubeadm.go:578] duration metric: took 8.469187ms to wait for : map[apiserver:true system_pods:true] ... I0501 22:07:17.948197 325086 node_conditions.go:102] verifying NodePressure condition ... I0501 22:07:17.949834 325086 node_conditions.go:122] node storage ephemeral capacity is 702277920Ki I0501 22:07:17.949841 325086 node_conditions.go:123] node cpu capacity is 32 I0501 22:07:17.949847 325086 node_conditions.go:122] node storage ephemeral capacity is 702277920Ki I0501 22:07:17.949849 325086 node_conditions.go:123] node cpu capacity is 32 I0501 22:07:17.949851 325086 node_conditions.go:105] duration metric: took 1.650361ms to run NodePressure ... I0501 22:07:17.949856 325086 start.go:228] waiting for startup goroutines ... I0501 22:07:17.949869 325086 start.go:242] writing updated cluster config ... I0501 22:07:17.951984 325086 out.go:177] I0501 22:07:17.952604 325086 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3 I0501 22:07:17.952662 325086 profile.go:148] Saving config to /home/shotler/.minikube/profiles/minikube/config.json ... I0501 22:07:17.953350 325086 out.go:177] 👍 Starting worker node minikube-m03 in cluster minikube I0501 22:07:17.954340 325086 cache.go:120] Beginning downloading kic base image for docker with docker I0501 22:07:17.954841 325086 out.go:177] 🚜 Pulling base image ... I0501 22:07:17.955395 325086 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker I0501 22:07:17.955401 325086 cache.go:57] Caching tarball of preloaded images I0501 22:07:17.955421 325086 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 in local docker daemon I0501 22:07:17.955446 325086 preload.go:174] Found /home/shotler/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0501 22:07:17.955451 325086 cache.go:60] Finished verifying existence of preloaded tar for v1.26.3 on docker I0501 22:07:17.955508 325086 profile.go:148] Saving config to /home/shotler/.minikube/profiles/minikube/config.json ... 
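The preload tarball referenced above is cached on the host, which is why this node creation skips the download step entirely; the cache can be listed with:

    ls -lh ~/.minikube/cache/preloaded-tarball/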
I0501 22:07:17.965643 325086 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 in local docker daemon, skipping pull I0501 22:07:17.965652 325086 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 exists in daemon, skipping load I0501 22:07:17.965662 325086 cache.go:193] Successfully downloaded all kic artifacts I0501 22:07:17.965680 325086 start.go:364] acquiring machines lock for minikube-m03: {Name:mk986d215580137bc809d25234b4bf31a738ca0f Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0501 22:07:17.965721 325086 start.go:368] acquired machines lock for "minikube-m03" in 28.84µs I0501 22:07:17.965730 325086 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/shotler:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: 
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m03 IP: Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true} I0501 22:07:17.965788 325086 start.go:125] createHost starting for "m03" (driver="docker") I0501 22:07:17.967266 325086 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=2200MB) ... I0501 22:07:17.967323 325086 start.go:159] libmachine.API.Create for "minikube" (driver="docker") I0501 22:07:17.967331 325086 client.go:168] LocalClient.Create starting I0501 22:07:17.967363 325086 main.go:141] libmachine: Reading certificate data from /home/shotler/.minikube/certs/ca.pem I0501 22:07:17.967377 325086 main.go:141] libmachine: Decoding PEM data... I0501 22:07:17.967384 325086 main.go:141] libmachine: Parsing certificate... I0501 22:07:17.967415 325086 main.go:141] libmachine: Reading certificate data from /home/shotler/.minikube/certs/cert.pem I0501 22:07:17.967424 325086 main.go:141] libmachine: Decoding PEM data... I0501 22:07:17.967429 325086 main.go:141] libmachine: Parsing certificate... I0501 22:07:17.967552 325086 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0501 22:07:17.975430 325086 network_create.go:76] Found existing network {name:minikube subnet:0xc001ba2060 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500} I0501 22:07:17.975441 325086 kic.go:117] calculated static IP "192.168.49.4" for the "minikube-m03" container I0501 22:07:17.975483 325086 cli_runner.go:164] Run: docker ps -a --format {{.Names}} I0501 22:07:17.983648 325086 cli_runner.go:164] Run: docker volume create minikube-m03 --label name.minikube.sigs.k8s.io=minikube-m03 --label created_by.minikube.sigs.k8s.io=true I0501 22:07:17.991850 325086 oci.go:103] Successfully created a docker volume minikube-m03 I0501 22:07:17.991920 325086 cli_runner.go:164] Run: docker run --rm --name minikube-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube-m03 --entrypoint /usr/bin/test -v minikube-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 -d /var/lib I0501 22:07:18.537234 325086 oci.go:107] Successfully prepared a docker volume minikube-m03 I0501 22:07:18.537260 325086 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker I0501 22:07:18.537279 325086 kic.go:190] Starting extracting preloaded images to volume ... 
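Node storage is a labeled Docker volume (minikube-m03 here), and the preload is unpacked into it by a short-lived sidecar container before the node container itself starts. Both artifacts are visible with plain Docker commands:

    docker volume inspect minikube-m03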
I0501 22:07:18.537340 325086 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/shotler/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 -I lz4 -xf /preloaded.tar -C /extractDir I0501 22:07:20.593191 325086 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/shotler/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 -I lz4 -xf /preloaded.tar -C /extractDir: (2.055826266s) I0501 22:07:20.593206 325086 kic.go:199] duration metric: took 2.055924 seconds to extract preloaded images to volume W0501 22:07:20.593253 325086 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0501 22:07:20.593270 325086 oci.go:240] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted. I0501 22:07:20.593301 325086 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'" I0501 22:07:20.617759 325086 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube-m03 --name minikube-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube-m03 --network minikube --ip 192.168.49.4 --volume minikube-m03:/var --security-opt apparmor=unconfined --memory=2200mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 I0501 22:07:20.908151 325086 cli_runner.go:164] Run: docker container inspect minikube-m03 --format={{.State.Running}} I0501 22:07:20.917305 325086 cli_runner.go:164] Run: docker container inspect minikube-m03 --format={{.State.Status}} I0501 22:07:20.926069 325086 cli_runner.go:164] Run: docker exec minikube-m03 stat /var/lib/dpkg/alternatives/iptables I0501 22:07:20.983116 325086 oci.go:144] the created container "minikube-m03" has a running status. I0501 22:07:20.983127 325086 kic.go:221] Creating ssh key for kic: /home/shotler/.minikube/machines/minikube-m03/id_rsa... I0501 22:07:21.111829 325086 kic_runner.go:191] docker (temp): /home/shotler/.minikube/machines/minikube-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0501 22:07:21.151278 325086 cli_runner.go:164] Run: docker container inspect minikube-m03 --format={{.State.Status}} I0501 22:07:21.160696 325086 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0501 22:07:21.160702 325086 kic_runner.go:114] Args: [docker exec --privileged minikube-m03 chown docker:docker /home/docker/.ssh/authorized_keys] I0501 22:07:21.187855 325086 cli_runner.go:164] Run: docker container inspect minikube-m03 --format={{.State.Status}} I0501 22:07:21.198000 325086 machine.go:88] provisioning docker machine ... 
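Each node gets its own SSH keypair under ~/.minikube/machines/<node>/, and the node's sshd is published on a random 127.0.0.1 port (32883 for minikube-m03 in this run). A manual login roughly mirrors what libmachine does here:

    # find the host port mapped to the node's 22/tcp
    docker port minikube-m03 22
    # then, substituting that port for the 32883 seen in this log
    ssh -i ~/.minikube/machines/minikube-m03/id_rsa -p 32883 docker@127.0.0.1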
I0501 22:07:21.198018 325086 ubuntu.go:169] provisioning hostname "minikube-m03" I0501 22:07:21.198102 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m03 I0501 22:07:21.207925 325086 main.go:141] libmachine: Using SSH client type: native I0501 22:07:21.208172 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32883 } I0501 22:07:21.208178 325086 main.go:141] libmachine: About to run SSH command: sudo hostname minikube-m03 && echo "minikube-m03" | sudo tee /etc/hostname I0501 22:07:21.208596 325086 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56868->127.0.0.1:32883: read: connection reset by peer I0501 22:07:24.310544 325086 main.go:141] libmachine: SSH cmd err, output: : minikube-m03 I0501 22:07:24.310604 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m03 I0501 22:07:24.319905 325086 main.go:141] libmachine: Using SSH client type: native I0501 22:07:24.320124 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32883 } I0501 22:07:24.320131 325086 main.go:141] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube-m03' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube-m03/g' /etc/hosts; else echo '127.0.1.1 minikube-m03' | sudo tee -a /etc/hosts; fi fi I0501 22:07:24.416177 325086 main.go:141] libmachine: SSH cmd err, output: : I0501 22:07:24.416190 325086 ubuntu.go:175] set auth options {CertDir:/home/shotler/.minikube CaCertPath:/home/shotler/.minikube/certs/ca.pem CaPrivateKeyPath:/home/shotler/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/shotler/.minikube/machines/server.pem ServerKeyPath:/home/shotler/.minikube/machines/server-key.pem ClientKeyPath:/home/shotler/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/shotler/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/shotler/.minikube} I0501 22:07:24.416201 325086 ubuntu.go:177] setting up certificates I0501 22:07:24.416205 325086 provision.go:83] configureAuth start I0501 22:07:24.416249 325086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m03 I0501 22:07:24.425310 325086 provision.go:138] copyHostCerts I0501 22:07:24.425333 325086 exec_runner.go:144] found /home/shotler/.minikube/ca.pem, removing ... I0501 22:07:24.425337 325086 exec_runner.go:207] rm: /home/shotler/.minikube/ca.pem I0501 22:07:24.425379 325086 exec_runner.go:151] cp: /home/shotler/.minikube/certs/ca.pem --> /home/shotler/.minikube/ca.pem (1078 bytes) I0501 22:07:24.425427 325086 exec_runner.go:144] found /home/shotler/.minikube/cert.pem, removing ... I0501 22:07:24.425429 325086 exec_runner.go:207] rm: /home/shotler/.minikube/cert.pem I0501 22:07:24.425448 325086 exec_runner.go:151] cp: /home/shotler/.minikube/certs/cert.pem --> /home/shotler/.minikube/cert.pem (1123 bytes) I0501 22:07:24.425480 325086 exec_runner.go:144] found /home/shotler/.minikube/key.pem, removing ... 
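Hostname provisioning above sets both /etc/hostname and the 127.0.1.1 entry in /etc/hosts inside the node; if name resolution inside a node ever looks suspect, this is the state to compare against:

    docker exec minikube-m03 hostname
    docker exec minikube-m03 grep minikube-m03 /etc/hosts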
I0501 22:07:24.425482 325086 exec_runner.go:207] rm: /home/shotler/.minikube/key.pem I0501 22:07:24.425497 325086 exec_runner.go:151] cp: /home/shotler/.minikube/certs/key.pem --> /home/shotler/.minikube/key.pem (1679 bytes) I0501 22:07:24.425543 325086 provision.go:112] generating server cert: /home/shotler/.minikube/machines/server.pem ca-key=/home/shotler/.minikube/certs/ca.pem private-key=/home/shotler/.minikube/certs/ca-key.pem org=shotler.minikube-m03 san=[192.168.49.4 127.0.0.1 localhost 127.0.0.1 minikube minikube-m03] I0501 22:07:24.579000 325086 provision.go:172] copyRemoteCerts I0501 22:07:24.579038 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0501 22:07:24.579067 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m03 I0501 22:07:24.588517 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m03/id_rsa Username:docker} I0501 22:07:24.666071 325086 ssh_runner.go:362] scp /home/shotler/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes) I0501 22:07:24.675904 325086 ssh_runner.go:362] scp /home/shotler/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0501 22:07:24.684393 325086 ssh_runner.go:362] scp /home/shotler/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes) I0501 22:07:24.693146 325086 provision.go:86] duration metric: configureAuth took 276.932471ms I0501 22:07:24.693154 325086 ubuntu.go:193] setting minikube options for container-runtime I0501 22:07:24.693287 325086 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3 I0501 22:07:24.693326 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m03 I0501 22:07:24.702235 325086 main.go:141] libmachine: Using SSH client type: native I0501 22:07:24.702506 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32883 } I0501 22:07:24.702511 325086 main.go:141] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0501 22:07:24.800561 325086 main.go:141] libmachine: SSH cmd err, output: : overlay I0501 22:07:24.800569 325086 ubuntu.go:71] root file system type: overlay I0501 22:07:24.800638 325086 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0501 22:07:24.800685 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m03 I0501 22:07:24.810123 325086 main.go:141] libmachine: Using SSH client type: native I0501 22:07:24.810340 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32883 } I0501 22:07:24.810382 325086 main.go:141] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure Environment="NO_PROXY=192.168.49.2" Environment="NO_PROXY=192.168.49.2,192.168.49.3" # This file is a systemd drop-in unit that inherits from the base dockerd configuration. 
# The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0501 22:07:24.913465 325086 main.go:141] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure Environment=NO_PROXY=192.168.49.2 Environment=NO_PROXY=192.168.49.2,192.168.49.3 # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. 
LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0501 22:07:24.913540 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m03 I0501 22:07:24.923358 325086 main.go:141] libmachine: Using SSH client type: native I0501 22:07:24.923677 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32883 } I0501 22:07:24.923689 325086 main.go:141] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0501 22:07:26.127935 325086 main.go:141] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2023-03-27 16:16:18.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2023-05-02 02:07:24.906476679 +0000 @@ -1,30 +1,34 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com -After=network-online.target docker.socket firewalld.service containerd.service time-set.target -Wants=network-online.target containerd.service +BindsTo=containerd.service +After=network-online.target firewalld.service containerd.service +Wants=network-online.target Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutStartSec=0 -RestartSec=2 -Restart=always +Restart=on-failure -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Environment=NO_PROXY=192.168.49.2 +Environment=NO_PROXY=192.168.49.2,192.168.49.3 -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. 
+ +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. +ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +36,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0501 22:07:26.127948 325086 machine.go:91] provisioned docker machine in 4.929941741s I0501 22:07:26.127954 325086 client.go:171] LocalClient.Create took 8.160620261s I0501 22:07:26.127967 325086 start.go:167] duration metric: libmachine.API.Create for "minikube" took 8.160641331s I0501 22:07:26.127972 325086 start.go:300] post-start starting for "minikube-m03" (driver="docker") I0501 22:07:26.127975 325086 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0501 22:07:26.128022 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0501 22:07:26.128058 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m03 I0501 22:07:26.137827 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m03/id_rsa Username:docker} I0501 22:07:26.210179 325086 ssh_runner.go:195] Run: cat /etc/os-release I0501 22:07:26.211407 325086 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0501 22:07:26.211415 325086 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0501 22:07:26.211421 325086 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0501 22:07:26.211424 325086 info.go:137] Remote host: Ubuntu 20.04.5 LTS I0501 22:07:26.211429 325086 filesync.go:126] Scanning /home/shotler/.minikube/addons for local assets ... I0501 22:07:26.211460 325086 filesync.go:126] Scanning /home/shotler/.minikube/files for local assets ... 
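The server certificate copied to /etc/docker/server.pem during this provisioning pass is regenerated per node with the node's static IP in its SANs (192.168.49.4 for minikube-m03, per the san=[...] list earlier). Assuming an openssl new enough to support -ext, the SANs on the host-side copy can be checked with:

    openssl x509 -in ~/.minikube/machines/server.pem -noout -subject -ext subjectAltName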
I0501 22:07:26.211470 325086 start.go:303] post-start completed in 83.494758ms I0501 22:07:26.211647 325086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m03 I0501 22:07:26.220503 325086 profile.go:148] Saving config to /home/shotler/.minikube/profiles/minikube/config.json ... I0501 22:07:26.220632 325086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0501 22:07:26.220659 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m03 I0501 22:07:26.228620 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m03/id_rsa Username:docker} I0501 22:07:26.300422 325086 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'" I0501 22:07:26.302503 325086 start.go:128] duration metric: createHost completed in 8.336711331s I0501 22:07:26.302508 325086 start.go:83] releasing machines lock for "minikube-m03", held for 8.336783943s I0501 22:07:26.302541 325086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m03 I0501 22:07:26.311615 325086 out.go:177] 🌐 Found network options: I0501 22:07:26.312236 325086 out.go:177] ▪ NO_PROXY=192.168.49.2,192.168.49.3 W0501 22:07:26.312736 325086 proxy.go:119] fail to check proxy env: Error ip not in block W0501 22:07:26.312743 325086 proxy.go:119] fail to check proxy env: Error ip not in block W0501 22:07:26.312751 325086 proxy.go:119] fail to check proxy env: Error ip not in block W0501 22:07:26.312755 325086 proxy.go:119] fail to check proxy env: Error ip not in block I0501 22:07:26.312791 325086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*" I0501 22:07:26.312812 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m03 I0501 22:07:26.312841 325086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/ I0501 22:07:26.312874 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m03 I0501 22:07:26.321601 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m03/id_rsa Username:docker} I0501 22:07:26.321860 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m03/id_rsa Username:docker} I0501 22:07:26.694156 325086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ; I0501 22:07:26.704926 325086 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found I0501 22:07:26.704956 325086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ; I0501 22:07:26.712512 325086 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s) I0501 22:07:26.712518 325086 start.go:481] detecting cgroup driver to use... 
I0501 22:07:26.712534 325086 detect.go:199] detected "systemd" cgroup driver on host os I0501 22:07:26.712586 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock " | sudo tee /etc/crictl.yaml" I0501 22:07:26.719410 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml" I0501 22:07:26.723501 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml" I0501 22:07:26.727231 325086 containerd.go:145] configuring containerd to use "systemd" as cgroup driver... I0501 22:07:26.727257 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml" I0501 22:07:26.731447 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml" I0501 22:07:26.735595 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml" I0501 22:07:26.739647 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml" I0501 22:07:26.743798 325086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk" I0501 22:07:26.747398 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml" I0501 22:07:26.751459 325086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables I0501 22:07:26.754645 325086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" I0501 22:07:26.757894 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0501 22:07:26.850214 325086 ssh_runner.go:195] Run: sudo systemctl restart containerd I0501 22:07:26.884838 325086 start.go:481] detecting cgroup driver to use... I0501 22:07:26.884863 325086 detect.go:199] detected "systemd" cgroup driver on host os I0501 22:07:26.884908 325086 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0501 22:07:26.890869 325086 cruntime.go:276] skipping containerd shutdown because we are bound to it I0501 22:07:26.890905 325086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0501 22:07:26.896014 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee /etc/crictl.yaml" I0501 22:07:26.904270 325086 ssh_runner.go:195] Run: which cri-dockerd I0501 22:07:26.906025 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d I0501 22:07:26.910201 325086 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes) I0501 22:07:26.917415 325086 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0501 22:07:26.961209 325086 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0501 22:07:27.012775 325086 docker.go:538] configuring docker to use "systemd" as cgroup driver... 
I0501 22:07:27.012787 325086 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes) I0501 22:07:27.020263 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0501 22:07:27.068730 325086 ssh_runner.go:195] Run: sudo systemctl restart docker I0501 22:07:28.491080 325086 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.422334609s) I0501 22:07:28.491121 325086 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0501 22:07:28.567191 325086 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket I0501 22:07:28.613827 325086 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0501 22:07:28.660650 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0501 22:07:28.708558 325086 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket I0501 22:07:28.714663 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0501 22:07:28.757016 325086 ssh_runner.go:195] Run: sudo systemctl restart cri-docker I0501 22:07:28.788191 325086 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock I0501 22:07:28.788243 325086 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock I0501 22:07:28.790038 325086 start.go:549] Will wait 60s for crictl version I0501 22:07:28.790068 325086 ssh_runner.go:195] Run: which crictl I0501 22:07:28.791500 325086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version I0501 22:07:28.806149 325086 start.go:565] Version: 0.1.0 RuntimeName: docker RuntimeVersion: 23.0.2 RuntimeApiVersion: v1alpha2 I0501 22:07:28.806182 325086 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0501 22:07:28.816810 325086 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0501 22:07:28.828481 325086 out.go:204] ๐Ÿณ Preparing Kubernetes v1.26.3 on Docker 23.0.2 ... 
I0501 22:07:28.829002 325086 out.go:177] โ–ช env NO_PROXY=192.168.49.2 I0501 22:07:28.829572 325086 out.go:177] โ–ช env NO_PROXY=192.168.49.2,192.168.49.3 I0501 22:07:28.830088 325086 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0501 22:07:28.838407 325086 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I0501 22:07:28.840083 325086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0501 22:07:28.844790 325086 certs.go:56] Setting up /home/shotler/.minikube/profiles/minikube for IP: 192.168.49.4 I0501 22:07:28.844799 325086 certs.go:186] acquiring lock for shared ca certs: {Name:mk43a023f6ece43e69e883f266f2820beecb179f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0501 22:07:28.844865 325086 certs.go:195] skipping minikubeCA CA generation: /home/shotler/.minikube/ca.key I0501 22:07:28.844884 325086 certs.go:195] skipping proxyClientCA CA generation: /home/shotler/.minikube/proxy-client-ca.key I0501 22:07:28.844927 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/ca-key.pem (1679 bytes) I0501 22:07:28.844941 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/ca.pem (1078 bytes) I0501 22:07:28.844953 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/cert.pem (1123 bytes) I0501 22:07:28.844964 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/key.pem (1679 bytes) I0501 22:07:28.845167 325086 ssh_runner.go:362] scp /home/shotler/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0501 22:07:28.854095 325086 ssh_runner.go:362] scp /home/shotler/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0501 22:07:28.862510 325086 ssh_runner.go:362] scp /home/shotler/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0501 22:07:28.871229 325086 ssh_runner.go:362] scp /home/shotler/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0501 22:07:28.879836 325086 ssh_runner.go:362] scp /home/shotler/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0501 22:07:28.888629 325086 ssh_runner.go:195] Run: openssl version I0501 22:07:28.890967 325086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0501 22:07:28.894885 325086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0501 22:07:28.896403 325086 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jun 6 2022 /usr/share/ca-certificates/minikubeCA.pem I0501 22:07:28.896432 325086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0501 22:07:28.898721 325086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0501 22:07:28.902444 325086 
ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0501 22:07:28.914213 325086 cni.go:84] Creating CNI manager for "" I0501 22:07:28.914217 325086 cni.go:136] 3 nodes found, recommending kindnet I0501 22:07:28.914222 325086 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0501 22:07:28.914231 325086 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.4 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.4 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]} I0501 22:07:28.914307 325086 kubeadm.go:177] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.4 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/cri-dockerd.sock name: "minikube-m03" kubeletExtraArgs: node-ip: 192.168.49.4 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.26.3 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: systemd hairpinMode: hairpin-veth runtimeRequestTimeout: 15m clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0 # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established" tcpEstablishedTimeout: 0s # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close" tcpCloseWaitTimeout: 
0s I0501 22:07:28.914337 325086 kubeadm.go:968] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=minikube-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4 [Install] config: {KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0501 22:07:28.914364 325086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3 I0501 22:07:28.917861 325086 binaries.go:44] Found k8s binaries, skipping transfer I0501 22:07:28.917890 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system I0501 22:07:28.921116 325086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes) I0501 22:07:28.927486 325086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0501 22:07:28.934148 325086 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0501 22:07:28.935504 325086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0501 22:07:28.940230 325086 host.go:66] Checking if "minikube" exists ... 
I0501 22:07:28.940353 325086 start.go:301] JoinCluster: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.49.4 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/shotler:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} I0501 22:07:28.940407 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm token create --print-join-command --ttl=0" I0501 22:07:28.940435 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0501 22:07:28.940446 325086 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3 I0501 22:07:28.949489 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32873 SSHKeyPath:/home/shotler/.minikube/machines/minikube/id_rsa Username:docker} I0501 22:07:29.052932 325086 
start.go:322] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true} I0501 22:07:29.052949 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zz6lvy.bdg5tildm9mk5mo9 --discovery-token-ca-cert-hash sha256:81728b40cb9e144c74fd202995a7b968e8e8d9466836d0b2a2055572f24b52ed --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=minikube-m03" I0501 22:07:30.768632 325086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zz6lvy.bdg5tildm9mk5mo9 --discovery-token-ca-cert-hash sha256:81728b40cb9e144c74fd202995a7b968e8e8d9466836d0b2a2055572f24b52ed --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=minikube-m03": (1.715671495s) I0501 22:07:30.768642 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet" I0501 22:07:30.958749 325086 start.go:303] JoinCluster complete in 2.018391852s I0501 22:07:30.958759 325086 cni.go:84] Creating CNI manager for "" I0501 22:07:30.958761 325086 cni.go:136] 3 nodes found, recommending kindnet I0501 22:07:30.958796 325086 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap I0501 22:07:30.960491 325086 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.3/kubectl ... I0501 22:07:30.960496 325086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes) I0501 22:07:30.967454 325086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml I0501 22:07:31.056210 325086 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas I0501 22:07:31.056224 325086 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true} I0501 22:07:31.056953 325086 out.go:177] ๐Ÿ”Ž Verifying Kubernetes components... I0501 22:07:31.057478 325086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0501 22:07:31.063315 325086 kubeadm.go:578] duration metric: took 7.079819ms to wait for : map[apiserver:true system_pods:true] ... I0501 22:07:31.063323 325086 node_conditions.go:102] verifying NodePressure condition ... I0501 22:07:31.064526 325086 node_conditions.go:122] node storage ephemeral capacity is 702277920Ki I0501 22:07:31.064533 325086 node_conditions.go:123] node cpu capacity is 32 I0501 22:07:31.064537 325086 node_conditions.go:122] node storage ephemeral capacity is 702277920Ki I0501 22:07:31.064539 325086 node_conditions.go:123] node cpu capacity is 32 I0501 22:07:31.064541 325086 node_conditions.go:122] node storage ephemeral capacity is 702277920Ki I0501 22:07:31.064542 325086 node_conditions.go:123] node cpu capacity is 32 I0501 22:07:31.064544 325086 node_conditions.go:105] duration metric: took 1.219235ms to run NodePressure ... I0501 22:07:31.064548 325086 start.go:228] waiting for startup goroutines ... I0501 22:07:31.064558 325086 start.go:242] writing updated cluster config ... 
I0501 22:07:31.065241 325086 out.go:177] I0501 22:07:31.065826 325086 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3 I0501 22:07:31.065870 325086 profile.go:148] Saving config to /home/shotler/.minikube/profiles/minikube/config.json ... I0501 22:07:31.066576 325086 out.go:177] ๐Ÿ‘ Starting worker node minikube-m04 in cluster minikube I0501 22:07:31.067485 325086 cache.go:120] Beginning downloading kic base image for docker with docker I0501 22:07:31.068027 325086 out.go:177] ๐Ÿšœ Pulling base image ... I0501 22:07:31.068514 325086 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker I0501 22:07:31.068519 325086 cache.go:57] Caching tarball of preloaded images I0501 22:07:31.068541 325086 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 in local docker daemon I0501 22:07:31.068561 325086 preload.go:174] Found /home/shotler/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0501 22:07:31.068566 325086 cache.go:60] Finished verifying existence of preloaded tar for v1.26.3 on docker I0501 22:07:31.068624 325086 profile.go:148] Saving config to /home/shotler/.minikube/profiles/minikube/config.json ... I0501 22:07:31.077180 325086 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 in local docker daemon, skipping pull I0501 22:07:31.077186 325086 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 exists in daemon, skipping load I0501 22:07:31.077194 325086 cache.go:193] Successfully downloaded all kic artifacts I0501 22:07:31.077206 325086 start.go:364] acquiring machines lock for minikube-m04: {Name:mk7a1c38bd71ee99f83a0e3ffe59c3260fc7a122 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0501 22:07:31.077233 325086 start.go:368] acquired machines lock for "minikube-m04" in 18.97ยตs I0501 22:07:31.077240 325086 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: 
IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.49.4 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/shotler:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m04 IP: Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true} I0501 22:07:31.077287 325086 start.go:125] createHost starting for "m04" (driver="docker") I0501 22:07:31.077957 325086 out.go:204] ๐Ÿ”ฅ Creating docker container (CPUs=2, Memory=2200MB) ... I0501 22:07:31.078001 325086 start.go:159] libmachine.API.Create for "minikube" (driver="docker") I0501 22:07:31.078008 325086 client.go:168] LocalClient.Create starting I0501 22:07:31.078045 325086 main.go:141] libmachine: Reading certificate data from /home/shotler/.minikube/certs/ca.pem I0501 22:07:31.078059 325086 main.go:141] libmachine: Decoding PEM data... I0501 22:07:31.078066 325086 main.go:141] libmachine: Parsing certificate... I0501 22:07:31.078094 325086 main.go:141] libmachine: Reading certificate data from /home/shotler/.minikube/certs/cert.pem I0501 22:07:31.078102 325086 main.go:141] libmachine: Decoding PEM data... I0501 22:07:31.078106 325086 main.go:141] libmachine: Parsing certificate... 
I0501 22:07:31.078216 325086 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0501 22:07:31.087688 325086 network_create.go:76] Found existing network {name:minikube subnet:0xc0019d9c20 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500} I0501 22:07:31.087704 325086 kic.go:117] calculated static IP "192.168.49.5" for the "minikube-m04" container I0501 22:07:31.087759 325086 cli_runner.go:164] Run: docker ps -a --format {{.Names}} I0501 22:07:31.096229 325086 cli_runner.go:164] Run: docker volume create minikube-m04 --label name.minikube.sigs.k8s.io=minikube-m04 --label created_by.minikube.sigs.k8s.io=true I0501 22:07:31.103568 325086 oci.go:103] Successfully created a docker volume minikube-m04 I0501 22:07:31.103598 325086 cli_runner.go:164] Run: docker run --rm --name minikube-m04-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube-m04 --entrypoint /usr/bin/test -v minikube-m04:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 -d /var/lib I0501 22:07:31.645077 325086 oci.go:107] Successfully prepared a docker volume minikube-m04 I0501 22:07:31.645100 325086 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker I0501 22:07:31.645112 325086 kic.go:190] Starting extracting preloaded images to volume ... I0501 22:07:31.645164 325086 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/shotler/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube-m04:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 -I lz4 -xf /preloaded.tar -C /extractDir I0501 22:07:33.555716 325086 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/shotler/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube-m04:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 -I lz4 -xf /preloaded.tar -C /extractDir: (1.910528273s) I0501 22:07:33.555732 325086 kic.go:199] duration metric: took 1.910614 seconds to extract preloaded images to volume W0501 22:07:33.555783 325086 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0501 22:07:33.555797 325086 oci.go:240] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted. 
I0501 22:07:33.555833 325086 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'" I0501 22:07:33.579771 325086 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube-m04 --name minikube-m04 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube-m04 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube-m04 --network minikube --ip 192.168.49.5 --volume minikube-m04:/var --security-opt apparmor=unconfined --memory=2200mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 I0501 22:07:33.859179 325086 cli_runner.go:164] Run: docker container inspect minikube-m04 --format={{.State.Running}} I0501 22:07:33.867766 325086 cli_runner.go:164] Run: docker container inspect minikube-m04 --format={{.State.Status}} I0501 22:07:33.875472 325086 cli_runner.go:164] Run: docker exec minikube-m04 stat /var/lib/dpkg/alternatives/iptables I0501 22:07:33.948205 325086 oci.go:144] the created container "minikube-m04" has a running status. I0501 22:07:33.948222 325086 kic.go:221] Creating ssh key for kic: /home/shotler/.minikube/machines/minikube-m04/id_rsa... I0501 22:07:34.011684 325086 kic_runner.go:191] docker (temp): /home/shotler/.minikube/machines/minikube-m04/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0501 22:07:34.046012 325086 cli_runner.go:164] Run: docker container inspect minikube-m04 --format={{.State.Status}} I0501 22:07:34.055925 325086 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0501 22:07:34.055932 325086 kic_runner.go:114] Args: [docker exec --privileged minikube-m04 chown docker:docker /home/docker/.ssh/authorized_keys] I0501 22:07:34.111628 325086 cli_runner.go:164] Run: docker container inspect minikube-m04 --format={{.State.Status}} I0501 22:07:34.122188 325086 machine.go:88] provisioning docker machine ... I0501 22:07:34.122202 325086 ubuntu.go:169] provisioning hostname "minikube-m04" I0501 22:07:34.122233 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m04 I0501 22:07:34.131272 325086 main.go:141] libmachine: Using SSH client type: native I0501 22:07:34.131577 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32888 } I0501 22:07:34.131585 325086 main.go:141] libmachine: About to run SSH command: sudo hostname minikube-m04 && echo "minikube-m04" | sudo tee /etc/hostname I0501 22:07:34.131968 325086 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58108->127.0.0.1:32888: read: connection reset by peer I0501 22:07:37.245590 325086 main.go:141] libmachine: SSH cmd err, output: : minikube-m04 I0501 22:07:37.245640 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m04 I0501 22:07:37.255127 325086 main.go:141] libmachine: Using SSH client type: native I0501 22:07:37.255345 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32888 } I0501 22:07:37.255352 325086 main.go:141] libmachine: About to run SSH command: if ! 
grep -xq '.*\sminikube-m04' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube-m04/g' /etc/hosts; else echo '127.0.1.1 minikube-m04' | sudo tee -a /etc/hosts; fi fi I0501 22:07:37.347861 325086 main.go:141] libmachine: SSH cmd err, output: : I0501 22:07:37.347873 325086 ubuntu.go:175] set auth options {CertDir:/home/shotler/.minikube CaCertPath:/home/shotler/.minikube/certs/ca.pem CaPrivateKeyPath:/home/shotler/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/shotler/.minikube/machines/server.pem ServerKeyPath:/home/shotler/.minikube/machines/server-key.pem ClientKeyPath:/home/shotler/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/shotler/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/shotler/.minikube} I0501 22:07:37.347886 325086 ubuntu.go:177] setting up certificates I0501 22:07:37.347894 325086 provision.go:83] configureAuth start I0501 22:07:37.347950 325086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m04 I0501 22:07:37.356774 325086 provision.go:138] copyHostCerts I0501 22:07:37.356794 325086 exec_runner.go:144] found /home/shotler/.minikube/cert.pem, removing ... I0501 22:07:37.356797 325086 exec_runner.go:207] rm: /home/shotler/.minikube/cert.pem I0501 22:07:37.356839 325086 exec_runner.go:151] cp: /home/shotler/.minikube/certs/cert.pem --> /home/shotler/.minikube/cert.pem (1123 bytes) I0501 22:07:37.356883 325086 exec_runner.go:144] found /home/shotler/.minikube/key.pem, removing ... I0501 22:07:37.356885 325086 exec_runner.go:207] rm: /home/shotler/.minikube/key.pem I0501 22:07:37.356902 325086 exec_runner.go:151] cp: /home/shotler/.minikube/certs/key.pem --> /home/shotler/.minikube/key.pem (1679 bytes) I0501 22:07:37.356931 325086 exec_runner.go:144] found /home/shotler/.minikube/ca.pem, removing ... 
I0501 22:07:37.356933 325086 exec_runner.go:207] rm: /home/shotler/.minikube/ca.pem I0501 22:07:37.356949 325086 exec_runner.go:151] cp: /home/shotler/.minikube/certs/ca.pem --> /home/shotler/.minikube/ca.pem (1078 bytes) I0501 22:07:37.356975 325086 provision.go:112] generating server cert: /home/shotler/.minikube/machines/server.pem ca-key=/home/shotler/.minikube/certs/ca.pem private-key=/home/shotler/.minikube/certs/ca-key.pem org=shotler.minikube-m04 san=[192.168.49.5 127.0.0.1 localhost 127.0.0.1 minikube minikube-m04] I0501 22:07:37.612722 325086 provision.go:172] copyRemoteCerts I0501 22:07:37.612758 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0501 22:07:37.612784 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m04 I0501 22:07:37.622235 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m04/id_rsa Username:docker} I0501 22:07:37.693709 325086 ssh_runner.go:362] scp /home/shotler/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes) I0501 22:07:37.703390 325086 ssh_runner.go:362] scp /home/shotler/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes) I0501 22:07:37.712449 325086 ssh_runner.go:362] scp /home/shotler/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0501 22:07:37.721945 325086 provision.go:86] duration metric: configureAuth took 374.043658ms I0501 22:07:37.721957 325086 ubuntu.go:193] setting minikube options for container-runtime I0501 22:07:37.722091 325086 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3 I0501 22:07:37.722130 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m04 I0501 22:07:37.731046 325086 main.go:141] libmachine: Using SSH client type: native I0501 22:07:37.731310 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32888 } I0501 22:07:37.731315 325086 main.go:141] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0501 22:07:37.828101 325086 main.go:141] libmachine: SSH cmd err, output: : overlay I0501 22:07:37.828108 325086 ubuntu.go:71] root file system type: overlay I0501 22:07:37.828174 325086 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... 
I0501 22:07:37.828217 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m04 I0501 22:07:37.836376 325086 main.go:141] libmachine: Using SSH client type: native I0501 22:07:37.836632 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32888 } I0501 22:07:37.836675 325086 main.go:141] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure Environment="NO_PROXY=192.168.49.2" Environment="NO_PROXY=192.168.49.2,192.168.49.3" Environment="NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4" # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0501 22:07:37.932747 325086 main.go:141] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure Environment=NO_PROXY=192.168.49.2 Environment=NO_PROXY=192.168.49.2,192.168.49.3 Environment=NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4 # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. 
Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0501 22:07:37.932797 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m04 I0501 22:07:37.942213 325086 main.go:141] libmachine: Using SSH client type: native I0501 22:07:37.942438 325086 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0x80e3e0] 0x811480 [] 0s} 127.0.0.1 32888 } I0501 22:07:37.942447 325086 main.go:141] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0501 22:07:39.008819 325086 main.go:141] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2023-03-27 16:16:18.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2023-05-02 02:07:37.926636327 +0000 @@ -1,30 +1,35 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com -After=network-online.target docker.socket firewalld.service containerd.service time-set.target -Wants=network-online.target containerd.service +BindsTo=containerd.service +After=network-online.target firewalld.service containerd.service +Wants=network-online.target Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutStartSec=0 -RestartSec=2 -Restart=always +Restart=on-failure -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. 
-StartLimitBurst=3 - -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s +Environment=NO_PROXY=192.168.49.2 +Environment=NO_PROXY=192.168.49.2,192.168.49.3 +Environment=NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4 + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. +ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +37,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. 
Executing: /lib/systemd/systemd-sysv-install enable docker I0501 22:07:39.008837 325086 machine.go:91] provisioned docker machine in 4.88664022s I0501 22:07:39.008842 325086 client.go:171] LocalClient.Create took 7.93083236s I0501 22:07:39.008852 325086 start.go:167] duration metric: libmachine.API.Create for "minikube" took 7.93084987s I0501 22:07:39.008856 325086 start.go:300] post-start starting for "minikube-m04" (driver="docker") I0501 22:07:39.008859 325086 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0501 22:07:39.008915 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0501 22:07:39.008949 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m04 I0501 22:07:39.017641 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m04/id_rsa Username:docker} I0501 22:07:39.090122 325086 ssh_runner.go:195] Run: cat /etc/os-release I0501 22:07:39.091558 325086 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0501 22:07:39.091566 325086 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0501 22:07:39.091572 325086 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0501 22:07:39.091575 325086 info.go:137] Remote host: Ubuntu 20.04.5 LTS I0501 22:07:39.091580 325086 filesync.go:126] Scanning /home/shotler/.minikube/addons for local assets ... I0501 22:07:39.091605 325086 filesync.go:126] Scanning /home/shotler/.minikube/files for local assets ... I0501 22:07:39.091615 325086 start.go:303] post-start completed in 82.756501ms I0501 22:07:39.091798 325086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m04 I0501 22:07:39.100615 325086 profile.go:148] Saving config to /home/shotler/.minikube/profiles/minikube/config.json ... 
I0501 22:07:39.100783 325086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0501 22:07:39.100811 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m04 I0501 22:07:39.108816 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m04/id_rsa Username:docker} I0501 22:07:39.180255 325086 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'" I0501 22:07:39.182067 325086 start.go:128] duration metric: createHost completed in 8.104773337s I0501 22:07:39.182073 325086 start.go:83] releasing machines lock for "minikube-m04", held for 8.104837127s I0501 22:07:39.182119 325086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube-m04 I0501 22:07:39.191375 325086 out.go:177] ๐ŸŒ Found network options: I0501 22:07:39.191986 325086 out.go:177] โ–ช NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4 W0501 22:07:39.192483 325086 proxy.go:119] fail to check proxy env: Error ip not in block W0501 22:07:39.192490 325086 proxy.go:119] fail to check proxy env: Error ip not in block W0501 22:07:39.192494 325086 proxy.go:119] fail to check proxy env: Error ip not in block W0501 22:07:39.192503 325086 proxy.go:119] fail to check proxy env: Error ip not in block W0501 22:07:39.192507 325086 proxy.go:119] fail to check proxy env: Error ip not in block W0501 22:07:39.192511 325086 proxy.go:119] fail to check proxy env: Error ip not in block I0501 22:07:39.192555 325086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*" I0501 22:07:39.192580 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m04 I0501 22:07:39.192595 325086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/ I0501 22:07:39.192629 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube-m04 I0501 22:07:39.201777 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m04/id_rsa Username:docker} I0501 22:07:39.201889 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/shotler/.minikube/machines/minikube-m04/id_rsa Username:docker} I0501 22:07:39.572014 325086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ; I0501 22:07:39.584718 325086 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found I0501 22:07:39.584763 325086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ; I0501 22:07:39.592631 325086 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s) I0501 22:07:39.592643 325086 start.go:481] detecting cgroup driver to use... 
I0501 22:07:39.592660 325086 detect.go:199] detected "systemd" cgroup driver on host os I0501 22:07:39.592713 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock " | sudo tee /etc/crictl.yaml" I0501 22:07:39.599612 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml" I0501 22:07:39.603911 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml" I0501 22:07:39.608178 325086 containerd.go:145] configuring containerd to use "systemd" as cgroup driver... I0501 22:07:39.608207 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml" I0501 22:07:39.612471 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml" I0501 22:07:39.616553 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml" I0501 22:07:39.620772 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml" I0501 22:07:39.624975 325086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk" I0501 22:07:39.631972 325086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml" I0501 22:07:39.638285 325086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables I0501 22:07:39.642199 325086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" I0501 22:07:39.646154 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0501 22:07:39.742281 325086 ssh_runner.go:195] Run: sudo systemctl restart containerd I0501 22:07:39.792018 325086 start.go:481] detecting cgroup driver to use... I0501 22:07:39.792043 325086 detect.go:199] detected "systemd" cgroup driver on host os I0501 22:07:39.792087 325086 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0501 22:07:39.797773 325086 cruntime.go:276] skipping containerd shutdown because we are bound to it I0501 22:07:39.797818 325086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0501 22:07:39.802692 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee /etc/crictl.yaml" I0501 22:07:39.809885 325086 ssh_runner.go:195] Run: which cri-dockerd I0501 22:07:39.811309 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d I0501 22:07:39.814697 325086 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes) I0501 22:07:39.825580 325086 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0501 22:07:39.918325 325086 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0501 22:07:39.972915 325086 docker.go:538] configuring docker to use "systemd" as cgroup driver... 
I0501 22:07:39.972932 325086 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (143 bytes) I0501 22:07:39.980984 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0501 22:07:40.035597 325086 ssh_runner.go:195] Run: sudo systemctl restart docker I0501 22:07:41.554273 325086 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.51865954s) I0501 22:07:41.554321 325086 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0501 22:07:41.666639 325086 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket I0501 22:07:41.722354 325086 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0501 22:07:41.761058 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0501 22:07:41.812843 325086 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket I0501 22:07:41.818805 325086 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0501 22:07:41.894156 325086 ssh_runner.go:195] Run: sudo systemctl restart cri-docker I0501 22:07:41.926127 325086 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock I0501 22:07:41.926179 325086 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock I0501 22:07:41.927743 325086 start.go:549] Will wait 60s for crictl version I0501 22:07:41.927770 325086 ssh_runner.go:195] Run: which crictl I0501 22:07:41.929378 325086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version I0501 22:07:41.943599 325086 start.go:565] Version: 0.1.0 RuntimeName: docker RuntimeVersion: 23.0.2 RuntimeApiVersion: v1alpha2 I0501 22:07:41.943636 325086 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0501 22:07:41.954776 325086 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0501 22:07:41.966605 325086 out.go:204] ๐Ÿณ Preparing Kubernetes v1.26.3 on Docker 23.0.2 ... 
I0501 22:07:41.969823 325086 out.go:177] ▪ env NO_PROXY=192.168.49.2
I0501 22:07:41.971830 325086 out.go:177] ▪ env NO_PROXY=192.168.49.2,192.168.49.3
I0501 22:07:41.974655 325086 out.go:177] ▪ env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
I0501 22:07:41.976902 325086 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0501 22:07:41.985072 325086 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0501 22:07:41.986744 325086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0501 22:07:41.991724 325086 certs.go:56] Setting up /home/shotler/.minikube/profiles/minikube for IP: 192.168.49.5
I0501 22:07:41.991732 325086 certs.go:186] acquiring lock for shared ca certs: {Name:mk43a023f6ece43e69e883f266f2820beecb179f Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0501 22:07:41.991794 325086 certs.go:195] skipping minikubeCA CA generation: /home/shotler/.minikube/ca.key
I0501 22:07:41.991811 325086 certs.go:195] skipping proxyClientCA CA generation: /home/shotler/.minikube/proxy-client-ca.key
I0501 22:07:41.991846 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/ca-key.pem (1679 bytes)
I0501 22:07:41.991858 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/ca.pem (1078 bytes)
I0501 22:07:41.991868 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/cert.pem (1123 bytes)
I0501 22:07:41.991877 325086 certs.go:401] found cert: /home/shotler/.minikube/certs/home/shotler/.minikube/certs/key.pem (1679 bytes)
I0501 22:07:41.992052 325086 ssh_runner.go:362] scp /home/shotler/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0501 22:07:42.000507 325086 ssh_runner.go:362] scp /home/shotler/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0501 22:07:42.009495 325086 ssh_runner.go:362] scp /home/shotler/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0501 22:07:42.018538 325086 ssh_runner.go:362] scp /home/shotler/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0501 22:07:42.027187 325086 ssh_runner.go:362] scp /home/shotler/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0501 22:07:42.035550 325086 ssh_runner.go:195] Run: openssl version
I0501 22:07:42.037990 325086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0501 22:07:42.041582 325086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0501 22:07:42.042931 325086 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jun 6 2022 /usr/share/ca-certificates/minikubeCA.pem
I0501 22:07:42.042954 325086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0501 22:07:42.045053 325086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0501 22:07:42.048701 325086 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0501 22:07:42.060731 325086 cni.go:84] Creating CNI manager for ""
I0501 22:07:42.060736 325086 cni.go:136] 4 nodes found, recommending kindnet
I0501 22:07:42.060741 325086 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0501 22:07:42.060751 325086 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.5 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube-m04 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.5 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0501 22:07:42.060824 325086 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.5
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "minikube-m04"
  kubeletExtraArgs:
    node-ip: 192.168.49.5
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0501 22:07:42.060873 325086 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=minikube-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5

[Install]
 config: {KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0501 22:07:42.060910 325086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
I0501 22:07:42.064284 325086 binaries.go:44] Found k8s binaries, skipping transfer
I0501 22:07:42.064312 325086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0501 22:07:42.067284 325086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
I0501 22:07:42.073682 325086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0501 22:07:42.080253 325086 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0501 22:07:42.081657 325086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0501 22:07:42.086450 325086 host.go:66] Checking if "minikube" exists ...
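Note: the two hosts-file rewrites above (for host.minikube.internal and control-plane.minikube.internal) follow the same idempotent pattern: strip any existing tab-separated entry for the name, append the desired mapping to a temp file, then copy it back into place with sudo. A minimal standalone sketch of that pattern, reusing the IP and name from this log; the IP/NAME variables are illustrative, not part of minikube:

  # Pin NAME to IP in /etc/hosts without duplicating entries (sketch).
  IP=192.168.49.2
  NAME=control-plane.minikube.internal
  { grep -v $'\t'"${NAME}\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
  sudo cp /tmp/hosts.$$ /etc/hosts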
I0501 22:07:42.086588 325086 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
I0501 22:07:42.086576 325086 start.go:301] JoinCluster: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.38-1680381266-16207@sha256:426ee3dccdda8a0d40cd86fbdbe440858176d8d4d9c37319b1c702ef226aea93 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.49.4 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/shotler:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0501 22:07:42.086627 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm token create --print-join-command --ttl=0"
I0501 22:07:42.086656 325086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0501 22:07:42.095086 325086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32873 SSHKeyPath:/home/shotler/.minikube/machines/minikube/id_rsa Username:docker}
I0501 22:07:42.239888 325086 start.go:322] trying to join worker node "m04" to cluster: &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true}
I0501 22:07:42.239905 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f0yhbq.14r04hmcn6emvjvl --discovery-token-ca-cert-hash sha256:81728b40cb9e144c74fd202995a7b968e8e8d9466836d0b2a2055572f24b52ed --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=minikube-m04"
I0501 22:07:43.932159 325086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token f0yhbq.14r04hmcn6emvjvl --discovery-token-ca-cert-hash sha256:81728b40cb9e144c74fd202995a7b968e8e8d9466836d0b2a2055572f24b52ed --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=minikube-m04": (1.692242555s)
I0501 22:07:43.932171 325086 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0501 22:07:44.078651 325086 start.go:303] JoinCluster complete in 1.992068667s
I0501 22:07:44.078662 325086 cni.go:84] Creating CNI manager for ""
I0501 22:07:44.078665 325086 cni.go:136] 4 nodes found, recommending kindnet
I0501 22:07:44.078706 325086 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0501 22:07:44.080350 325086 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.3/kubectl ...
I0501 22:07:44.080355 325086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0501 22:07:44.087291 325086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0501 22:07:44.186327 325086 kapi.go:248] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0501 22:07:44.186345 325086 start.go:223] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:false Worker:true}
I0501 22:07:44.187784 325086 out.go:177] 🔎 Verifying Kubernetes components...
I0501 22:07:44.188749 325086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0501 22:07:44.194936 325086 kubeadm.go:578] duration metric: took 8.575178ms to wait for : map[apiserver:true system_pods:true] ...
I0501 22:07:44.194944 325086 node_conditions.go:102] verifying NodePressure condition ...
I0501 22:07:44.196627 325086 node_conditions.go:122] node storage ephemeral capacity is 702277920Ki
I0501 22:07:44.196635 325086 node_conditions.go:123] node cpu capacity is 32
I0501 22:07:44.196641 325086 node_conditions.go:122] node storage ephemeral capacity is 702277920Ki
I0501 22:07:44.196644 325086 node_conditions.go:123] node cpu capacity is 32
I0501 22:07:44.196647 325086 node_conditions.go:122] node storage ephemeral capacity is 702277920Ki
I0501 22:07:44.196649 325086 node_conditions.go:123] node cpu capacity is 32
I0501 22:07:44.196651 325086 node_conditions.go:122] node storage ephemeral capacity is 702277920Ki
I0501 22:07:44.196654 325086 node_conditions.go:123] node cpu capacity is 32
I0501 22:07:44.196656 325086 node_conditions.go:105] duration metric: took 1.709801ms to run NodePressure ...
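Note: the join sequence above (token created on the control plane, kubeadm join run on the new node, then kubelet enabled and started) can be replayed by hand when a worker fails to attach. A sketch under the binary path shown in this log; the token and hash must be freshly generated, and minikube's -n/--node flag selects the target node:

  # On the control plane: print a join command with a fresh token.
  minikube ssh -- 'sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm token create --print-join-command'
  # On the worker (m04 here): run the printed command plus the flags this log passes.
  # <fresh-token> and <hash> are placeholders for the values printed above.
  minikube ssh -n m04 -- 'sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token <fresh-token> --discovery-token-ca-cert-hash sha256:<hash> --cri-socket /var/run/cri-dockerd.sock --node-name=minikube-m04'
  # Then confirm the node registered and went Ready.
  kubectl get nodes -o wide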
I0501 22:07:44.196662 325086 start.go:228] waiting for startup goroutines ...
I0501 22:07:44.196674 325086 start.go:242] writing updated cluster config ...
I0501 22:07:44.196863 325086 ssh_runner.go:195] Run: rm -f paused
I0501 22:07:44.225170 325086 start.go:557] kubectl: 1.26.3, cluster: 1.26.3 (minor skew: 0)
I0501 22:07:44.225882 325086 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
*
* ==> Docker <==
*
-- Logs begin at Tue 2023-05-02 02:06:43 UTC, end at Tue 2023-05-02 02:10:15 UTC. --
May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.068351422Z" level=info msg="[core] [Channel #1] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.068360192Z" level=info msg="[core] [Channel #1] Channel authority set to \"localhost\"" module=grpc
May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.068443493Z" level=info msg="[core] [Channel #1] Resolver state updated: {\n \"Addresses\": [\n {\n \"Addr\": \"/run/containerd/containerd.sock\",\n \"ServerName\": \"\",\n \"Attributes\": {},\n \"BalancerAttributes\": null,\n \"Type\": 0,\n \"Metadata\": null\n }\n ],\n \"ServiceConfig\": null,\n \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.068483484Z" level=info msg="[core] [Channel #1] Channel switches to new LB policy \"pick_first\"" module=grpc
May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.068521334Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel created" module=grpc
May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.068550535Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING" module=grpc
May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.068581955Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.068590325Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.068740077Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.068755117Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.069091202Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.069101762Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.069117902Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.069126252Z"
level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.069150362Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n \"Addresses\": [\n {\n \"Addr\": \"/run/containerd/containerd.sock\",\n \"ServerName\": \"\",\n \"Attributes\": {},\n \"BalancerAttributes\": null,\n \"Type\": 0,\n \"Metadata\": null\n }\n ],\n \"ServiceConfig\": null,\n \"Attributes\": null\n} (resolver returned new addresses)" module=grpc May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.069170913Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.069189243Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.069211663Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.069235153Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.069240294Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.069365945Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.069380545Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.069602998Z" level=info msg="[graphdriver] trying configured driver: overlay2" May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.121488599Z" level=info msg="Loading containers: start." May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.644209036Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.734954319Z" level=info msg="Loading containers: done." May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.739290535Z" level=info msg="Docker daemon" commit=219f21b graphdriver=overlay2 version=23.0.2 May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.739313686Z" level=info msg="Daemon has completed initialization" May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.745134171Z" level=info msg="[core] [Server #7] Server created" module=grpc May 02 02:06:50 minikube systemd[1]: Started Docker Application Container Engine. May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.749017081Z" level=info msg="API listen on [::]:2376" May 02 02:06:50 minikube dockerd[915]: time="2023-05-02T02:06:50.751148018Z" level=info msg="API listen on /var/run/docker.sock" May 02 02:06:51 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine... 
May 02 02:06:51 minikube cri-dockerd[1148]: time="2023-05-02T02:06:51Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock" May 02 02:06:51 minikube cri-dockerd[1148]: time="2023-05-02T02:06:51Z" level=info msg="Start docker client with request timeout 0s" May 02 02:06:51 minikube cri-dockerd[1148]: time="2023-05-02T02:06:51Z" level=info msg="Hairpin mode is set to hairpin-veth" May 02 02:06:51 minikube cri-dockerd[1148]: time="2023-05-02T02:06:51Z" level=info msg="Loaded network plugin cni" May 02 02:06:51 minikube cri-dockerd[1148]: time="2023-05-02T02:06:51Z" level=info msg="Docker cri networking managed by network plugin cni" May 02 02:06:51 minikube cri-dockerd[1148]: time="2023-05-02T02:06:51Z" level=info msg="Docker Info: &{ID:a679bd64-8c94-41f0-b5be-9245a8529542 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:false NGoroutines:32 SystemTime:2023-05-02T02:06:51.082037306Z LoggingDriver:json-file CgroupDriver:systemd CgroupVersion:2 NEventsListener:0 KernelVersion:5.19.0-41-generic OperatingSystem:Ubuntu 20.04.5 LTS OSVersion:20.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0001d2460 NCPU:32 MemTotal:33557098496 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:minikube Labels:[provider=docker] ExperimentalBuild:false ServerVersion:23.0.2 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:} runc:{Path:runc Args:[] Shim:}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2806fc1057397dbaeefbea0e4e17bddfbd388f38 Expected:2806fc1057397dbaeefbea0e4e17bddfbd388f38} RuncCommit:{ID:v1.1.5-0-gf19387a Expected:v1.1.5-0-gf19387a} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: DefaultAddressPools:[] Warnings:[]}" May 02 02:06:51 minikube cri-dockerd[1148]: time="2023-05-02T02:06:51Z" level=info msg="Setting cgroupDriver systemd" May 02 02:06:51 minikube cri-dockerd[1148]: time="2023-05-02T02:06:51Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}" May 02 02:06:51 minikube cri-dockerd[1148]: time="2023-05-02T02:06:51Z" level=info msg="Starting the GRPC backend for the Docker CRI interface." May 02 02:06:51 minikube cri-dockerd[1148]: time="2023-05-02T02:06:51Z" level=info msg="Start cri-dockerd grpc backend" May 02 02:06:51 minikube systemd[1]: Started CRI Interface for Docker Application Container Engine. 
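Note: the Docker and cri-dockerd entries in this section are journald output captured inside the node container, so the same streams can be followed live while reproducing a problem. The unit names below are assumed from the service descriptions in this log (docker.service, plus a cri-docker unit for the CRI shim); verify them first if they differ:

  # List the runtime-related units inside the node, then tail their journals.
  minikube ssh -- 'systemctl list-units --type=service | grep -Ei "docker|cri"'
  minikube ssh -- 'sudo journalctl -u docker -u cri-docker --no-pager -n 50'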
May 02 02:06:54 minikube cri-dockerd[1148]: time="2023-05-02T02:06:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1bd169c514cc0c5d5596eb91d2e21f2c897ab020daa9ed8b14e1fb98c183e035/resolv.conf as [nameserver 192.168.49.1 options trust-ad ndots:0 edns0]" May 02 02:06:54 minikube cri-dockerd[1148]: time="2023-05-02T02:06:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98b142bcabafa795ae3eefabe168dbeb057a832798f518ceaf86ac54dd1b6b57/resolv.conf as [nameserver 192.168.49.1 options edns0 trust-ad ndots:0]" May 02 02:06:54 minikube cri-dockerd[1148]: time="2023-05-02T02:06:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c1f7c3a54c758d892995cf82fd19dbea80639a35dda4a8e1e87dfbe99d6239f7/resolv.conf as [nameserver 192.168.49.1 options trust-ad ndots:0 edns0]" May 02 02:06:54 minikube cri-dockerd[1148]: time="2023-05-02T02:06:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8971b5bfa703ffae8169dd7e3baaccafd467a63be25eec92426829b79536fcc6/resolv.conf as [nameserver 192.168.49.1 options edns0 trust-ad ndots:0]" May 02 02:07:11 minikube cri-dockerd[1148]: time="2023-05-02T02:07:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ae5afb0ad951073a881440b3e5d9608cfe3402ab107e7449b9a78a937c500495/resolv.conf as [nameserver 192.168.49.1 options ndots:0 edns0 trust-ad]" May 02 02:07:11 minikube cri-dockerd[1148]: time="2023-05-02T02:07:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5e74b8ce1a305e20e169d6cce1609582deafda02cac207dd41487504e1037b4e/resolv.conf as [nameserver 192.168.49.1 options edns0 trust-ad ndots:0]" May 02 02:07:11 minikube cri-dockerd[1148]: time="2023-05-02T02:07:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d5595553b9acdf44ad44e9e3120f092b4272864c94dc8754b0b2992928584062/resolv.conf as [nameserver 192.168.49.1 options edns0 trust-ad ndots:0]" May 02 02:07:12 minikube cri-dockerd[1148]: time="2023-05-02T02:07:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/480c11f64f158fdf7a5c280667891fb3d265a7d1a8b20219f94241d06f546dad/resolv.conf as [nameserver 192.168.49.1 options ndots:0 edns0 trust-ad]" May 02 02:07:12 minikube cri-dockerd[1148]: time="2023-05-02T02:07:12Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-787d4945fb-p545d_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1" May 02 02:07:13 minikube cri-dockerd[1148]: time="2023-05-02T02:07:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-787d4945fb-p545d_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1" May 02 02:07:17 minikube cri-dockerd[1148]: time="2023-05-02T02:07:17Z" level=info msg="Stop pulling image kindest/kindnetd:v20230330-48f316cd: Status: Downloaded newer image for kindest/kindnetd:v20230330-48f316cd" May 02 02:07:18 minikube cri-dockerd[1148]: time="2023-05-02T02:07:18Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}" May 02 02:07:25 minikube dockerd[915]: time="2023-05-02T02:07:25.645352660Z" level=info msg="ignoring event" container=61816aa724ee5088e948392bc082eaefe72b898f25faf3b6fa38ebe053d27186 
module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 02 02:07:25 minikube dockerd[915]: time="2023-05-02T02:07:25.859602375Z" level=info msg="ignoring event" container=480c11f64f158fdf7a5c280667891fb3d265a7d1a8b20219f94241d06f546dad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 02 02:07:26 minikube cri-dockerd[1148]: time="2023-05-02T02:07:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4e7ec75c9ff345a2417265afd90410a8f06bfb5a0011485228aba10a8dd05e25/resolv.conf as [nameserver 192.168.49.1 options edns0 trust-ad ndots:0]" May 02 02:07:41 minikube dockerd[915]: time="2023-05-02T02:07:41.582624182Z" level=info msg="ignoring event" container=2caf0668640126d3b2fd29dbc216b3c7fb272c5d1069ed5f0b878d888dacb742 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 36a39da679b6b 6e38f40d628db 2 minutes ago Running storage-provisioner 1 ae5afb0ad9510 764053e8d2990 5185b96f0becf 2 minutes ago Running coredns 1 4e7ec75c9ff34 13104796f51e2 kindest/kindnetd@sha256:c19d6362a6a928139820761475a38c24c0cf84d507b9ddf414a078cf627497af 2 minutes ago Running kindnet-cni 0 d5595553b9acd 61816aa724ee5 5185b96f0becf 3 minutes ago Exited coredns 0 480c11f64f158 1a44ca120c6e9 92ed2bec97a63 3 minutes ago Running kube-proxy 0 5e74b8ce1a305 2caf066864012 6e38f40d628db 3 minutes ago Exited storage-provisioner 0 ae5afb0ad9510 87549a5f84c9d 5a79047369329 3 minutes ago Running kube-scheduler 0 8971b5bfa703f ad764a11ecc4d ce8c2293ef09c 3 minutes ago Running kube-controller-manager 0 c1f7c3a54c758 fc050cdd0af4c fce326961ae2d 3 minutes ago Running etcd 0 98b142bcabafa b047e1e72add2 1d9b3cbae03ce 3 minutes ago Running kube-apiserver 0 1bd169c514cc0 * * ==> coredns [61816aa724ee] <== * [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API .:53 [INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86 CoreDNS-1.9.3 linux/amd64, go1.18.2, 45b0a11 [INFO] plugin/health: Going into lameduck mode for 5s [WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable [INFO] 127.0.0.1:42168 - 35298 "HINFO IN 8239855964166821932.6284800056064882157. udp 57 false 512" - - 0 5.000084725s [ERROR] plugin/errors: 2 8239855964166821932.6284800056064882157. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable [INFO] 127.0.0.1:52228 - 58399 "HINFO IN 8239855964166821932.6284800056064882157. 
udp 57 false 512" - - 0 5.000056538s [ERROR] plugin/errors: 2 8239855964166821932.6284800056064882157. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable * * ==> coredns [764053e8d299] <== * .:53 [INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86 CoreDNS-1.9.3 linux/amd64, go1.18.2, 45b0a11 [INFO] 127.0.0.1:32786 - 31142 "HINFO IN 4954081318664076127.2409312608279837577. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.158980253s * * ==> describe nodes <== * Name: minikube Roles: control-plane Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=ba4594e7b78814fd52a9376decb9c3d59c133712 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2023_05_01T22_06_58_0700 minikube.k8s.io/version=v1.30.0 node-role.kubernetes.io/control-plane= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Tue, 02 May 2023 02:06:55 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Tue, 02 May 2023 02:10:13 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Tue, 02 May 2023 02:07:29 +0000 Tue, 02 May 2023 02:06:54 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Tue, 02 May 2023 02:07:29 +0000 Tue, 02 May 2023 02:06:54 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Tue, 02 May 2023 02:07:29 +0000 Tue, 02 May 2023 02:06:54 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Tue, 02 May 2023 02:07:29 +0000 Tue, 02 May 2023 02:06:56 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.2 Hostname: minikube Capacity: cpu: 32 ephemeral-storage: 702277920Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32770604Ki pods: 110 Allocatable: cpu: 32 ephemeral-storage: 702277920Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32770604Ki pods: 110 System Info: Machine ID: 36d3aacc54a5484eb1c230b6c41171d5 System UUID: 6cfe36cc-3adc-4948-82b8-4556ce92381d Boot ID: 8128af70-48d5-4dda-8c89-0bbd1de713c8 Kernel Version: 5.19.0-41-generic OS Image: Ubuntu 20.04.5 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://23.0.2 Kubelet Version: v1.26.3 Kube-Proxy Version: v1.26.3 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- kube-system coredns-787d4945fb-p545d 100m (0%!)(MISSING) 0 (0%!)(MISSING) 70Mi (0%!)(MISSING) 170Mi (0%!)(MISSING) 3m4s kube-system etcd-minikube 100m (0%!)(MISSING) 0 (0%!)(MISSING) 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) 3m18s kube-system kindnet-wtln2 100m (0%!)(MISSING) 100m (0%!)(MISSING) 50Mi (0%!)(MISSING) 50Mi (0%!)(MISSING) 3m4s kube-system kube-apiserver-minikube 250m (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m18s kube-system kube-controller-manager-minikube 200m (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 
(0%!)(MISSING) 3m17s kube-system kube-proxy-4t8ss 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m4s kube-system kube-scheduler-minikube 100m (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m18s kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 3m16s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 850m (2%!)(MISSING) 100m (0%!)(MISSING) memory 220Mi (0%!)(MISSING) 220Mi (0%!)(MISSING) ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 3m3s kube-proxy Normal NodeHasSufficientMemory 3m22s (x5 over 3m22s) kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 3m22s (x4 over 3m22s) kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 3m22s (x4 over 3m22s) kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 3m22s kubelet Updated Node Allocatable limit across pods Normal Starting 3m17s kubelet Starting kubelet. Normal NodeAllocatableEnforced 3m17s kubelet Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 3m17s kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 3m17s kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 3m17s kubelet Node minikube status is now: NodeHasSufficientPID Normal RegisteredNode 3m5s node-controller Node minikube event: Registered Node minikube in Controller Name: minikube-m02 Roles: Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube-m02 kubernetes.io/os=linux Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Tue, 02 May 2023 02:07:16 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube-m02 AcquireTime: RenewTime: Tue, 02 May 2023 02:10:10 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Tue, 02 May 2023 02:07:47 +0000 Tue, 02 May 2023 02:07:16 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Tue, 02 May 2023 02:07:47 +0000 Tue, 02 May 2023 02:07:16 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Tue, 02 May 2023 02:07:47 +0000 Tue, 02 May 2023 02:07:16 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Tue, 02 May 2023 02:07:47 +0000 Tue, 02 May 2023 02:07:17 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.3 Hostname: minikube-m02 Capacity: cpu: 32 ephemeral-storage: 702277920Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32770604Ki pods: 110 Allocatable: cpu: 32 ephemeral-storage: 702277920Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32770604Ki pods: 110 System Info: Machine ID: 36d3aacc54a5484eb1c230b6c41171d5 System UUID: 42a081cd-9504-41c4-86fc-d3706a81a90c Boot ID: 8128af70-48d5-4dda-8c89-0bbd1de713c8 Kernel Version: 5.19.0-41-generic OS Image: Ubuntu 20.04.5 LTS Operating System: linux Architecture: amd64 Container Runtime Version: 
docker://23.0.2 Kubelet Version: v1.26.3 Kube-Proxy Version: v1.26.3 PodCIDR: 10.244.1.0/24 PodCIDRs: 10.244.1.0/24 Non-terminated Pods: (2 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- kube-system kindnet-ckpjr 100m (0%!)(MISSING) 100m (0%!)(MISSING) 50Mi (0%!)(MISSING) 50Mi (0%!)(MISSING) 2m59s kube-system kube-proxy-hfmqw 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2m59s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 100m (0%!)(MISSING) 100m (0%!)(MISSING) memory 50Mi (0%!)(MISSING) 50Mi (0%!)(MISSING) ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 2m56s kube-proxy Normal Starting 2m59s kubelet Starting kubelet. Normal NodeHasSufficientMemory 2m59s (x2 over 2m59s) kubelet Node minikube-m02 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 2m59s (x2 over 2m59s) kubelet Node minikube-m02 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 2m59s (x2 over 2m59s) kubelet Node minikube-m02 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 2m59s kubelet Updated Node Allocatable limit across pods Normal NodeReady 2m58s kubelet Node minikube-m02 status is now: NodeReady Normal RegisteredNode 2m55s node-controller Node minikube-m02 event: Registered Node minikube-m02 in Controller Name: minikube-m03 Roles: Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube-m03 kubernetes.io/os=linux Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Tue, 02 May 2023 02:07:29 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube-m03 AcquireTime: RenewTime: Tue, 02 May 2023 02:10:12 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Tue, 02 May 2023 02:08:00 +0000 Tue, 02 May 2023 02:07:29 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Tue, 02 May 2023 02:08:00 +0000 Tue, 02 May 2023 02:07:29 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Tue, 02 May 2023 02:08:00 +0000 Tue, 02 May 2023 02:07:29 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Tue, 02 May 2023 02:08:00 +0000 Tue, 02 May 2023 02:07:30 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.4 Hostname: minikube-m03 Capacity: cpu: 32 ephemeral-storage: 702277920Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32770604Ki pods: 110 Allocatable: cpu: 32 ephemeral-storage: 702277920Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32770604Ki pods: 110 System Info: Machine ID: 36d3aacc54a5484eb1c230b6c41171d5 System UUID: 48e3a968-36c8-4d6a-9f1c-09b0dadb077e Boot ID: 8128af70-48d5-4dda-8c89-0bbd1de713c8 Kernel Version: 5.19.0-41-generic OS Image: Ubuntu 20.04.5 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://23.0.2 Kubelet Version: v1.26.3 Kube-Proxy Version: v1.26.3 PodCIDR: 10.244.2.0/24 PodCIDRs: 10.244.2.0/24 
Non-terminated Pods: (2 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- kube-system kindnet-c24jd 100m (0%!)(MISSING) 100m (0%!)(MISSING) 50Mi (0%!)(MISSING) 50Mi (0%!)(MISSING) 2m46s kube-system kube-proxy-nqv7x 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2m46s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 100m (0%!)(MISSING) 100m (0%!)(MISSING) memory 50Mi (0%!)(MISSING) 50Mi (0%!)(MISSING) ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 2m43s kube-proxy Normal Starting 2m46s kubelet Starting kubelet. Normal NodeHasSufficientMemory 2m46s (x2 over 2m46s) kubelet Node minikube-m03 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 2m46s (x2 over 2m46s) kubelet Node minikube-m03 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 2m46s (x2 over 2m46s) kubelet Node minikube-m03 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 2m46s kubelet Updated Node Allocatable limit across pods Normal RegisteredNode 2m45s node-controller Node minikube-m03 event: Registered Node minikube-m03 in Controller Normal NodeReady 2m45s kubelet Node minikube-m03 status is now: NodeReady Name: minikube-m04 Roles: Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube-m04 kubernetes.io/os=linux Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Tue, 02 May 2023 02:07:43 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube-m04 AcquireTime: RenewTime: Tue, 02 May 2023 02:10:05 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Tue, 02 May 2023 02:08:13 +0000 Tue, 02 May 2023 02:07:42 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Tue, 02 May 2023 02:08:13 +0000 Tue, 02 May 2023 02:07:42 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Tue, 02 May 2023 02:08:13 +0000 Tue, 02 May 2023 02:07:42 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Tue, 02 May 2023 02:08:13 +0000 Tue, 02 May 2023 02:07:43 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.5 Hostname: minikube-m04 Capacity: cpu: 32 ephemeral-storage: 702277920Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32770604Ki pods: 110 Allocatable: cpu: 32 ephemeral-storage: 702277920Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32770604Ki pods: 110 System Info: Machine ID: 36d3aacc54a5484eb1c230b6c41171d5 System UUID: 1b75529d-9ae8-4a6b-a430-8b477936bfde Boot ID: 8128af70-48d5-4dda-8c89-0bbd1de713c8 Kernel Version: 5.19.0-41-generic OS Image: Ubuntu 20.04.5 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://23.0.2 Kubelet Version: v1.26.3 Kube-Proxy Version: v1.26.3 PodCIDR: 10.244.3.0/24 PodCIDRs: 10.244.3.0/24 Non-terminated Pods: (2 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- 
---- ------------ ---------- --------------- ------------- --- kube-system kindnet-q5c94 100m (0%!)(MISSING) 100m (0%!)(MISSING) 50Mi (0%!)(MISSING) 50Mi (0%!)(MISSING) 2m32s kube-system kube-proxy-kn4vq 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 2m32s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 100m (0%!)(MISSING) 100m (0%!)(MISSING) memory 50Mi (0%!)(MISSING) 50Mi (0%!)(MISSING) ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 2m30s kube-proxy Normal Starting 2m33s kubelet Starting kubelet. Normal NodeHasSufficientMemory 2m33s (x2 over 2m33s) kubelet Node minikube-m04 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 2m33s (x2 over 2m33s) kubelet Node minikube-m04 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 2m33s (x2 over 2m33s) kubelet Node minikube-m04 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 2m33s kubelet Updated Node Allocatable limit across pods Normal NodeReady 2m32s kubelet Node minikube-m04 status is now: NodeReady Normal RegisteredNode 2m30s node-controller Node minikube-m04 event: Registered Node minikube-m04 in Controller * * ==> dmesg <== * [May 2 02:04] kauditd_printk_skb: 292 callbacks suppressed [ +5.001195] kauditd_printk_skb: 275 callbacks suppressed [ +5.015544] kauditd_printk_skb: 275 callbacks suppressed [ +5.003838] kauditd_printk_skb: 516 callbacks suppressed [ +5.011104] kauditd_printk_skb: 34 callbacks suppressed [ +5.000762] kauditd_printk_skb: 285 callbacks suppressed [ +5.013542] kauditd_printk_skb: 265 callbacks suppressed [ +5.002320] kauditd_printk_skb: 439 callbacks suppressed [ +5.003930] kauditd_printk_skb: 337 callbacks suppressed [ +5.003935] kauditd_printk_skb: 231 callbacks suppressed [ +5.014744] kauditd_printk_skb: 93 callbacks suppressed [May 2 02:05] kauditd_printk_skb: 275 callbacks suppressed [ +5.003445] kauditd_printk_skb: 522 callbacks suppressed [ +5.005183] kauditd_printk_skb: 295 callbacks suppressed [ +5.010795] kauditd_printk_skb: 8 callbacks suppressed [ +5.003796] kauditd_printk_skb: 537 callbacks suppressed [ +5.004169] kauditd_printk_skb: 282 callbacks suppressed [ +9.995809] kauditd_printk_skb: 291 callbacks suppressed [ +5.003764] kauditd_printk_skb: 487 callbacks suppressed [ +5.004475] kauditd_printk_skb: 332 callbacks suppressed [ +10.011493] kauditd_printk_skb: 291 callbacks suppressed [May 2 02:06] kauditd_printk_skb: 540 callbacks suppressed [ +9.996982] kauditd_printk_skb: 295 callbacks suppressed [ +5.014963] kauditd_printk_skb: 277 callbacks suppressed [ +10.001447] kauditd_printk_skb: 564 callbacks suppressed [ +5.002060] kauditd_printk_skb: 445 callbacks suppressed [ +10.003102] kauditd_printk_skb: 391 callbacks suppressed [ +5.000819] kauditd_printk_skb: 428 callbacks suppressed [ +5.003844] kauditd_printk_skb: 360 callbacks suppressed [ +5.004720] kauditd_printk_skb: 339 callbacks suppressed [May 2 02:07] kauditd_printk_skb: 291 callbacks suppressed [ +5.003405] kauditd_printk_skb: 289 callbacks suppressed [ +5.001742] kauditd_printk_skb: 291 callbacks suppressed [ +5.003692] kauditd_printk_skb: 561 callbacks suppressed [ +10.002024] kauditd_printk_skb: 326 callbacks suppressed [ +5.003648] kauditd_printk_skb: 298 callbacks suppressed [ +5.002042] 
kauditd_printk_skb: 583 callbacks suppressed [ +9.993391] kauditd_printk_skb: 302 callbacks suppressed [ +5.002407] kauditd_printk_skb: 360 callbacks suppressed [May 2 02:08] kauditd_printk_skb: 475 callbacks suppressed [ +5.000850] kauditd_printk_skb: 275 callbacks suppressed [ +5.003941] kauditd_printk_skb: 522 callbacks suppressed [ +5.010980] kauditd_printk_skb: 28 callbacks suppressed [ +5.000896] kauditd_printk_skb: 275 callbacks suppressed [ +5.003780] kauditd_printk_skb: 386 callbacks suppressed [ +9.999517] kauditd_printk_skb: 449 callbacks suppressed [ +5.000307] kauditd_printk_skb: 361 callbacks suppressed [ +5.004107] kauditd_printk_skb: 454 callbacks suppressed [May 2 02:09] kauditd_printk_skb: 295 callbacks suppressed [ +5.000764] kauditd_printk_skb: 315 callbacks suppressed [ +5.003944] kauditd_printk_skb: 477 callbacks suppressed [ +9.996300] kauditd_printk_skb: 321 callbacks suppressed [ +5.003527] kauditd_printk_skb: 526 callbacks suppressed [ +5.003930] kauditd_printk_skb: 179 callbacks suppressed [ +9.996185] kauditd_printk_skb: 406 callbacks suppressed [ +5.003650] kauditd_printk_skb: 498 callbacks suppressed [ +9.995862] kauditd_printk_skb: 337 callbacks suppressed [May 2 02:10] kauditd_printk_skb: 275 callbacks suppressed [ +5.002899] kauditd_printk_skb: 518 callbacks suppressed [ +5.003988] kauditd_printk_skb: 249 callbacks suppressed * * ==> etcd [fc050cdd0af4] <== * {"level":"info","ts":"2023-05-02T02:06:54.067Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]} {"level":"info","ts":"2023-05-02T02:06:54.067Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]} {"level":"info","ts":"2023-05-02T02:06:54.067Z","caller":"embed/etcd.go:484","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2023-05-02T02:06:54.067Z","caller":"embed/etcd.go:132","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]} {"level":"info","ts":"2023-05-02T02:06:54.067Z","caller":"embed/etcd.go:306","msg":"starting an etcd 
server","etcd-version":"3.5.6","git-sha":"cecbe35ce","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":32,"max-cpu-available":32,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2023-05-02T02:06:54.068Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"632.248µs"} {"level":"info","ts":"2023-05-02T02:06:54.070Z","caller":"etcdserver/raft.go:494","msg":"starting local member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"} {"level":"info","ts":"2023-05-02T02:06:54.070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"} {"level":"info","ts":"2023-05-02T02:06:54.070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"} {"level":"info","ts":"2023-05-02T02:06:54.070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2023-05-02T02:06:54.070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"} {"level":"info","ts":"2023-05-02T02:06:54.070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"} {"level":"warn","ts":"2023-05-02T02:06:54.071Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2023-05-02T02:06:54.072Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2023-05-02T02:06:54.072Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2023-05-02T02:06:54.073Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.6","cluster-version":"to_be_decided"} {"level":"info","ts":"2023-05-02T02:06:54.073Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2023-05-02T02:06:54.073Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} {"level":"info","ts":"2023-05-02T02:06:54.073Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} {"level":"info","ts":"2023-05-02T02:06:54.073Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} {"level":"info","ts":"2023-05-02T02:06:54.074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"} {"level":"info","ts":"2023-05-02T02:06:54.074Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]} {"level":"info","ts":"2023-05-02T02:06:54.074Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2023-05-02T02:06:54.074Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"} {"level":"info","ts":"2023-05-02T02:06:54.074Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"} {"level":"info","ts":"2023-05-02T02:06:54.074Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]} {"level":"info","ts":"2023-05-02T02:06:54.074Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"} {"level":"info","ts":"2023-05-02T02:06:54.770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"} {"level":"info","ts":"2023-05-02T02:06:54.770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"} {"level":"info","ts":"2023-05-02T02:06:54.770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"} {"level":"info","ts":"2023-05-02T02:06:54.770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"} {"level":"info","ts":"2023-05-02T02:06:54.770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"} {"level":"info","ts":"2023-05-02T02:06:54.770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"} {"level":"info","ts":"2023-05-02T02:06:54.770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"} {"level":"info","ts":"2023-05-02T02:06:54.771Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through 
raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"} {"level":"info","ts":"2023-05-02T02:06:54.771Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"} {"level":"info","ts":"2023-05-02T02:06:54.771Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2023-05-02T02:06:54.771Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"} {"level":"info","ts":"2023-05-02T02:06:54.771Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} {"level":"info","ts":"2023-05-02T02:06:54.771Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} {"level":"info","ts":"2023-05-02T02:06:54.771Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"} {"level":"info","ts":"2023-05-02T02:06:54.771Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2023-05-02T02:06:54.771Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2023-05-02T02:06:54.772Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"} {"level":"info","ts":"2023-05-02T02:06:54.772Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"} * * ==> kernel <== * 02:10:15 up 2:32, 0 users, load average: 0.81, 1.12, 0.87 Linux minikube 5.19.0-41-generic #42~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 18 17:40:00 UTC 2 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.5 LTS" * * ==> kindnet [13104796f51e] <== * I0502 02:08:58.359243 1 main.go:223] Handling node with IPs: map[192.168.49.4:{}] I0502 02:08:58.359251 1 main.go:250] Node minikube-m03 has CIDR [10.244.2.0/24] I0502 02:08:58.359285 1 main.go:223] Handling node with IPs: map[192.168.49.5:{}] I0502 02:08:58.359290 1 main.go:250] Node minikube-m04 has CIDR [10.244.3.0/24] I0502 02:09:08.363053 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}] I0502 02:09:08.363068 1 main.go:227] handling current node I0502 02:09:08.363075 1 main.go:223] Handling node with IPs: map[192.168.49.3:{}] I0502 02:09:08.363079 1 main.go:250] Node minikube-m02 has CIDR [10.244.1.0/24] I0502 02:09:08.363154 1 main.go:223] Handling node with IPs: map[192.168.49.4:{}] I0502 02:09:08.363160 1 main.go:250] Node minikube-m03 has CIDR [10.244.2.0/24] I0502 02:09:08.363193 1 main.go:223] Handling node with IPs: map[192.168.49.5:{}] I0502 02:09:08.363200 1 main.go:250] Node minikube-m04 has CIDR [10.244.3.0/24] I0502 02:09:18.366547 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}] I0502 02:09:18.366563 1 main.go:227] handling current node I0502 02:09:18.366572 1 main.go:223] Handling node with IPs: map[192.168.49.3:{}] I0502 02:09:18.366576 1 main.go:250] Node minikube-m02 has CIDR [10.244.1.0/24] I0502 02:09:18.366650 1 main.go:223] Handling node with IPs: map[192.168.49.4:{}] I0502 02:09:18.366655 1 main.go:250] Node minikube-m03 has CIDR [10.244.2.0/24] I0502 02:09:18.366685 1 main.go:223] Handling node with IPs: map[192.168.49.5:{}] I0502 02:09:18.366690 1 main.go:250] Node minikube-m04 has CIDR [10.244.3.0/24] I0502 02:09:28.369323 1 
main.go:223] Handling node with IPs: map[192.168.49.2:{}] I0502 02:09:28.369336 1 main.go:227] handling current node I0502 02:09:28.369342 1 main.go:223] Handling node with IPs: map[192.168.49.3:{}] I0502 02:09:28.369346 1 main.go:250] Node minikube-m02 has CIDR [10.244.1.0/24] I0502 02:09:28.369402 1 main.go:223] Handling node with IPs: map[192.168.49.4:{}] I0502 02:09:28.369406 1 main.go:250] Node minikube-m03 has CIDR [10.244.2.0/24] I0502 02:09:28.369429 1 main.go:223] Handling node with IPs: map[192.168.49.5:{}] I0502 02:09:28.369434 1 main.go:250] Node minikube-m04 has CIDR [10.244.3.0/24] I0502 02:09:38.380965 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}] I0502 02:09:38.380985 1 main.go:227] handling current node I0502 02:09:38.380997 1 main.go:223] Handling node with IPs: map[192.168.49.3:{}] I0502 02:09:38.381003 1 main.go:250] Node minikube-m02 has CIDR [10.244.1.0/24] I0502 02:09:38.381090 1 main.go:223] Handling node with IPs: map[192.168.49.4:{}] I0502 02:09:38.381097 1 main.go:250] Node minikube-m03 has CIDR [10.244.2.0/24] I0502 02:09:38.381132 1 main.go:223] Handling node with IPs: map[192.168.49.5:{}] I0502 02:09:38.381139 1 main.go:250] Node minikube-m04 has CIDR [10.244.3.0/24] I0502 02:09:48.392463 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}] I0502 02:09:48.392479 1 main.go:227] handling current node I0502 02:09:48.392487 1 main.go:223] Handling node with IPs: map[192.168.49.3:{}] I0502 02:09:48.392491 1 main.go:250] Node minikube-m02 has CIDR [10.244.1.0/24] I0502 02:09:48.392560 1 main.go:223] Handling node with IPs: map[192.168.49.4:{}] I0502 02:09:48.392565 1 main.go:250] Node minikube-m03 has CIDR [10.244.2.0/24] I0502 02:09:48.392592 1 main.go:223] Handling node with IPs: map[192.168.49.5:{}] I0502 02:09:48.392596 1 main.go:250] Node minikube-m04 has CIDR [10.244.3.0/24] I0502 02:09:58.401053 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}] I0502 02:09:58.401071 1 main.go:227] handling current node I0502 02:09:58.401081 1 main.go:223] Handling node with IPs: map[192.168.49.3:{}] I0502 02:09:58.401086 1 main.go:250] Node minikube-m02 has CIDR [10.244.1.0/24] I0502 02:09:58.401167 1 main.go:223] Handling node with IPs: map[192.168.49.4:{}] I0502 02:09:58.401178 1 main.go:250] Node minikube-m03 has CIDR [10.244.2.0/24] I0502 02:09:58.401212 1 main.go:223] Handling node with IPs: map[192.168.49.5:{}] I0502 02:09:58.401219 1 main.go:250] Node minikube-m04 has CIDR [10.244.3.0/24] I0502 02:10:08.412199 1 main.go:223] Handling node with IPs: map[192.168.49.2:{}] I0502 02:10:08.412211 1 main.go:227] handling current node I0502 02:10:08.412217 1 main.go:223] Handling node with IPs: map[192.168.49.3:{}] I0502 02:10:08.412220 1 main.go:250] Node minikube-m02 has CIDR [10.244.1.0/24] I0502 02:10:08.412276 1 main.go:223] Handling node with IPs: map[192.168.49.4:{}] I0502 02:10:08.412280 1 main.go:250] Node minikube-m03 has CIDR [10.244.2.0/24] I0502 02:10:08.412301 1 main.go:223] Handling node with IPs: map[192.168.49.5:{}] I0502 02:10:08.412304 1 main.go:250] Node minikube-m04 has CIDR [10.244.3.0/24] * * ==> kube-apiserver [b047e1e72add] <== * W0502 02:06:55.226794 1 genericapiserver.go:660] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources. 
I0502 02:06:55.609790 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0502 02:06:55.609864 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0502 02:06:55.609914 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I0502 02:06:55.609994 1 secure_serving.go:210] Serving securely on [::]:8443
I0502 02:06:55.610047 1 controller.go:83] Starting OpenAPI AggregationController
I0502 02:06:55.610050 1 autoregister_controller.go:141] Starting autoregister controller
I0502 02:06:55.610056 1 available_controller.go:494] Starting AvailableConditionController
I0502 02:06:55.610059 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0502 02:06:55.610064 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0502 02:06:55.610087 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0502 02:06:55.610090 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0502 02:06:55.610096 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
I0502 02:06:55.610161 1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I0502 02:06:55.610281 1 customresource_discovery_controller.go:288] Starting DiscoveryController
I0502 02:06:55.610322 1 controller.go:85] Starting OpenAPI controller
I0502 02:06:55.610350 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0502 02:06:55.610352 1 establishing_controller.go:76] Starting EstablishingController
I0502 02:06:55.610365 1 crd_finalizer.go:266] Starting CRDFinalizer
I0502 02:06:55.610375 1 controller.go:85] Starting OpenAPI V3 controller
I0502 02:06:55.610351 1 apf_controller.go:361] Starting API Priority and Fairness config controller
I0502 02:06:55.610396 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0502 02:06:55.610398 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0502 02:06:55.610404 1 naming_controller.go:291] Starting NamingConditionController
I0502 02:06:55.610405 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0502 02:06:55.610499 1 controller.go:121] Starting legacy_token_tracking_controller
I0502 02:06:55.610512 1 shared_informer.go:273] Waiting for caches to sync for configmaps
I0502 02:06:55.610597 1 gc_controller.go:78] Starting apiserver lease garbage collector
I0502 02:06:55.610651 1 controller.go:80] Starting OpenAPI V3 AggregationController
I0502 02:06:55.610684 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0502 02:06:55.610688 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0502 02:06:55.610692 1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
I0502 02:06:55.610695 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0502 02:06:55.615791 1 controller.go:615] quota admission added evaluator for: namespaces
I0502 02:06:55.665904 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0502 02:06:55.710936 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0502 02:06:55.710949 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0502 02:06:55.710956 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0502 02:06:55.710959 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0502 02:06:55.710962 1 cache.go:39] Caches are synced for autoregister controller
I0502 02:06:55.710985 1 shared_informer.go:280] Caches are synced for configmaps
I0502 02:06:55.710991 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0502 02:06:55.711010 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0502 02:06:55.712986 1 shared_informer.go:280] Caches are synced for node_authorizer
I0502 02:06:56.471778 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0502 02:06:56.612655 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0502 02:06:56.614232 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0502 02:06:56.614238 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0502 02:06:56.794399 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0502 02:06:56.810087 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0502 02:06:56.920763 1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0502 02:06:56.924534 1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0502 02:06:56.925017 1 controller.go:615] quota admission added evaluator for: endpoints
I0502 02:06:56.926942 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0502 02:06:57.624309 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0502 02:06:58.351350 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0502 02:06:58.355953 1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0502 02:06:58.359654 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0502 02:07:11.129167 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
I0502 02:07:11.228865 1 controller.go:615] quota admission added evaluator for: replicasets.apps
*
* ==> kube-controller-manager [ad764a11ecc4] <==
*
I0502 02:07:10.833262 1 shared_informer.go:280] Caches are synced for attach detach
I0502 02:07:10.837598 1 shared_informer.go:280] Caches are synced for bootstrap_signer
I0502 02:07:10.840496 1 shared_informer.go:280] Caches are synced for expand
I0502 02:07:10.851207 1 shared_informer.go:280] Caches are synced for namespace
I0502 02:07:10.875467 1 shared_informer.go:280] Caches are synced for service account
I0502 02:07:10.875473 1 shared_informer.go:280] Caches are synced for persistent volume
I0502 02:07:10.875483 1 shared_informer.go:280] Caches are synced for job
I0502 02:07:10.875491 1 shared_informer.go:280] Caches are synced for cronjob
I0502 02:07:10.875504 1 shared_informer.go:280] Caches are synced for HPA
I0502 02:07:10.875533 1 shared_informer.go:280] Caches are synced for crt configmap
I0502 02:07:10.875539 1 shared_informer.go:280] Caches are synced for ClusterRoleAggregator
I0502 02:07:10.881285 1 shared_informer.go:280] Caches are synced for node
I0502 02:07:10.881302 1 range_allocator.go:167] Sending events to api server.
I0502 02:07:10.881316 1 range_allocator.go:171] Starting range CIDR allocator
I0502 02:07:10.881320 1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
I0502 02:07:10.881325 1 shared_informer.go:280] Caches are synced for cidrallocator
I0502 02:07:10.884391 1 range_allocator.go:372] Set node minikube PodCIDR to [10.244.0.0/24]
I0502 02:07:10.890820 1 shared_informer.go:280] Caches are synced for ReplicationController
I0502 02:07:10.892999 1 shared_informer.go:280] Caches are synced for GC
I0502 02:07:10.896196 1 shared_informer.go:280] Caches are synced for daemon sets
I0502 02:07:10.897511 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-legacy-unknown
I0502 02:07:10.897523 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0502 02:07:10.897531 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-serving
I0502 02:07:10.897539 1 shared_informer.go:280] Caches are synced for certificate-csrsigning-kubelet-client
I0502 02:07:10.900632 1 shared_informer.go:280] Caches are synced for deployment
I0502 02:07:11.035691 1 shared_informer.go:280] Caches are synced for endpoint
I0502 02:07:11.037813 1 shared_informer.go:280] Caches are synced for resource quota
I0502 02:07:11.085202 1 shared_informer.go:280] Caches are synced for endpoint_slice
I0502 02:07:11.088333 1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
I0502 02:07:11.103524 1 shared_informer.go:280] Caches are synced for resource quota
I0502 02:07:11.133015 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-wtln2"
I0502 02:07:11.133541 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4t8ss"
I0502 02:07:11.230298 1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 1"
I0502 02:07:11.408875 1 shared_informer.go:280] Caches are synced for garbage collector
I0502 02:07:11.474230 1 shared_informer.go:280] Caches are synced for garbage collector
I0502 02:07:11.474294 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0502 02:07:11.483334 1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-p545d"
W0502 02:07:16.750125 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube-m02" does not exist
I0502 02:07:16.753264 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hfmqw"
I0502 02:07:16.753867 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ckpjr"
I0502 02:07:16.755449 1 range_allocator.go:372] Set node minikube-m02 PodCIDR to [10.244.1.0/24]
W0502 02:07:17.408962 1 topologycache.go:232] Can't get CPU or zone information for minikube-m02 node
W0502 02:07:20.829974 1 node_lifecycle_controller.go:1053] Missing timestamp for Node minikube-m02. Assuming now as a timestamp.
I0502 02:07:20.829990 1 event.go:294] "Event occurred" object="minikube-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube-m02 event: Registered Node minikube-m02 in Controller"
W0502 02:07:29.847710 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube-m03" does not exist
W0502 02:07:29.847745 1 topologycache.go:232] Can't get CPU or zone information for minikube-m02 node
I0502 02:07:29.850790 1 range_allocator.go:372] Set node minikube-m03 PodCIDR to [10.244.2.0/24]
I0502 02:07:29.852353 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-c24jd"
I0502 02:07:29.852781 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nqv7x"
W0502 02:07:30.509727 1 topologycache.go:232] Can't get CPU or zone information for minikube-m02 node
W0502 02:07:30.830695 1 node_lifecycle_controller.go:1053] Missing timestamp for Node minikube-m03. Assuming now as a timestamp.
I0502 02:07:30.830701 1 event.go:294] "Event occurred" object="minikube-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube-m03 event: Registered Node minikube-m03 in Controller"
W0502 02:07:43.088092 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube-m04" does not exist
W0502 02:07:43.088123 1 topologycache.go:232] Can't get CPU or zone information for minikube-m02 node
I0502 02:07:43.091343 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kn4vq"
I0502 02:07:43.092009 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-q5c94"
I0502 02:07:43.092100 1 range_allocator.go:372] Set node minikube-m04 PodCIDR to [10.244.3.0/24]
W0502 02:07:43.745304 1 topologycache.go:232] Can't get CPU or zone information for minikube-m04 node
W0502 02:07:45.833160 1 node_lifecycle_controller.go:1053] Missing timestamp for Node minikube-m04. Assuming now as a timestamp.
I0502 02:07:45.833171 1 event.go:294] "Event occurred" object="minikube-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube-m04 event: Registered Node minikube-m04 in Controller"
*
* ==> kube-proxy [1a44ca120c6e] <==
*
I0502 02:07:11.588411 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0502 02:07:11.588443 1 server_others.go:109] "Detected node IP" address="192.168.49.2"
I0502 02:07:11.588453 1 server_others.go:535] "Using iptables proxy"
I0502 02:07:11.596064 1 server_others.go:176] "Using iptables Proxier"
I0502 02:07:11.596073 1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0502 02:07:11.596078 1 server_others.go:184] "Creating dualStackProxier for iptables"
I0502 02:07:11.596085 1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0502 02:07:11.596100 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0502 02:07:11.596245 1 server.go:655] "Version info" version="v1.26.3"
I0502 02:07:11.596251 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0502 02:07:11.596446 1 config.go:444] "Starting node config controller"
I0502 02:07:11.596449 1 config.go:226] "Starting endpoint slice config controller"
I0502 02:07:11.596456 1 shared_informer.go:273] Waiting for caches to sync for node config
I0502 02:07:11.596458 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0502 02:07:11.596471 1 config.go:317] "Starting service config controller"
I0502 02:07:11.596484 1 shared_informer.go:273] Waiting for caches to sync for service config
I0502 02:07:11.697336 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0502 02:07:11.697353 1 shared_informer.go:280] Caches are synced for node config
I0502 02:07:11.697332 1 shared_informer.go:280] Caches are synced for service config
*
* ==> kube-scheduler [87549a5f84c9] <==
*
I0502 02:06:54.349833 1 serving.go:348] Generated self-signed cert in-memory
W0502 02:06:55.616875 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0502 02:06:55.616905 1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0502 02:06:55.616919 1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
W0502 02:06:55.616926 1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0502 02:06:55.649807 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.3"
I0502 02:06:55.649816 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0502 02:06:55.650359 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0502 02:06:55.650381 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0502 02:06:55.650400 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0502 02:06:55.650413 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
W0502 02:06:55.651543 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0502 02:06:55.651565 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0502 02:06:55.651563 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0502 02:06:55.651574 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0502 02:06:55.651582 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0502 02:06:55.651592 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0502 02:06:55.651593 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0502 02:06:55.651598 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0502 02:06:55.651769 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0502 02:06:55.651775 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0502 02:06:55.651774 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0502 02:06:55.651795 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0502 02:06:55.651894 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0502 02:06:55.651906 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0502 02:06:55.651976 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0502 02:06:55.651984 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0502 02:06:55.652019 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0502 02:06:55.652020 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0502 02:06:55.652025 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0502 02:06:55.652030 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0502 02:06:55.652022 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0502 02:06:55.652039 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0502 02:06:55.652039 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0502 02:06:55.652046 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0502 02:06:55.652050 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0502 02:06:55.652053 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0502 02:06:55.652046 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0502 02:06:55.652060 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0502 02:06:55.652069 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0502 02:06:55.652071 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0502 02:06:56.475354 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0502 02:06:56.475375 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0502 02:06:56.485000 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0502 02:06:56.485012 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0502 02:06:56.581941 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0502 02:06:56.581966 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0502 02:06:56.640477 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0502 02:06:56.640494 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0502 02:06:56.649953 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0502 02:06:56.649968 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0502 02:06:56.661486 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0502 02:06:56.661497 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0502 02:06:56.713050 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0502 02:06:56.713065 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0502 02:06:57.150935 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
*
-- Logs begin at Tue 2023-05-02 02:06:43 UTC, end at Tue 2023-05-02 02:10:15 UTC. --
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.503418 2503 topology_manager.go:210] "Topology Admit Handler"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.503465 2503 topology_manager.go:210] "Topology Admit Handler"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.503488 2503 topology_manager.go:210] "Topology Admit Handler"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.503503 2503 topology_manager.go:210] "Topology Admit Handler"
May 02 02:06:58 minikube kubelet[2503]: E0502 02:06:58.583968 2503 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587203 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdcbce216c62c4407ac9a51ac013e7d7-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"cdcbce216c62c4407ac9a51ac013e7d7\") " pod="kube-system/kube-apiserver-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587226 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdcbce216c62c4407ac9a51ac013e7d7-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"cdcbce216c62c4407ac9a51ac013e7d7\") " pod="kube-system/kube-apiserver-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587241 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/466b9e73e627277a8c24637c2fa6442d-etc-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"466b9e73e627277a8c24637c2fa6442d\") " pod="kube-system/kube-controller-manager-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587256 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/466b9e73e627277a8c24637c2fa6442d-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"466b9e73e627277a8c24637c2fa6442d\") " pod="kube-system/kube-controller-manager-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587301 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a121e106627e5c6efa9ba48006cc43bf-etcd-certs\") pod \"etcd-minikube\" (UID: \"a121e106627e5c6efa9ba48006cc43bf\") " pod="kube-system/etcd-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587326 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/466b9e73e627277a8c24637c2fa6442d-usr-local-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"466b9e73e627277a8c24637c2fa6442d\") " pod="kube-system/kube-controller-manager-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587341 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0818f4b1a57de9c3f9c82667e7fcc870-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"0818f4b1a57de9c3f9c82667e7fcc870\") " pod="kube-system/kube-scheduler-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587358 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdcbce216c62c4407ac9a51ac013e7d7-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"cdcbce216c62c4407ac9a51ac013e7d7\") " pod="kube-system/kube-apiserver-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587372 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdcbce216c62c4407ac9a51ac013e7d7-etc-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"cdcbce216c62c4407ac9a51ac013e7d7\") " pod="kube-system/kube-apiserver-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587453 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/466b9e73e627277a8c24637c2fa6442d-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"466b9e73e627277a8c24637c2fa6442d\") " pod="kube-system/kube-controller-manager-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587492 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/466b9e73e627277a8c24637c2fa6442d-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"466b9e73e627277a8c24637c2fa6442d\") " pod="kube-system/kube-controller-manager-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587518 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/466b9e73e627277a8c24637c2fa6442d-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"466b9e73e627277a8c24637c2fa6442d\") " pod="kube-system/kube-controller-manager-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587542 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdcbce216c62c4407ac9a51ac013e7d7-usr-local-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"cdcbce216c62c4407ac9a51ac013e7d7\") " pod="kube-system/kube-apiserver-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587584 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/466b9e73e627277a8c24637c2fa6442d-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"466b9e73e627277a8c24637c2fa6442d\") " pod="kube-system/kube-controller-manager-minikube"
May 02 02:06:58 minikube kubelet[2503]: I0502 02:06:58.587607 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a121e106627e5c6efa9ba48006cc43bf-etcd-data\") pod \"etcd-minikube\" (UID: \"a121e106627e5c6efa9ba48006cc43bf\") " pod="kube-system/etcd-minikube"
May 02 02:06:58 minikube kubelet[2503]: E0502 02:06:58.784431 2503 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube"
May 02 02:06:58 minikube kubelet[2503]: E0502 02:06:58.984074 2503 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
May 02 02:06:59 minikube kubelet[2503]: I0502 02:06:59.381426 2503 apiserver.go:52] "Watching apiserver"
May 02 02:06:59 minikube kubelet[2503]: I0502 02:06:59.585998 2503 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
May 02 02:06:59 minikube kubelet[2503]: I0502 02:06:59.593717 2503 reconciler.go:41] "Reconciler: start to sync state"
May 02 02:06:59 minikube kubelet[2503]: E0502 02:06:59.983853 2503 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube"
May 02 02:07:00 minikube kubelet[2503]: E0502 02:07:00.184083 2503 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube"
May 02 02:07:00 minikube kubelet[2503]: E0502 02:07:00.384246 2503 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
May 02 02:07:00 minikube kubelet[2503]: I0502 02:07:00.581748 2503 request.go:690] Waited for 1.168155188s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
May 02 02:07:00 minikube kubelet[2503]: E0502 02:07:00.584735 2503 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
May 02 02:07:00 minikube kubelet[2503]: I0502 02:07:00.786564 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-minikube" podStartSLOduration=3.786525991 pod.CreationTimestamp="2023-05-02 02:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-02 02:07:00.78643056 +0000 UTC m=+2.452270656" watchObservedRunningTime="2023-05-02 02:07:00.786525991 +0000 UTC m=+2.452366097"
May 02 02:07:01 minikube kubelet[2503]: I0502 02:07:01.187235 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-minikube" podStartSLOduration=4.187201079 pod.CreationTimestamp="2023-05-02 02:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-02 02:07:01.187062257 +0000 UTC m=+2.852902363" watchObservedRunningTime="2023-05-02 02:07:01.187201079 +0000 UTC m=+2.853041185"
May 02 02:07:01 minikube kubelet[2503]: I0502 02:07:01.607846 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-minikube" podStartSLOduration=3.607814452 pod.CreationTimestamp="2023-05-02 02:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-02 02:07:01.607799662 +0000 UTC m=+3.273639768" watchObservedRunningTime="2023-05-02 02:07:01.607814452 +0000 UTC m=+3.273654558"
May 02 02:07:02 minikube kubelet[2503]: I0502 02:07:02.385873 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-minikube" podStartSLOduration=5.385847892 pod.CreationTimestamp="2023-05-02 02:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-02 02:07:02.385840232 +0000 UTC m=+4.051680328" watchObservedRunningTime="2023-05-02 02:07:02.385847892 +0000 UTC m=+4.051687988"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.009953 2503 topology_manager.go:210] "Topology Admit Handler"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.045730 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c9d59bc4-0a57-4dd3-8eb9-d78c85cb633a-tmp\") pod \"storage-provisioner\" (UID: \"c9d59bc4-0a57-4dd3-8eb9-d78c85cb633a\") " pod="kube-system/storage-provisioner"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.045761 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rppj\" (UniqueName: \"kubernetes.io/projected/c9d59bc4-0a57-4dd3-8eb9-d78c85cb633a-kube-api-access-5rppj\") pod \"storage-provisioner\" (UID: \"c9d59bc4-0a57-4dd3-8eb9-d78c85cb633a\") " pod="kube-system/storage-provisioner"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.135131 2503 topology_manager.go:210] "Topology Admit Handler"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.135553 2503 topology_manager.go:210] "Topology Admit Handler"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.145839 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e5b656dd-d583-4c80-8637-a4cb80c68560-cni-cfg\") pod \"kindnet-wtln2\" (UID: \"e5b656dd-d583-4c80-8637-a4cb80c68560\") " pod="kube-system/kindnet-wtln2"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.145862 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f440e352-a283-489e-a574-a00c012a3df7-kube-proxy\") pod \"kube-proxy-4t8ss\" (UID: \"f440e352-a283-489e-a574-a00c012a3df7\") " pod="kube-system/kube-proxy-4t8ss"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.145883 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f440e352-a283-489e-a574-a00c012a3df7-xtables-lock\") pod \"kube-proxy-4t8ss\" (UID: \"f440e352-a283-489e-a574-a00c012a3df7\") " pod="kube-system/kube-proxy-4t8ss"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.145988 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgl72\" (UniqueName: \"kubernetes.io/projected/f440e352-a283-489e-a574-a00c012a3df7-kube-api-access-zgl72\") pod \"kube-proxy-4t8ss\" (UID: \"f440e352-a283-489e-a574-a00c012a3df7\") " pod="kube-system/kube-proxy-4t8ss"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.146018 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5b656dd-d583-4c80-8637-a4cb80c68560-xtables-lock\") pod \"kindnet-wtln2\" (UID: \"e5b656dd-d583-4c80-8637-a4cb80c68560\") " pod="kube-system/kindnet-wtln2"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.146041 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5b656dd-d583-4c80-8637-a4cb80c68560-lib-modules\") pod \"kindnet-wtln2\" (UID: \"e5b656dd-d583-4c80-8637-a4cb80c68560\") " pod="kube-system/kindnet-wtln2"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.146056 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgdbl\" (UniqueName: \"kubernetes.io/projected/e5b656dd-d583-4c80-8637-a4cb80c68560-kube-api-access-bgdbl\") pod \"kindnet-wtln2\" (UID: \"e5b656dd-d583-4c80-8637-a4cb80c68560\") " pod="kube-system/kindnet-wtln2"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.146090 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f440e352-a283-489e-a574-a00c012a3df7-lib-modules\") pod \"kube-proxy-4t8ss\" (UID: \"f440e352-a283-489e-a574-a00c012a3df7\") " pod="kube-system/kube-proxy-4t8ss"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.485170 2503 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae5afb0ad951073a881440b3e5d9608cfe3402ab107e7449b9a78a937c500495"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.493977 2503 topology_manager.go:210] "Topology Admit Handler"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.548480 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/678add87-f648-4650-8318-f5e6cd4750e0-config-volume\") pod \"coredns-787d4945fb-p545d\" (UID: \"678add87-f648-4650-8318-f5e6cd4750e0\") " pod="kube-system/coredns-787d4945fb-p545d"
May 02 02:07:11 minikube kubelet[2503]: I0502 02:07:11.548508 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wclls\" (UniqueName: \"kubernetes.io/projected/678add87-f648-4650-8318-f5e6cd4750e0-kube-api-access-wclls\") pod \"coredns-787d4945fb-p545d\" (UID: \"678add87-f648-4650-8318-f5e6cd4750e0\") " pod="kube-system/coredns-787d4945fb-p545d"
May 02 02:07:12 minikube kubelet[2503]: I0502 02:07:12.536379 2503 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="480c11f64f158fdf7a5c280667891fb3d265a7d1a8b20219f94241d06f546dad"
May 02 02:07:14 minikube kubelet[2503]: I0502 02:07:14.015271 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4t8ss" podStartSLOduration=3.015238447 pod.CreationTimestamp="2023-05-02 02:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-02 02:07:14.015093626 +0000 UTC m=+15.680933732" watchObservedRunningTime="2023-05-02 02:07:14.015238447 +0000 UTC m=+15.681078543"
May 02 02:07:14 minikube kubelet[2503]: I0502 02:07:14.815614 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.815588772 pod.CreationTimestamp="2023-05-02 02:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-02 02:07:14.414842242 +0000 UTC m=+16.080682348" watchObservedRunningTime="2023-05-02 02:07:14.815588772 +0000 UTC m=+16.481428868"
May 02 02:07:18 minikube kubelet[2503]: I0502 02:07:18.580348 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-wtln2" podStartSLOduration=-9.223372029274456e+09 pod.CreationTimestamp="2023-05-02 02:07:11 +0000 UTC" firstStartedPulling="2023-05-02 02:07:11.633966027 +0000 UTC m=+13.299806133" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-02 02:07:18.58031552 +0000 UTC m=+20.246155616" watchObservedRunningTime="2023-05-02 02:07:18.580319 +0000 UTC m=+20.246159096"
May 02 02:07:18 minikube kubelet[2503]: I0502 02:07:18.580466 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-p545d" podStartSLOduration=7.580453942 pod.CreationTimestamp="2023-05-02 02:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-02 02:07:14.81546094 +0000 UTC m=+16.481301046" watchObservedRunningTime="2023-05-02 02:07:18.580453942 +0000 UTC m=+20.246294048"
May 02 02:07:18 minikube kubelet[2503]: I0502 02:07:18.953611 2503 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
May 02 02:07:18 minikube kubelet[2503]: I0502 02:07:18.954267 2503 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
May 02 02:07:26 minikube kubelet[2503]: I0502 02:07:26.604849 2503 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="480c11f64f158fdf7a5c280667891fb3d265a7d1a8b20219f94241d06f546dad"
May 02 02:07:41 minikube kubelet[2503]: I0502 02:07:41.657078 2503 scope.go:115] "RemoveContainer" containerID="2caf0668640126d3b2fd29dbc216b3c7fb272c5d1069ed5f0b878d888dacb742"
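
For cross-checking the per-node PodCIDR allocations recorded above (the controller-manager's "Set node ... PodCIDR" entries and kindnet's "has CIDR" entries), a minimal sketch; the commands below are standard minikube/kubectl invocations, not taken from this log, and assume kubectl is still pointed at this four-node profile:

  minikube node list
  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

If the cluster still matches the log, the second command should pair minikube through minikube-m04 with 10.244.0.0/24 through 10.244.3.0/24.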