Help me minikube start failed #16073

Closed
shuxingxin opened this issue Mar 17, 2023 · 6 comments
Labels
l/zh-CN: Issues in or relating to Chinese
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@shuxingxin

The command needed to reproduce the issue
minikube start
The full output of the failed command


[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0317 03:58:47.233718 2840 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0317 12:02:48.573148 8172 out.go:239] * Suggestion: Check the output of 'journalctl -xeu kubelet', and try starting minikube with the flag --extra-config=kubelet.cgroup-driver=systemd
W0317 12:02:48.573148 8172 out.go:239] * Related issue: #4172
I0317 12:02:48.578148 8172 out.go:177]
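A sketch of the retry that the warning above suggests: tear down the failed cluster, restart with the kubelet cgroup driver set to systemd, and inspect the kubelet inside the VM if it still times out. The exact flag values are taken from the warning text and the hyperv driver seen in the logs, not verified against this environment.

```shell
# Remove the failed cluster state before retrying
minikube delete

# Retry with the kubelet cgroup driver set to systemd, as the warning suggests.
# --driver=hyperv matches the driver shown in the logs; adjust for your setup.
minikube start --driver=hyperv --extra-config=kubelet.cgroup-driver=systemd

# If the control plane still times out, inspect the kubelet inside the VM:
minikube ssh -- sudo systemctl status kubelet
minikube ssh -- sudo journalctl -xeu kubelet
```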

The output of the minikube logs command

-- /stdout --
I0317 11:54:41.610117 8172 cache_images.go:84] Images are preloaded, skipping loading
I0317 11:54:41.622105 8172 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0317 11:54:41.643022 8172 cni.go:84] Creating CNI manager for ""
I0317 11:54:41.643022 8172 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0317 11:54:41.643022 8172 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0317 11:54:41.643022 8172 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.222.45 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.222.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.222.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0317 11:54:41.643543 8172 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.20.222.45
bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.20.222.45
  taints: []

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.20.222.45"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12

---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

I0317 11:54:41.643543 8172 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=minikube --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.222.45

[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0317 11:54:41.653837 8172 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0317 11:54:41.660768 8172 binaries.go:44] Found k8s binaries, skipping transfer
I0317 11:54:41.670648 8172 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0317 11:54:41.677216 8172 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (441 bytes)
I0317 11:54:41.689024 8172 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0317 11:54:41.700149 8172 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
I0317 11:54:41.722147 8172 ssh_runner.go:195] Run: grep 172.20.222.45 control-plane.minikube.internal$ /etc/hosts
I0317 11:54:41.725242 8172 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.222.45 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0317 11:54:41.733905 8172 certs.go:56] Setting up C:\Users\soyt3.minikube\profiles\minikube for IP: 172.20.222.45
I0317 11:54:41.733905 8172 certs.go:186] acquiring lock for shared ca certs: {Name:mkf86d344b37993b2c558758975e7a82d54d668b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0317 11:54:41.733905 8172 certs.go:200] generating minikubeCA CA: C:\Users\soyt3.minikube\ca.key
I0317 11:54:41.937799 8172 crypto.go:156] Writing cert to C:\Users\soyt3.minikube\ca.crt ...
I0317 11:54:41.937799 8172 lock.go:35] WriteFile acquiring C:\Users\soyt3.minikube\ca.crt: {Name:mk9656074d0134606f260316522ad96cca18b99c Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0317 11:54:41.937799 8172 crypto.go:164] Writing key to C:\Users\soyt3.minikube\ca.key ...
I0317 11:54:41.937799 8172 lock.go:35] WriteFile acquiring C:\Users\soyt3.minikube\ca.key: {Name:mk969ffb7735558300635f850059b11c13424195 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0317 11:54:41.938792 8172 certs.go:200] generating proxyClientCA CA: C:\Users\soyt3.minikube\proxy-client-ca.key
I0317 11:54:42.400822 8172 crypto.go:156] Writing cert to C:\Users\soyt3.minikube\proxy-client-ca.crt ...
I0317 11:54:42.400822 8172 lock.go:35] WriteFile acquiring C:\Users\soyt3.minikube\proxy-client-ca.crt: {Name:mk68467df2250a1e5b50e2d4eb140d1981dba2cb Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0317 11:54:42.400822 8172 crypto.go:164] Writing key to C:\Users\soyt3.minikube\proxy-client-ca.key ...
I0317 11:54:42.400822 8172 lock.go:35] WriteFile acquiring C:\Users\soyt3.minikube\proxy-client-ca.key: {Name:mkbff88ed2f775655dc5cf7df5b81d446e88691c Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0317 11:54:42.401792 8172 certs.go:315] generating minikube-user signed cert: C:\Users\soyt3.minikube\profiles\minikube\client.key
I0317 11:54:42.401792 8172 crypto.go:68] Generating cert C:\Users\soyt3.minikube\profiles\minikube\client.crt with IP's: []
I0317 11:54:42.575378 8172 crypto.go:156] Writing cert to C:\Users\soyt3.minikube\profiles\minikube\client.crt ...
I0317 11:54:42.575378 8172 lock.go:35] WriteFile acquiring C:\Users\soyt3.minikube\profiles\minikube\client.crt: {Name:mkad9d204036052b49c6a0fc58715476d504bfe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0317 11:54:42.576380 8172 crypto.go:164] Writing key to C:\Users\soyt3.minikube\profiles\minikube\client.key ...
I0317 11:54:42.576380 8172 lock.go:35] WriteFile acquiring C:\Users\soyt3.minikube\profiles\minikube\client.key: {Name:mk501717adcb45f86fea5c048389db0ebe904408 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0317 11:54:42.576380 8172 certs.go:315] generating minikube signed cert: C:\Users\soyt3.minikube\profiles\minikube\apiserver.key.4a327d54
I0317 11:54:42.576380 8172 crypto.go:68] Generating cert C:\Users\soyt3.minikube\profiles\minikube\apiserver.crt.4a327d54 with IP's: [172.20.222.45 10.96.0.1 127.0.0.1 10.0.0.1]
I0317 11:54:42.835378 8172 crypto.go:156] Writing cert to C:\Users\soyt3.minikube\profiles\minikube\apiserver.crt.4a327d54 ...
I0317 11:54:42.835378 8172 lock.go:35] WriteFile acquiring C:\Users\soyt3.minikube\profiles\minikube\apiserver.crt.4a327d54: {Name:mk38f8fe7b61d724f415916d87926ddf88560e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0317 11:54:42.836377 8172 crypto.go:164] Writing key to C:\Users\soyt3.minikube\profiles\minikube\apiserver.key.4a327d54 ...
I0317 11:54:42.836377 8172 lock.go:35] WriteFile acquiring C:\Users\soyt3.minikube\profiles\minikube\apiserver.key.4a327d54: {Name:mk1713e4306566421ec1f9a7cbad85e78116ff13 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0317 11:54:42.836377 8172 certs.go:333] copying C:\Users\soyt3.minikube\profiles\minikube\apiserver.crt.4a327d54 -> C:\Users\soyt3.minikube\profiles\minikube\apiserver.crt
I0317 11:54:42.842378 8172 certs.go:337] copying C:\Users\soyt3.minikube\profiles\minikube\apiserver.key.4a327d54 -> C:\Users\soyt3.minikube\profiles\minikube\apiserver.key
I0317 11:54:42.842378 8172 certs.go:315] generating aggregator signed cert: C:\Users\soyt3.minikube\profiles\minikube\proxy-client.key
I0317 11:54:42.842378 8172 crypto.go:68] Generating cert C:\Users\soyt3.minikube\profiles\minikube\proxy-client.crt with IP's: []
I0317 11:54:42.991415 8172 crypto.go:156] Writing cert to C:\Users\soyt3.minikube\profiles\minikube\proxy-client.crt ...
I0317 11:54:42.991415 8172 lock.go:35] WriteFile acquiring C:\Users\soyt3.minikube\profiles\minikube\proxy-client.crt: {Name:mk3f82e931f08382cbc7643c2c36405fd628578d Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0317 11:54:42.992378 8172 crypto.go:164] Writing key to C:\Users\soyt3.minikube\profiles\minikube\proxy-client.key ...
I0317 11:54:42.992378 8172 lock.go:35] WriteFile acquiring C:\Users\soyt3.minikube\profiles\minikube\proxy-client.key: {Name:mk4aabc0f8cf016df659cea5ff971b81fc155d75 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0317 11:54:42.997413 8172 certs.go:401] found cert: C:\Users\soyt3.minikube\certs\C:\Users\soyt3.minikube\certs\ca-key.pem (1675 bytes)
I0317 11:54:42.997413 8172 certs.go:401] found cert: C:\Users\soyt3.minikube\certs\C:\Users\soyt3.minikube\certs\ca.pem (1074 bytes)
I0317 11:54:42.997413 8172 certs.go:401] found cert: C:\Users\soyt3.minikube\certs\C:\Users\soyt3.minikube\certs\cert.pem (1119 bytes)
I0317 11:54:42.997413 8172 certs.go:401] found cert: C:\Users\soyt3.minikube\certs\C:\Users\soyt3.minikube\certs\key.pem (1675 bytes)
I0317 11:54:42.998379 8172 ssh_runner.go:362] scp C:\Users\soyt3.minikube\profiles\minikube\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0317 11:54:43.016222 8172 ssh_runner.go:362] scp C:\Users\soyt3.minikube\profiles\minikube\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0317 11:54:43.032686 8172 ssh_runner.go:362] scp C:\Users\soyt3.minikube\profiles\minikube\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0317 11:54:43.049521 8172 ssh_runner.go:362] scp C:\Users\soyt3.minikube\profiles\minikube\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0317 11:54:43.066946 8172 ssh_runner.go:362] scp C:\Users\soyt3.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0317 11:54:43.084804 8172 ssh_runner.go:362] scp C:\Users\soyt3.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0317 11:54:43.104289 8172 ssh_runner.go:362] scp C:\Users\soyt3.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0317 11:54:43.122289 8172 ssh_runner.go:362] scp C:\Users\soyt3.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0317 11:54:43.139239 8172 ssh_runner.go:362] scp C:\Users\soyt3.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0317 11:54:43.157324 8172 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0317 11:54:43.180452 8172 ssh_runner.go:195] Run: openssl version
I0317 11:54:43.194387 8172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0317 11:54:43.211829 8172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0317 11:54:43.215274 8172 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 17 03:54 /usr/share/ca-certificates/minikubeCA.pem
I0317 11:54:43.225044 8172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0317 11:54:43.239090 8172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0317 11:54:43.246630 8172 kubeadm.go:401] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.29.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.20.222.45 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\soyt3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false 
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0317 11:54:43.256248 8172 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0317 11:54:43.281914 8172 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0317 11:54:43.299160 8172 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0317 11:54:43.316435 8172 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0317 11:54:43.324002 8172 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0317 11:54:43.324002 8172 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0317 11:54:43.474563 8172 kubeadm.go:322] W0317 03:54:43.474162 1479 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0317 11:54:43.563103 8172 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0317 11:58:45.577232 8172 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0317 11:58:45.577232 8172 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0317 11:58:45.578355 8172 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
I0317 11:58:45.578355 8172 kubeadm.go:322] [preflight] Running pre-flight checks
I0317 11:58:45.578355 8172 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0317 11:58:45.578355 8172 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0317 11:58:45.578355 8172 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0317 11:58:45.578355 8172 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0317 11:58:45.581934 8172 out.go:204] - Generating certificates and keys ...
I0317 11:58:45.582451 8172 kubeadm.go:322] [certs] Using existing ca certificate authority
I0317 11:58:45.582451 8172 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0317 11:58:45.582451 8172 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0317 11:58:45.582955 8172 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0317 11:58:45.582970 8172 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0317 11:58:45.582970 8172 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0317 11:58:45.582970 8172 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0317 11:58:45.582970 8172 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [172.20.222.45 127.0.0.1 ::1]
I0317 11:58:45.582970 8172 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0317 11:58:45.582970 8172 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [172.20.222.45 127.0.0.1 ::1]
I0317 11:58:45.582970 8172 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0317 11:58:45.583520 8172 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0317 11:58:45.583520 8172 kubeadm.go:322] [certs] Generating "sa" key and public key
I0317 11:58:45.583520 8172 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0317 11:58:45.583520 8172 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0317 11:58:45.583520 8172 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0317 11:58:45.583520 8172 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0317 11:58:45.583520 8172 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0317 11:58:45.583520 8172 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0317 11:58:45.584083 8172 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0317 11:58:45.584083 8172 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0317 11:58:45.584083 8172 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0317 11:58:45.586823 8172 out.go:204] - Booting up control plane ...
I0317 11:58:45.586823 8172 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0317 11:58:45.587349 8172 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0317 11:58:45.587349 8172 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0317 11:58:45.587349 8172 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0317 11:58:45.587349 8172 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0317 11:58:45.587349 8172 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0317 11:58:45.587349 8172 kubeadm.go:322]
I0317 11:58:45.587854 8172 kubeadm.go:322] Unfortunately, an error has occurred:
I0317 11:58:45.587868 8172 kubeadm.go:322] timed out waiting for the condition
I0317 11:58:45.587868 8172 kubeadm.go:322]
I0317 11:58:45.587868 8172 kubeadm.go:322] This error is likely caused by:
I0317 11:58:45.587868 8172 kubeadm.go:322] - The kubelet is not running
I0317 11:58:45.587868 8172 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0317 11:58:45.587868 8172 kubeadm.go:322]
I0317 11:58:45.587868 8172 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0317 11:58:45.587868 8172 kubeadm.go:322] - 'systemctl status kubelet'
I0317 11:58:45.587868 8172 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0317 11:58:45.587868 8172 kubeadm.go:322]
I0317 11:58:45.588385 8172 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0317 11:58:45.588385 8172 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0317 11:58:45.588385 8172 kubeadm.go:322] Here is one example how you may list all running Kubernetes containers by using crictl:
I0317 11:58:45.588385 8172 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
I0317 11:58:45.588385 8172 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0317 11:58:45.588385 8172 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
W0317 11:58:45.588901 8172 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [172.20.222.45 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [172.20.222.45 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0317 03:54:43.474162 1479 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

I0317 11:58:45.589420 8172 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I0317 11:58:47.162734 8172 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.5733141s)
I0317 11:58:47.174273 8172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0317 11:58:47.195286 8172 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0317 11:58:47.202255 8172 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0317 11:58:47.202255 8172 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0317 11:58:47.233575 8172 kubeadm.go:322] W0317 03:58:47.233718 2840 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0317 11:58:47.320804 8172 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0317 12:02:48.078956 8172 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0317 12:02:48.078956 8172 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0317 12:02:48.080006 8172 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
I0317 12:02:48.080006 8172 kubeadm.go:322] [preflight] Running pre-flight checks
I0317 12:02:48.080006 8172 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0317 12:02:48.080006 8172 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0317 12:02:48.080006 8172 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0317 12:02:48.080006 8172 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0317 12:02:48.083743 8172 out.go:204] - Generating certificates and keys ...
I0317 12:02:48.084264 8172 kubeadm.go:322] [certs] Using existing ca certificate authority
I0317 12:02:48.084264 8172 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0317 12:02:48.084264 8172 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0317 12:02:48.084264 8172 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0317 12:02:48.084264 8172 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0317 12:02:48.084264 8172 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0317 12:02:48.084825 8172 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0317 12:02:48.084884 8172 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0317 12:02:48.084884 8172 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0317 12:02:48.084884 8172 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0317 12:02:48.084884 8172 kubeadm.go:322] [certs] Using the existing "sa" key
I0317 12:02:48.084884 8172 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0317 12:02:48.084884 8172 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0317 12:02:48.084884 8172 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0317 12:02:48.084884 8172 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0317 12:02:48.085404 8172 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0317 12:02:48.085404 8172 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0317 12:02:48.085404 8172 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0317 12:02:48.085404 8172 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0317 12:02:48.085404 8172 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0317 12:02:48.087670 8172 out.go:204] - Booting up control plane ...
I0317 12:02:48.088184 8172 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0317 12:02:48.088701 8172 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0317 12:02:48.088701 8172 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0317 12:02:48.088701 8172 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0317 12:02:48.088701 8172 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0317 12:02:48.088701 8172 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0317 12:02:48.088701 8172 kubeadm.go:322]
I0317 12:02:48.088701 8172 kubeadm.go:322] Unfortunately, an error has occurred:
I0317 12:02:48.089210 8172 kubeadm.go:322] timed out waiting for the condition
I0317 12:02:48.089210 8172 kubeadm.go:322]
I0317 12:02:48.089210 8172 kubeadm.go:322] This error is likely caused by:
I0317 12:02:48.089210 8172 kubeadm.go:322] - The kubelet is not running
I0317 12:02:48.089210 8172 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0317 12:02:48.089210 8172 kubeadm.go:322]
I0317 12:02:48.089210 8172 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0317 12:02:48.089210 8172 kubeadm.go:322] - 'systemctl status kubelet'
I0317 12:02:48.089210 8172 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0317 12:02:48.089210 8172 kubeadm.go:322]
I0317 12:02:48.089210 8172 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0317 12:02:48.089740 8172 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0317 12:02:48.089740 8172 kubeadm.go:322] Here is one example how you may list all running Kubernetes containers by using crictl:
I0317 12:02:48.089740 8172 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
I0317 12:02:48.089740 8172 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0317 12:02:48.089740 8172 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
I0317 12:02:48.089740 8172 kubeadm.go:403] StartCluster complete in 8m4.8431921s
I0317 12:02:48.089740 8172 cri.go:52] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0317 12:02:48.102029 8172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0317 12:02:48.121825 8172 cri.go:87] found id: ""
I0317 12:02:48.121825 8172 logs.go:279] 0 containers: []
W0317 12:02:48.121825 8172 logs.go:281] No container was found matching "kube-apiserver"
I0317 12:02:48.121825 8172 cri.go:52] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0317 12:02:48.133287 8172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0317 12:02:48.154153 8172 cri.go:87] found id: ""
I0317 12:02:48.154153 8172 logs.go:279] 0 containers: []
W0317 12:02:48.154153 8172 logs.go:281] No container was found matching "etcd"
I0317 12:02:48.154153 8172 cri.go:52] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0317 12:02:48.165956 8172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0317 12:02:48.185244 8172 cri.go:87] found id: ""
I0317 12:02:48.185244 8172 logs.go:279] 0 containers: []
W0317 12:02:48.185244 8172 logs.go:281] No container was found matching "coredns"
I0317 12:02:48.185244 8172 cri.go:52] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0317 12:02:48.197514 8172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0317 12:02:48.216910 8172 cri.go:87] found id: ""
I0317 12:02:48.216910 8172 logs.go:279] 0 containers: []
W0317 12:02:48.216910 8172 logs.go:281] No container was found matching "kube-scheduler"
I0317 12:02:48.216910 8172 cri.go:52] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0317 12:02:48.229229 8172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0317 12:02:48.250262 8172 cri.go:87] found id: ""
I0317 12:02:48.250262 8172 logs.go:279] 0 containers: []
W0317 12:02:48.250262 8172 logs.go:281] No container was found matching "kube-proxy"
I0317 12:02:48.250262 8172 cri.go:52] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
I0317 12:02:48.261756 8172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0317 12:02:48.282314 8172 cri.go:87] found id: ""
I0317 12:02:48.282314 8172 logs.go:279] 0 containers: []
W0317 12:02:48.282314 8172 logs.go:281] No container was found matching "kubernetes-dashboard"
I0317 12:02:48.282314 8172 cri.go:52] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
I0317 12:02:48.294094 8172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0317 12:02:48.314956 8172 cri.go:87] found id: ""
I0317 12:02:48.314956 8172 logs.go:279] 0 containers: []
W0317 12:02:48.314956 8172 logs.go:281] No container was found matching "storage-provisioner"
I0317 12:02:48.314956 8172 cri.go:52] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0317 12:02:48.326713 8172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0317 12:02:48.347199 8172 cri.go:87] found id: ""
I0317 12:02:48.347199 8172 logs.go:279] 0 containers: []
W0317 12:02:48.347199 8172 logs.go:281] No container was found matching "kube-controller-manager"
I0317 12:02:48.347199 8172 logs.go:124] Gathering logs for container status ...
I0317 12:02:48.347199 8172 ssh_runner.go:195] Run: /bin/bash -c "sudo which crictl || echo crictl ps -a || sudo docker ps -a"
I0317 12:02:48.370780 8172 logs.go:124] Gathering logs for kubelet ...
I0317 12:02:48.370780 8172 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0317 12:02:48.459029 8172 logs.go:124] Gathering logs for dmesg ...
I0317 12:02:48.459029 8172 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0317 12:02:48.469030 8172 logs.go:124] Gathering logs for describe nodes ...
I0317 12:02:48.469030 8172 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0317 12:02:48.521749 8172 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
E0317 04:02:48.514849 3788 memcache.go:238] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0317 04:02:48.515113 3788 memcache.go:238] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0317 04:02:48.516563 3788 memcache.go:238] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0317 04:02:48.517981 3788 memcache.go:238] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0317 04:02:48.519391 3788 memcache.go:238] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
E0317 04:02:48.514849 3788 memcache.go:238] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0317 04:02:48.515113 3788 memcache.go:238] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0317 04:02:48.516563 3788 memcache.go:238] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0317 04:02:48.517981 3788 memcache.go:238] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0317 04:02:48.519391 3788 memcache.go:238] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
I0317 12:02:48.521749 8172 logs.go:124] Gathering logs for Docker ...
I0317 12:02:48.521749 8172 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
W0317 12:02:48.558488 8172 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0317 03:58:47.233718 2840 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0317 12:02:48.558488 8172 out.go:239] *
W0317 12:02:48.559552 8172 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0317 03:58:47.233718 2840 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0317 12:02:48.559611 8172 out.go:239] *
W0317 12:02:48.560677 8172 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
https://github.com/kubernetes/minikube/issues/new/choose
│ │
│ * Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0317 12:02:48.567149 8172 out.go:177]
W0317 12:02:48.572148 8172 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0317 03:58:47.233718 2840 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

W0317 12:02:48.573148 8172 out.go:239] * Suggestion: Check the output of 'journalctl -xeu kubelet', and try starting minikube with the flag --extra-config=kubelet.cgroup-driver=systemd
W0317 12:02:48.573148 8172 out.go:239] * Related issue: #4172
I0317 12:02:48.578148 8172 out.go:177]

Operating system version used
Device name    DESKTOP-MJLDMG1
Processor    Intel(R) Core(TM) i5-9400 CPU @ 2.90GHz 2.90 GHz
Installed RAM    16.0 GB (15.9 GB usable)
Device ID    48A58AE2-A94E-48B6-8382-6751DAC7EF57
Product ID    00331-10000-00001-AA524
System type    64-bit operating system, x64-based processor
Pen and touch    No pen or touch input is available for this display

Edition    Windows 10 Pro
Version    21H2
Installed on    2021/5/6
OS build    19044.2728
Experience    Windows Feature Experience Pack 120.2212.4190.0

@shuxingxin shuxingxin added the l/zh-CN Issues in or relating to Chinese label Mar 17, 2023
@easy8in commented Mar 17, 2023

First delete all local minikube clusters:
minikube delete --all --purge

Start command:
minikube start --driver=docker --force --extra-config=kubelet.cgroup-driver=systemd --cni calico --container-runtime=containerd --registry-mirror=https://registry.docker-cn.com
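The two commands above amount to a clean rebuild: purge all cached cluster state so stale certificates and kubeadm configs are not reused, then start fresh with an explicit kubelet cgroup driver. A minimal sketch of the full sequence, with a verification step at the end (the flag values come from the comment above and may not all be necessary for every environment; assumes Docker is installed and running, and kubectl is on PATH):

```shell
# 1. Remove every local minikube cluster and all cached state, so stale
#    certificates and old kubeadm configs cannot be picked up on restart.
minikube delete --all --purge

# 2. Start a fresh cluster; the systemd cgroup driver, calico CNI,
#    containerd runtime, and registry mirror are the values suggested above.
minikube start --driver=docker --force \
  --extra-config=kubelet.cgroup-driver=systemd \
  --cni calico \
  --container-runtime=containerd \
  --registry-mirror=https://registry.docker-cn.com

# 3. Verify the control plane actually came up.
minikube status
kubectl get nodes
```

If the kubelet still fails to start, `minikube logs --file=logs.txt` captures the full log for attaching to an issue.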

@shuxingxin (Author)

First delete all local minikube clusters: minikube delete --all --purge

Start command: minikube start --driver=docker --force --extra-config=kubelet.cgroup-driver=systemd --cni calico --container-runtime=containerd --registry-mirror=https://registry.docker-cn.com

Thanks for your reply. I ended up using the Kubernetes bundled with Docker Desktop; installing it myself kept running into one problem after another.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 18, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 18, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Jan 20, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
