
none: detect cgroupfs driver for kubelet configuration #4172

Closed
itsallonetome opened this issue Apr 29, 2019 · 31 comments
Labels: co/kubelet, co/none-driver, good first issue, help wanted, kind/bug, priority/important-soon, top-10-issues

Comments

@itsallonetome

itsallonetome commented Apr 29, 2019

sudo minikube start --vm-driver=none --docker-opt bip='172.18.0.1/24'

o   minikube v1.0.0 on linux (amd64)
$   Downloading Kubernetes v1.14.0 images in the background ...
2019/04/29 16:11:55 Unable to read "/home/k8s/.docker/config.json": open /home/k8s/.docker/config.json: no such file or directory
2019/04/29 16:11:55 No matching credentials were found, falling back on anonymous
>   Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
,,,
-   "minikube" IP address is 10.139.24.86
-   Configuring Docker as the container runtime ...
    - opt bip=172.18.0.1/24
-   Version of container runtime is 18.09.5
:   Waiting for image downloads to complete ...
-   Preparing Kubernetes environment ...
@   Downloading kubeadm v1.14.0
@   Downloading kubelet v1.14.0
-   Pulling images required by Kubernetes v1.14.0 ...
-   Launching Kubernetes v1.14.0 using kubeadm ...

!   Error starting cluster: kubeadm init:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI


: running command:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI

 output: [init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING FileExisting-ebtables]: ebtables not found in system path
        [WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/var/lib/minikube/certs/"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.139.24.86 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.139.24.86 127.0.0.1 ::1]
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
: running command:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI

.: exit status 1

*   Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
-   https://github.com/kubernetes/minikube/issues/new

Ubuntu 16.04 (Xenial).

The created file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf contains --cgroup-driver=cgroupfs, while /etc/docker/daemon.json has "exec-opts": ["native.cgroupdriver=systemd"].

minikube should detect the Docker cgroup driver and use it for the config files it creates.
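
For reference, the mismatch can be confirmed by comparing what Docker reports against what minikube wrote into the kubelet drop-in (a quick check using the paths above):

docker info -f '{{.CgroupDriver}}'
# prints "systemd" with the daemon.json used here

grep -o 'cgroup-driver=[a-z]*' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# prints "cgroup-driver=cgroupfs"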

@tstromberg tstromberg added the co/kubelet label May 2, 2019
@tstromberg
Contributor

Do you mind attaching the output of minikube logs when this failure happens?

@tstromberg tstromberg added the help wanted and priority/awaiting-more-evidence labels May 2, 2019
@tstromberg tstromberg changed the title from "minikube doesn't start with docker driver: kubelet wrong cgroup driver" to "none: kubelet fails to come up when docker_daemon.json has native.cgroupdriver=systemd" May 2, 2019
@itsallonetome itsallonetome changed the title from "none: kubelet fails to come up when docker_daemon.json has native.cgroupdriver=systemd" to "vm-driver=none: kubelet fails to come up when docker_daemon.json has native.cgroupdriver=systemd" May 3, 2019
@itsallonetome
Author

systemctl --no-pager status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; disabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Fri 2019-05-03 12:06:43 BST; 442ms ago
     Docs: http://kubernetes.io/docs/
  Process: 14040 ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests (code=exited, status=255)
 Main PID: 14040 (code=exited, status=255)

May 03 12:06:43 ut011815 systemd[1]: kubelet.service: Unit entered failed state.
May 03 12:06:43 ut011815 systemd[1]: kubelet.service: Failed with result 'exit-code'.
root@ut011815:~# systemctl --no-pager status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; disabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Fri 2019-05-03 12:06:43 BST; 9s ago
     Docs: http://kubernetes.io/docs/
  Process: 14040 ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests (code=exited, status=255)
 Main PID: 14040 (code=exited, status=255)

May 03 12:06:43 ut011815 systemd[1]: kubelet.service: Unit entered failed state.
May 03 12:06:43 ut011815 systemd[1]: kubelet.service: Failed with result 'exit-code'.

@itsallonetome
Author

journalctl -xeu kubelet

--
-- The start-up result is done.
May 03 12:08:05 ut011815 kubelet[14359]: Flag --allow-privileged has been deprecated, will be removed in a future version
May 03 12:08:05 ut011815 kubelet[14359]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 03 12:08:05 ut011815 kubelet[14359]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 03 12:08:05 ut011815 kubelet[14359]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 03 12:08:05 ut011815 kubelet[14359]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 03 12:08:05 ut011815 kubelet[14359]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 03 12:08:05 ut011815 kubelet[14359]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 03 12:08:05 ut011815 kubelet[14359]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.120881   14359 server.go:417] Version: v1.14.1
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.121252   14359 plugins.go:103] No cloud provider specified.
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.162127   14359 server.go:625] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.162385   14359 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.162411   14359 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.162504   14359 container_manager_linux.go:286] Creating device plugin manager: true
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.162527   14359 state_mem.go:36] [cpumanager] initializing new in-memory state store
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.162609   14359 state_mem.go:84] [cpumanager] updated default cpuset: ""
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.162618   14359 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.162684   14359 kubelet.go:279] Adding pod path: /etc/kubernetes/manifests
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.162701   14359 kubelet.go:304] Watching apiserver
May 03 12:08:05 ut011815 kubelet[14359]: E0503 12:08:05.174978   14359 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
May 03 12:08:05 ut011815 kubelet[14359]: E0503 12:08:05.175108   14359 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
May 03 12:08:05 ut011815 kubelet[14359]: E0503 12:08:05.175235   14359 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection re
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.179207   14359 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.179413   14359 client.go:104] Start docker client with request timeout=2m0s
May 03 12:08:05 ut011815 kubelet[14359]: W0503 12:08:05.180725   14359 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.180955   14359 docker_service.go:238] Hairpin mode set to "hairpin-veth"
May 03 12:08:05 ut011815 kubelet[14359]: W0503 12:08:05.181230   14359 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
May 03 12:08:05 ut011815 kubelet[14359]: W0503 12:08:05.182718   14359 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.183822   14359 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
May 03 12:08:05 ut011815 kubelet[14359]: I0503 12:08:05.200671   14359 docker_service.go:258] Docker Info: &{ID:CWU4:FQIX:DW5X:L4KF:UUZP:LNHG:XQUX:DCBN:6N4O:BRGA:6U3D:IYED Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Suppor
May 03 12:08:05 ut011815 kubelet[14359]: F0503 12:08:05.201511   14359 server.go:265] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
May 03 12:08:05 ut011815 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 03 12:08:05 ut011815 systemd[1]: kubelet.service: Unit entered failed state.
May 03 12:08:05 ut011815 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 03 12:08:15 ut011815 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
May 03 12:08:15 ut011815 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished shutting down.
May 03 12:08:15 ut011815 systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
May 03 12:08:15 ut011815 kubelet[14400]: Flag --allow-privileged has been deprecated, will be removed in a future version
May 03 12:08:15 ut011815 kubelet[14400]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 03 12:08:15 ut011815 kubelet[14400]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 03 12:08:15 ut011815 kubelet[14400]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 03 12:08:15 ut011815 kubelet[14400]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 03 12:08:15 ut011815 kubelet[14400]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 03 12:08:15 ut011815 kubelet[14400]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 03 12:08:15 ut011815 kubelet[14400]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.362393   14400 server.go:417] Version: v1.14.1
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.362628   14400 plugins.go:103] No cloud provider specified.
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.408857   14400 server.go:625] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.409109   14400 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.409122   14400 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.409202   14400 container_manager_linux.go:286] Creating device plugin manager: true
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.409224   14400 state_mem.go:36] [cpumanager] initializing new in-memory state store
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.409293   14400 state_mem.go:84] [cpumanager] updated default cpuset: ""
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.409306   14400 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.409371   14400 kubelet.go:279] Adding pod path: /etc/kubernetes/manifests
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.409388   14400 kubelet.go:304] Watching apiserver
May 03 12:08:15 ut011815 kubelet[14400]: E0503 12:08:15.430140   14400 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
May 03 12:08:15 ut011815 kubelet[14400]: E0503 12:08:15.430280   14400 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection re
May 03 12:08:15 ut011815 kubelet[14400]: E0503 12:08:15.430354   14400 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.430902   14400 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.431074   14400 client.go:104] Start docker client with request timeout=2m0s
May 03 12:08:15 ut011815 kubelet[14400]: W0503 12:08:15.432248   14400 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.432269   14400 docker_service.go:238] Hairpin mode set to "hairpin-veth"
May 03 12:08:15 ut011815 kubelet[14400]: W0503 12:08:15.432407   14400 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
May 03 12:08:15 ut011815 kubelet[14400]: W0503 12:08:15.433809   14400 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.434735   14400 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
May 03 12:08:15 ut011815 kubelet[14400]: I0503 12:08:15.450995   14400 docker_service.go:258] Docker Info: &{ID:CWU4:FQIX:DW5X:L4KF:UUZP:LNHG:XQUX:DCBN:6N4O:BRGA:6U3D:IYED Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Suppor
May 03 12:08:15 ut011815 kubelet[14400]: F0503 12:08:15.451090   14400 server.go:265] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
May 03 12:08:15 ut011815 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 03 12:08:15 ut011815 systemd[1]: kubelet.service: Unit entered failed state.
May 03 12:08:15 ut011815 systemd[1]: kubelet.service: Failed with result 'exit-code'.

@itsallonetome
Author

/etc/docker/daemon.json

{
  "default-address-pools":
    [
      {"base": "172.18.0.0/16",
       "size": 24}
    ],
  "bip": "172.18.1.1/24",
  "mtu": 1500,
  "dns": ["10.88.16.1","10.88.16.2"],

  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

@itsallonetome
Author

/etc/systemd/system/kubelet.service.d/10-kubeadm.conf

[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests

[Install]
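
Until minikube detects this itself, a manual workaround is to align the kubelet flag with Docker and restart (a sketch; it assumes the drop-in path above, and minikube may rewrite this file on the next start):

sudo sed -i 's/cgroup-driver=cgroupfs/cgroup-driver=systemd/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl daemon-reload
sudo systemctl restart kubelet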

@itsallonetome
Author

All that was on a fresh Ubuntu 16.04 VM, with all current patches applied.

@tstromberg tstromberg added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels May 14, 2019
@tstromberg tstromberg added the r/2019q2 Issue was last reviewed 2019q2 label May 24, 2019
@tstromberg tstromberg changed the title vm-driver=none: kubelet fails to come up when docker_daemon.json has native.cgroupdriver=systemd none: kubelet: "cgroupfs" is different from docker cgroup driver: "systemd" Jul 16, 2019
@tstromberg tstromberg added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/backlog Higher priority than priority/awaiting-more-evidence. labels Aug 20, 2019
@tstromberg tstromberg changed the title none: kubelet: "cgroupfs" is different from docker cgroup driver: "systemd" none: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd" Aug 20, 2019
@tstromberg
Contributor

Apparently, we should switch the cgroup driver from cgroupfs to systemd (#4770).

@afbjorklund mentioned this workaround for CentOS:

sudo minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver=systemd

@afbjorklund
Collaborator

Let's call the extra config a "head start", but actually it is kubeadm that recommends switching.

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

https://kubernetes.io/docs/setup/production-environment/container-runtimes/#cgroup-drivers

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-control-plane-node

I think it's a "when on systemd..." thing, and that cgroupfs is still fine when using a regular init?

@afbjorklund
Collaborator

I wonder why the detection is not working; the documentation says that it should (for Docker):

> When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet and set it in the /var/lib/kubelet/kubeadm-flags.env file during runtime.

The other ticket (#4770) was about changing the default for the minikube VM; this one is about none (using the cgroup driver already selected by the user).
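
You can see what kubeadm detected by inspecting the file it writes (contents vary by version; the value below is what a systemd host like this one should produce):

cat /var/lib/kubelet/kubeadm-flags.env
# e.g. KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd ..."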

@afbjorklund
Collaborator

afbjorklund commented Aug 20, 2019

As far as I know, the default Docker installation will use "cgroupfs".
These installations all seem to have switched to "systemd"?

Possibly following the instructions given by kubeadm, but anyway...
The best would be if we could auto-detect, like kubeadm claims to do.

https://github.com/kubernetes/kubernetes/blob/v1.15.3/cmd/kubeadm/app/util/cgroupdriver.go#L40_L46

docker info -f {{.CgroupDriver}}
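
Combining that detection with the extra-config workaround mentioned earlier gives a one-liner (a sketch, untested):

sudo minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver="$(sudo docker info -f '{{.CgroupDriver}}')"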

@tstromberg tstromberg added the kind/bug and priority/important-longterm labels and removed the r/2019q2 and priority/important-soon labels Sep 20, 2019
@moorthi07

minikube start failed and referred me to this issue.

I had this error:
Starting control plane node minikube in cluster minikube
🔄 Restarting existing hyperkit VM for "minikube" ...
❗ This VM is having trouble accessing https://registry.k8s.io

More failures followed this.

The cause: VPN. The hyperkit VM didn't have an internet bridge configured and so had no access to those URLs. It worked after turning off the VPN. Hope this helps someone.

@scientiacoder

For anyone who ended up with this issue: using a kubernetes version prior to 1.24 fixed it for me.

minikube start --kubernetes-version=v1.23.12

You are legend, bro

@mdalgitsis

> For anyone who ended up with this issue: using a kubernetes version prior to 1.24 fixed it for me.
> minikube start --kubernetes-version=v1.23.12
>
> You are legend, bro

Indeed!!!!!!

@aditodkar

> For anyone who ended up with this issue: using a kubernetes version prior to 1.24 fixed it for me.
>
> minikube start --kubernetes-version=v1.23.12

This worked for me. Thanks a lot :)

@ykcai

ykcai commented Jan 16, 2023

Can we use the latest version now or is this still a problem?

@rogolius

> Can we use the latest version now or is this still a problem?

Still a problem with the latest version... :(

@rogolius

For me, it also works with version 1.24. Not quite the latest version, but it's newer than 1.23. I hope this helps.
minikube start --kubernetes-version=v1.24.9

@conradwt

I was able to install K8s v1.25.6 without error, but I'm still seeing issues with 1.26.1. BTW, I used the following command:

minikube stop && minikube delete && minikube start --kubernetes-version=v1.25.6

@saicharana731

Hi, I am facing the same issue while installing on Windows 10, but the steps listed are not for Windows. Any advice for Windows, please?

@GuddyTech

Run minikube delete in your terminal to remove the existing container.
Run minikube start --kubernetes-version=v1.23.12
Then run minikube kubectl -- get pods -A to see your running pods.
