
none: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd" #5141

Closed
AlekseySkovorodnikov opened this issue Aug 20, 2019 · 3 comments
Labels
co/none-driver triage/duplicate Indicates an issue is a duplicate of other open issue.

Comments

@AlekseySkovorodnikov

minikube start --vm-driver=none --memory 4096 --cpus=2:

root@instance-274528:~# minikube start --vm-driver=none --memory 4096 --cpus=2

  • minikube v1.3.1 on Ubuntu 18.04
  • Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
  • Starting existing none VM for "minikube" ...
  • Waiting for the host to be provisioned ...
  • Preparing Kubernetes v1.15.2 on Docker 19.03.1 ...
    • kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
  • Relaunching Kubernetes using kubeadm ...

X Error restarting cluster: waiting for apiserver: timed out waiting for the condition

But this was the second attempt at running that command. The first attempt ended with the same error:

root@instance-274528:~# minikube start --vm-driver=none --memory 4096 --cpus=2

  • minikube v1.3.1 on Ubuntu 18.04
  • Running on localhost (CPUs=4, Memory=7976MB, Disk=99067MB) ...
  • OS release is Ubuntu 18.04.3 LTS
  • Preparing Kubernetes v1.15.2 on Docker 19.03.1 ...
    • kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
  • Downloading kubeadm v1.15.2
  • Downloading kubelet v1.15.2
  • Pulling images ...
  • Launching Kubernetes ...

X Error starting cluster: cmd failed: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap

: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
output: [init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[WARNING Hostname]: hostname "minikube" could not be reached
[WARNING Hostname]: hostname "minikube": lookup minikube on 127.0.0.53:53: server misbehaving
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/var/lib/minikube/certs/"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.0.1.24 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.0.1.24 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
.: exit status 1

root@instance-274528:~# minikube logs

  • ==> Docker <==
  • -- Logs begin at Mon 2019-08-19 13:50:52 UTC, end at Tue 2019-08-20 09:27:21 UTC. --
  • Aug 19 14:52:49 instance-274528 dockerd[19665]: time="2019-08-19T14:52:49.127742018Z" level=info msg="Processing signal 'terminated'"
  • Aug 19 14:52:49 instance-274528 dockerd[19665]: time="2019-08-19T14:52:49.128988880Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
  • Aug 19 14:52:49 instance-274528 dockerd[19665]: time="2019-08-19T14:52:49.129447187Z" level=info msg="Daemon shutdown complete"
  • Aug 19 14:52:49 instance-274528 systemd[1]: Stopped Docker Application Container Engine.
  • Aug 19 14:52:49 instance-274528 systemd[1]: Starting Docker Application Container Engine...
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.205498972Z" level=info msg="Starting up"
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.206338136Z" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf"
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.207167430Z" level=info msg="parsed scheme: "unix"" module=grpc
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.207193083Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.207216133Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.207227935Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.207320101Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000766ae0, CONNECTING" module=grpc
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.207357265Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.207855605Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000766ae0, READY" module=grpc
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.208830985Z" level=info msg="parsed scheme: "unix"" module=grpc
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.208863361Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.208890896Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.208922306Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.209032136Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00079b010, CONNECTING" module=grpc
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.209498114Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00079b010, READY" module=grpc
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.215422527Z" level=warning msg="Your kernel does not support swap memory limit"
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.215449758Z" level=warning msg="Your kernel does not support cgroup rt period"
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.215458338Z" level=warning msg="Your kernel does not support cgroup rt runtime"
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.215611788Z" level=info msg="Loading containers: start."
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.317260686Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.446644119Z" level=info msg="Loading containers: done."
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.494049795Z" level=info msg="Docker daemon" commit=74b1e89 graphdriver(s)=overlay2 version=19.03.1
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.494108147Z" level=info msg="Daemon has completed initialization"
  • Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.505007228Z" level=info msg="API listen on /var/run/docker.sock"
  • Aug 19 14:52:49 instance-274528 systemd[1]: Started Docker Application Container Engine.
  • ==> container status <==
  • time="2019-08-20T09:27:31Z" level=fatal msg="failed to connect: failed to connect: context deadline exceeded"
  • CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
  • ==> dmesg <==
  • [Aug19 13:50] Support mounting host directories into pods #2
  • [ +0.008472] Support kubernetes dashboard. #3
  • [ +0.076618] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
  • [ +0.213797] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
  • [ +0.468437] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 10
  • [ +0.072661] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
  • [ +9.025269] sd 2:0:0:0: Power-on or device reset occurred
  • [ +0.007590] GPT:Primary header thinks Alt. header is not at the end of the disk.
  • [ +0.000001] GPT:4612095 != 209715199
  • [ +0.000000] GPT:Alternate GPT header not at the end of the disk.
  • [ +0.000001] GPT:4612095 != 209715199
  • [ +0.000000] GPT: Use GNU Parted to correct GPT errors.
  • [Aug19 13:51] new mount options do not match the existing superblock, will be ignored
  • [Aug19 14:44] kauditd_printk_skb: 5 callbacks suppressed
  • [Aug19 15:25] systemd: 43 output lines suppressed due to ratelimiting
  • ==> kernel <==
  • 09:27:31 up 19:36, 2 users, load average: 1.16, 0.49, 0.48
  • Linux instance-274528 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  • PRETTY_NAME="Ubuntu 18.04.3 LTS"
  • ==> kubelet <==
  • -- Logs begin at Mon 2019-08-19 13:50:52 UTC, end at Tue 2019-08-20 09:27:31 UTC. --
  • Aug 20 09:27:30 instance-274528 kubelet[22223]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Aug 20 09:27:30 instance-274528 kubelet[22223]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Aug 20 09:27:30 instance-274528 kubelet[22223]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Aug 20 09:27:30 instance-274528 kubelet[22223]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Aug 20 09:27:30 instance-274528 kubelet[22223]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Aug 20 09:27:30 instance-274528 kubelet[22223]: Flag --resolv-conf has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Aug 20 09:27:30 instance-274528 kubelet[22223]: I0820 09:27:30.842387 22223 server.go:425] Version: v1.15.2
  • Aug 20 09:27:30 instance-274528 kubelet[22223]: I0820 09:27:30.842597 22223 plugins.go:103] No cloud provider specified.
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.015031 22223 server.go:661] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.015360 22223 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.015385 22223 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.015543 22223 container_manager_linux.go:286] Creating device plugin manager: true
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.015574 22223 state_mem.go:36] [cpumanager] initializing new in-memory state store
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.015675 22223 state_mem.go:84] [cpumanager] updated default cpuset: ""
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.015687 22223 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.015788 22223 kubelet.go:281] Adding pod path: /etc/kubernetes/manifests
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.015819 22223 kubelet.go:306] Watching apiserver
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: E0820 09:27:31.017220 22223 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: E0820 09:27:31.018134 22223 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: E0820 09:27:31.018346 22223 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.018934 22223 client.go:75] Connecting to docker on unix:///var/run/docker.sock
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.019131 22223 client.go:104] Start docker client with request timeout=2m0s
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: W0820 09:27:31.040558 22223 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.040607 22223 docker_service.go:238] Hairpin mode set to "hairpin-veth"
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: W0820 09:27:31.040783 22223 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.044724 22223 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.065900 22223 docker_service.go:258] Docker Info: &{ID:HMJN:PVXM:7MLD:7HKS:WULS:CGBN:WJPJ:PQFZ:LOO4:VQWX:46VN:TBWX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2019-08-20T09:27:31.045631296Z LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:4.15.0-48-generic OperatingSystem:Ubuntu 18.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00084e150 NCPU:4 MemTotal:8363880448 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:instance-274528 Labels:[] ExperimentalBuild:false ServerVersion:19.03.1 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:894b81a4b802e4eb2a91d1ce216b8817763c29fb Expected:894b81a4b802e4eb2a91d1ce216b8817763c29fb} RuncCommit:{ID:425e105d5a03fabd737a126ad93d62a9eeede87f Expected:425e105d5a03fabd737a126ad93d62a9eeede87f} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support]}
  • Aug 20 09:27:31 instance-274528 kubelet[22223]: F0820 09:27:31.066052 22223 server.go:273] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
  • Aug 20 09:27:31 instance-274528 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
  • Aug 20 09:27:31 instance-274528 systemd[1]: kubelet.service: Failed with result 'exit-code'.

I am trying to install Qlik Sense on a Kubernetes cluster, following this guide:
http://www.bardess.com/qlik-sense-on-kubernetes-a-beginners-guide/

OS: Ubuntu 18.04.2 LTS

@tstromberg
Contributor

kubelet[22223]: F0820 09:27:31.066052 22223 server.go:273] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

This issue appears to be a duplicate of #4172, so I will close this one so that we may centralize the content relating to the issue. If you feel that this issue is not in fact a duplicate, please feel free to re-open it. If you have additional information to share, please add it to the new issue.

Thank you for reporting this!

@tstromberg tstromberg changed the title cannot launch minikube none: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd" Aug 20, 2019
@tstromberg tstromberg added co/none-driver triage/duplicate Indicates an issue is a duplicate of other open issue. labels Aug 20, 2019
@afbjorklund
Collaborator

I wonder if Docker has started to configure "systemd" as the default; I seem to recall that it used to start out with "cgroupfs" until you explicitly configured /etc/docker/daemon.json to use the other one?
There seem to be more of these reports from Ubuntu lately. Previously it was just RedHat that did some custom configuration (such as switching the cgroup driver and the "aufs" storage driver, for instance).
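If that is what is happening here, one common workaround (a sketch, not an official minikube fix) is to pin Docker back to the cgroupfs driver in /etc/docker/daemon.json so it matches the kubelet's default:

```json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
```

After writing this file and running `sudo systemctl restart docker`, `docker info` should report `Cgroup Driver: cgroupfs` instead of `systemd`.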

@afbjorklund
Collaborator

Of course, Docker has the opposite opinion from Kubernetes:

# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
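The other direction is to align the kubelet with Docker's systemd driver instead. As a sketch (kubeadm/minikube generate this file themselves, so the exact path and contents may differ), the driver can be set via the kubelet's KubeletConfiguration:

```yaml
# /var/lib/kubelet/config.yaml (fragment, hypothetical example)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```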
