
Kic drivers: mount more Linux folders into the container for non-standard /lib/modules/ #8370

wakenhole opened this issue Jun 4, 2020 · 17 comments

@wakenhole

Steps to reproduce the issue:

export http_proxy="{MY Proxy addr}"
export https_proxy="{MY Proxy addr}"
export no_proxy="localhost,127.0.0.1,192.168.99.0/24,10.96.0.0/12,192.168.39.0/24"

minikube start --docker-env http_proxy=$http_proxy --docker-env https_proxy=$https_proxy --docker-env no_proxy=$no_proxy

Full output of failed command:

I0604 19:52:34.906603 30956 logs.go:117] Gathering logs for kube-apiserver [d02e5236b2cf] ...
I0604 19:52:34.906623 30956 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 d02e5236b2cf"
I0604 19:52:34.951583 30956 logs.go:117] Gathering logs for etcd [fcbe7846e192] ...
I0604 19:52:34.951603 30956 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 fcbe7846e192"
I0604 19:52:34.993177 30956 logs.go:117] Gathering logs for kube-scheduler [8e201d6ef5b7] ...
I0604 19:52:34.993196 30956 ssh_runner.go:148] Run: /bin/bash -c "docker logs --tail 400 8e201d6ef5b7"
I0604 19:52:35.032671 30956 logs.go:117] Gathering logs for describe nodes ...
I0604 19:52:35.032693 30956 ssh_runner.go:148] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0604 19:52:39.438305 30956 ssh_runner.go:188] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (4.405567143s)
I0604 19:52:39.438433 30956 logs.go:117] Gathering logs for container status ...
I0604 19:52:39.438470 30956 ssh_runner.go:148] Run: /bin/bash -c "sudo which crictl || echo crictl ps -a || sudo docker ps -a"
W0604 19:52:39.485743 30956 out.go:201] Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.3.0-53-generic
DOCKER_VERSION: 19.03.2
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.509685 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.

stderr:
W0604 10:50:06.310736 11552 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.3.0-53-generic\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0604 10:50:07.762778 11552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0604 10:50:07.763580 11552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher


@sharifelgamal (Collaborator)

Looks like kubeadm init times out for some reason. Can you post some extra detail, like what OS and driver you're using? Can you also post the full output of minikube start -v=5 --alsologtostderr and minikube logs?

@sharifelgamal sharifelgamal added priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. kind/support Categorizes issue or PR as a support question. labels Jun 4, 2020
@croensch

I believe I have the same error. I am also using a proxy, but all of that is configured in the environment (and also works with docker-compose), so I should not have to pass anything. It seems minikube downloaded k8s just fine. I ran minikube start --driver=docker:

[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.

stderr:
W0618 11:25:15.794037    3025 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.3.0-53-generic\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0618 11:25:17.242599    3025 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0618 11:25:17.243525    3025 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher

Running with -v=5 revealed ❗ You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.17.0.3).

Since my system defines both the uppercase and lowercase variables for maximum compatibility...

🌐  Found network options:
    ▪ HTTP_PROXY=http://_proxy.company.local:3128_
    ▪ HTTPS_PROXY=http://_proxy.company.local:3128_
    ▪ NO_PROXY=localhost,127.0.0.1,_.company.local_,10.244.0.0/16,172.17.0.0/16
    ▪ http_proxy=http://_proxy.company.local:3128_
    ▪ https_proxy=http://_proxy.company.local:3128_
    ▪ no_proxy=localhost,127.0.0.1,_.company.local_,10.244.0.0/16,172.17.0.0/16

...I needed to update both...

export NO_PROXY=$NO_PROXY,10.244.0.0/16,172.17.0.0/16
export no_proxy=$no_proxy,10.244.0.0/16,172.17.0.0/16

...which looks good [the ❗ error has now disappeared]...

🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.2...
    ▪ env HTTP_PROXY=http://_proxy.company.local:3128_
    ▪ env HTTPS_PROXY=http://_proxy.company.local:3128_
    ▪ env NO_PROXY=localhost,127.0.0.1,_.company.local_,10.244.0.0/16,172.17.0.0/16
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16

...but I still got the same error!

@sharifelgamal Ubuntu 18.04

@croensch

So I read starting-a-cluster, which says that I still have to pass something, and tried minikube start --docker-env HTTPS_PROXY=$HTTPS_PROXY --docker-env HTTP_PROXY=$HTTP_PROXY --docker-env NO_PROXY=$NO_PROXY --docker-env https_proxy=$https_proxy --docker-env http_proxy=$http_proxy --docker-env no_proxy=$no_proxy --driver=docker -v=5 --alsologtostderr, but the error messages appear to remain the same (also with --kubernetes-version=1.17.7).
I also added the vars to my /etc/environment and rebooted.
I also ran minikube delete between runs.
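For reference, the /etc/environment entries were roughly the following (values are placeholders matching the redacted output above):

HTTP_PROXY=http://_proxy.company.local:3128_
HTTPS_PROXY=http://_proxy.company.local:3128_
NO_PROXY=localhost,127.0.0.1,_.company.local_,10.244.0.0/16,172.17.0.0/16
http_proxy=http://_proxy.company.local:3128_
https_proxy=http://_proxy.company.local:3128_
no_proxy=localhost,127.0.0.1,_.company.local_,10.244.0.0/16,172.17.0.0/16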

@tstromberg tstromberg changed the title minikube start failure proxy: Error writing Crisocket information for the control-plane node: timed out waiting for the condition Jul 22, 2020
@tstromberg (Contributor)

tstromberg commented Jul 22, 2020

Can you please try starting up without the --docker-env flags?

It shouldn't be necessary, as minikube will set these automatically based on your environment.

If you are still having issues, can you please share the output of minikube logs?

@tstromberg tstromberg added the triage/needs-information Indicates an issue needs more information in order to work on it. label Jul 22, 2020
@koji117

koji117 commented Aug 7, 2020

Hi, I am facing the same issue.
I have tried minikube start --driver=docker with and without --docker-env flags but got the same result.
Below is the output of minikube logs

==> Docker <==
-- Logs begin at Fri 2020-08-07 06:46:50 UTC, end at Fri 2020-08-07 06:52:51 UTC. --
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.172217418Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00070d980, CONNECTING" module=grpc
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.172283139Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.174468234Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00070d980, READY" module=grpc
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.175313311Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.175325548Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.175347997Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.175360625Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.175398125Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000152900, CONNECTING" module=grpc
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.175894418Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000152900, READY" module=grpc
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.188443924Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.213317751Z" level=info msg="Loading containers: start."
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.268115558Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.298259896Z" level=info msg="Loading containers: done."
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.364578701Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.364666326Z" level=info msg="Daemon has completed initialization"
Aug 07 06:46:51 minikube systemd[1]: Started Docker Application Container Engine.
Aug 07 06:46:51 minikube dockerd[117]: time="2020-08-07T06:46:51.384361158Z" level=info msg="API listen on /run/docker.sock"
Aug 07 06:46:55 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Aug 07 06:46:55 minikube systemd[1]: Stopping Docker Application Container Engine...
Aug 07 06:46:55 minikube dockerd[117]: time="2020-08-07T06:46:55.341712930Z" level=info msg="Processing signal 'terminated'"
Aug 07 06:46:55 minikube dockerd[117]: time="2020-08-07T06:46:55.342474260Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Aug 07 06:46:55 minikube dockerd[117]: time="2020-08-07T06:46:55.342825607Z" level=info msg="Daemon shutdown complete"
Aug 07 06:46:55 minikube systemd[1]: docker.service: Succeeded.
Aug 07 06:46:55 minikube systemd[1]: Stopped Docker Application Container Engine.
Aug 07 06:46:55 minikube systemd[1]: Starting Docker Application Container Engine...
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.452280214Z" level=info msg="Starting up"
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.454635680Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.454652834Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.454676990Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.454698377Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.454808630Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000155e20, CONNECTING" module=grpc
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.455093926Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000155e20, READY" module=grpc
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.455700043Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.455710541Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.455719673Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.455732569Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.455769299Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000917760, CONNECTING" module=grpc
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.455773870Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.455910873Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000917760, READY" module=grpc
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.471914502Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.477755228Z" level=info msg="Loading containers: start."
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.539713394Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.563088971Z" level=info msg="Loading containers: done."
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.575024215Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.575071481Z" level=info msg="Daemon has completed initialization"
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.587258893Z" level=info msg="API listen on /var/run/docker.sock"
Aug 07 06:46:55 minikube dockerd[355]: time="2020-08-07T06:46:55.587278407Z" level=info msg="API listen on [::]:2376"
Aug 07 06:46:55 minikube systemd[1]: Started Docker Application Container Engine.
Aug 07 06:49:14 minikube dockerd[355]: time="2020-08-07T06:49:14.022114409Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 06:49:14 minikube dockerd[355]: time="2020-08-07T06:49:14.022183064Z" level=warning msg="d51563a2ee7b34d27c3ea42a55290e2f4355f4e6ecaca1dc55ce59d63dc58602 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d51563a2ee7b34d27c3ea42a55290e2f4355f4e6ecaca1dc55ce59d63dc58602/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 06:49:14 minikube dockerd[355]: time="2020-08-07T06:49:14.151523956Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 06:49:14 minikube dockerd[355]: time="2020-08-07T06:49:14.151580525Z" level=warning msg="0115222f9274b3fd019e9a84032286af3f9b64234c5ecebea6414eda8bd783a3 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/0115222f9274b3fd019e9a84032286af3f9b64234c5ecebea6414eda8bd783a3/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 06:49:14 minikube dockerd[355]: time="2020-08-07T06:49:14.282221725Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 06:49:14 minikube dockerd[355]: time="2020-08-07T06:49:14.282293090Z" level=warning msg="6bec1ebe91571fce7ef93a868f50fb9e6c99b8776e27ed54a9e4284cb9297203 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/6bec1ebe91571fce7ef93a868f50fb9e6c99b8776e27ed54a9e4284cb9297203/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 06:49:14 minikube dockerd[355]: time="2020-08-07T06:49:14.408293368Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 06:49:14 minikube dockerd[355]: time="2020-08-07T06:49:14.408349010Z" level=warning msg="6343a0f1f21eda8622b56a02d59b30b1945cd34493ac0d32ea4b3ef5ba544cce cleanup: failed to unmount IPC: umount /var/lib/docker/containers/6343a0f1f21eda8622b56a02d59b30b1945cd34493ac0d32ea4b3ef5ba544cce/mounts/shm, flags: 0x2: no such file or directory"
Aug 07 06:49:14 minikube dockerd[355]: time="2020-08-07T06:49:14.533179090Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 06:49:14 minikube dockerd[355]: time="2020-08-07T06:49:14.663083941Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 06:49:14 minikube dockerd[355]: time="2020-08-07T06:49:14.793984173Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 06:49:14 minikube dockerd[355]: time="2020-08-07T06:49:14.925071292Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
2ae7c7eccbc56       303ce5db0e90d       3 minutes ago       Running             etcd                      0                   2d94ba24f8be2
c250496d6e4d6       76216c34ed0c7       3 minutes ago       Running             kube-scheduler            0                   fae6d2075cee7
181d95cbe8d9a       da26705ccb4b5       3 minutes ago       Running             kube-controller-manager   0                   ec20a4748bfcb
4bf00b5401eeb       7e28efa976bd1       3 minutes ago       Running             kube-apiserver            0                   7ce60b8ac14bc

==> describe nodes <==
No resources found in default namespace.

==> dmesg <==
[Aug 7 05:58] ACPI: RSDP 00000000000f04a0 00024 (v02 ALASKA)
[  +0.000000] ACPI: XSDT 00000000dd9e9098 000B4 (v01 ALASKA    A M I 01072009 AMI  00010013)
[  +0.000000] ACPI: FACP 00000000dd9f5b80 0010C (v05 ALASKA    A M I 01072009 AMI  00010013)
[  +0.000000] ACPI: DSDT 00000000dd9e91e0 0C99A (v02 ALASKA    A M I 00000A12 INTL 20120711)
[  +0.000000] ACPI: FACS 00000000ddb5e080 00040
[  +0.000000] ACPI: APIC 00000000dd9f5c90 00092 (v03 ALASKA    A M I 01072009 AMI  00010013)
[  +0.000000] ACPI: FPDT 00000000dd9f5d28 00044 (v01 ALASKA    A M I 01072009 AMI  00010013)
[  +0.000000] ACPI: SSDT 00000000dd9f5d70 00539 (v01  PmRef  Cpu0Ist 00003000 INTL 20120711)
[  +0.000000] ACPI: SSDT 00000000dd9f62b0 00AD8 (v01  PmRef    CpuPm 00003000 INTL 20120711)
[  +0.000000] ACPI: SSDT 00000000dd9f6d88 002DE (v01  PmRef  Cpu0Tst 00003000 INTL 20120711)
[  +0.000000] ACPI: SSDT 00000000dd9f7068 003BF (v01  PmRef    ApTst 00003000 INTL 20120711)
[  +0.000000] ACPI: MCFG 00000000dd9f7428 0003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
[  +0.000000] ACPI: PRAD 00000000dd9f7468 000BE (v02 PRADID  PRADTID 00000001 MSFT 03000001)
[  +0.000000] ACPI: HPET 00000000dd9f7528 00038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
[  +0.000000] ACPI: SSDT 00000000dd9f7560 0036D (v01 SataRe SataTabl 00001000 INTL 20120711)
[  +0.000000] ACPI: SSDT 00000000dd9f78d0 03528 (v01 SaSsdt  SaSsdt  00003000 INTL 20091112)
[  +0.000000] ACPI: SPMI 00000000dd9fadf8 00040 (v05 A M I   OEMSPMI 00000000 AMI. 00000000)
[  +0.000000] ACPI: DMAR 00000000dd9fae38 00070 (v01 INTEL      HSW  00000001 INTL 00000001)
[  +0.000000] ACPI: EINJ 00000000dd9faea8 00130 (v01    AMI AMI EINJ 00000000      00000000)
[  +0.000000] ACPI: ERST 00000000dd9fafd8 00230 (v01  AMIER AMI ERST 00000000      00000000)
[  +0.000000] ACPI: HEST 00000000dd9fb208 000A8 (v01    AMI AMI HEST 00000000      00000000)
[  +0.000000] ACPI: BERT 00000000dd9fb2b0 00030 (v01    AMI AMI BERT 00000000      00000000)
[  +0.000000] Zone ranges:
[  +0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[  +0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[  +0.000000]   Normal   [mem 0x100000000-0x81fffffff]
[  +0.000000] Movable zone start for each node
[  +0.000000] Early memory node ranges
[  +0.000000]   node   0: [mem 0x00001000-0x00098fff]
[  +0.000000]   node   0: [mem 0x00100000-0xccc2dfff]
[  +0.000000]   node   0: [mem 0xccc35000-0xdd947fff]
[  +0.000000]   node   0: [mem 0xdf7ff000-0xdf7fffff]
[  +0.000000]   node   0: [mem 0x100000000-0x81fffffff]
[  +0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 8247647
[  +0.000000] Policy zone: Normal
[  +0.000000] ACPI: All ACPI Tables successfully acquired
[  +0.039364] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[  +0.000053]  #5 #6 #7 OK
[  +0.196114] pmd_set_huge: Cannot satisfy [mem 0xf8000000-0xf8200000] with a huge-page mapping due to MTRR override.
[  +0.097926] ACPI: Dynamic OEM Table Load:
[  +0.000005] ACPI: SSDT ffff9935b40d5000 003D3 (v01  PmRef  Cpu0Cst 00003001 INTL 20120711)
[  +0.000459] ACPI: Dynamic OEM Table Load:
[  +0.000004] ACPI: SSDT ffff993c5f782000 005AA (v01  PmRef    ApIst 00003000 INTL 20120711)
[  +0.000403] ACPI: Dynamic OEM Table Load:
[  +0.000003] ACPI: SSDT ffff9935b40f9c00 00119 (v01  PmRef    ApCst 00003000 INTL 20120711)
[  +0.001182] ACPI: GPE 0x24 active on init
[  +0.000006] ACPI: Enabled 7 GPEs in block 00 to 3F
[  +0.000105] ACPI Error: [\_SB_.PRAD] Namespace lookup failure, AE_NOT_FOUND (20130517/psargs-359)
[  +0.000004] ACPI Error: Method parse/execution failed [\_GPE._L24] (Node ffff9935b4ae0848), AE_NOT_FOUND (20130517/psparse-536)
[  +0.000004] ACPI Exception: AE_NOT_FOUND, while evaluating GPE method [_L24] (20130517/evgpe-633)
[  +0.381527] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[  +1.209562] i8042: No controller found
[  +2.589679] vboxdrv: loading out-of-tree module taints kernel.
[  +0.223532] VBoxNetFlt: Successfully started.
[  +0.001552] VBoxNetAdp: Successfully started.
[  +9.053943] TECH PREVIEW: Overlay filesystem may not be fully supported.
              Please review provided documentation for limitations.
[Aug 7 06:46] TECH PREVIEW: nf_tables may not be fully supported.
              Please review provided documentation for limitations.

==> etcd [2ae7c7eccbc5] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-08-07 06:49:23.752829 I | etcdmain: etcd Version: 3.4.3
2020-08-07 06:49:23.752855 I | etcdmain: Git SHA: 3cf2f69b5
2020-08-07 06:49:23.752857 I | etcdmain: Go Version: go1.12.12
2020-08-07 06:49:23.752861 I | etcdmain: Go OS/Arch: linux/amd64
2020-08-07 06:49:23.752864 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-08-07 06:49:23.752926 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-08-07 06:49:23.753485 I | embed: name = minikube
2020-08-07 06:49:23.753493 I | embed: data dir = /var/lib/minikube/etcd
2020-08-07 06:49:23.753497 I | embed: member dir = /var/lib/minikube/etcd/member
2020-08-07 06:49:23.753501 I | embed: heartbeat = 100ms
2020-08-07 06:49:23.753504 I | embed: election = 1000ms
2020-08-07 06:49:23.753508 I | embed: snapshot count = 10000
2020-08-07 06:49:23.753517 I | embed: advertise client URLs = https://172.17.0.3:2379
2020-08-07 06:49:23.760282 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2
raft2020/08/07 06:49:23 INFO: b273bc7741bcb020 switched to configuration voters=()
raft2020/08/07 06:49:23 INFO: b273bc7741bcb020 became follower at term 0
raft2020/08/07 06:49:23 INFO: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/08/07 06:49:23 INFO: b273bc7741bcb020 became follower at term 1
raft2020/08/07 06:49:23 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
2020-08-07 06:49:23.762813 W | auth: simple token is not cryptographically signed
2020-08-07 06:49:23.764119 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-08-07 06:49:23.766703 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-08-07 06:49:23.766871 I | embed: listening for metrics on http://127.0.0.1:2381
2020-08-07 06:49:23.767178 I | embed: listening for peers on 172.17.0.3:2380
2020-08-07 06:49:23.767253 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/08/07 06:49:23 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
2020-08-07 06:49:23.836476 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2
raft2020/08/07 06:49:24 INFO: b273bc7741bcb020 is starting a new election at term 1
raft2020/08/07 06:49:24 INFO: b273bc7741bcb020 became candidate at term 2
raft2020/08/07 06:49:24 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2
raft2020/08/07 06:49:24 INFO: b273bc7741bcb020 became leader at term 2
raft2020/08/07 06:49:24 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2
2020-08-07 06:49:24.261137 I | etcdserver: setting up the initial cluster version to 3.4
2020-08-07 06:49:24.261386 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2
2020-08-07 06:49:24.261408 I | embed: ready to serve client requests
2020-08-07 06:49:24.261455 N | etcdserver/membership: set the initial cluster version to 3.4
2020-08-07 06:49:24.261516 I | embed: ready to serve client requests
2020-08-07 06:49:24.261532 I | etcdserver/api: enabled capabilities for version 3.4
2020-08-07 06:49:24.263199 I | embed: serving client requests on 172.17.0.3:2379
2020-08-07 06:49:24.263474 I | embed: serving client requests on 127.0.0.1:2379

==> kernel <==
 06:52:51 up 54 min,  0 users,  load average: 0.15, 0.15, 0.09
Linux minikube 3.10.0-957.27.2.el7.x86_64 #1 SMP Mon Jul 29 17:46:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kube-apiserver [4bf00b5401ee] <==
I0807 06:49:25.577251       1 client.go:361] parsed scheme: "endpoint"
I0807 06:49:25.577266       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0807 06:49:25.640814       1 client.go:361] parsed scheme: "endpoint"
I0807 06:49:25.640848       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
W0807 06:49:25.681388       1 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
W0807 06:49:25.688015       1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0807 06:49:25.696645       1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0807 06:49:25.741407       1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0807 06:49:25.744093       1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0807 06:49:25.755080       1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0807 06:49:25.770106       1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0807 06:49:25.770120       1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0807 06:49:25.777200       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0807 06:49:25.777221       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0807 06:49:25.778424       1 client.go:361] parsed scheme: "endpoint"
I0807 06:49:25.778441       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0807 06:49:25.793331       1 client.go:361] parsed scheme: "endpoint"
I0807 06:49:25.793374       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I0807 06:49:27.229329       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0807 06:49:27.229330       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0807 06:49:27.229396       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0807 06:49:27.229864       1 secure_serving.go:178] Serving securely on [::]:8443
I0807 06:49:27.229957       1 available_controller.go:387] Starting AvailableConditionController
I0807 06:49:27.229963       1 controller.go:81] Starting OpenAPI AggregationController
I0807 06:49:27.229972       1 autoregister_controller.go:141] Starting autoregister controller
I0807 06:49:27.229970       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0807 06:49:27.229980       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0807 06:49:27.229985       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0807 06:49:27.229968       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0807 06:49:27.230001       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0807 06:49:27.230009       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0807 06:49:27.230016       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0807 06:49:27.230416       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0807 06:49:27.230426       1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0807 06:49:27.230455       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0807 06:49:27.230486       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0807 06:49:27.230869       1 crd_finalizer.go:266] Starting CRDFinalizer
I0807 06:49:27.231444       1 controller.go:86] Starting OpenAPI controller
I0807 06:49:27.231471       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0807 06:49:27.231492       1 naming_controller.go:291] Starting NamingConditionController
I0807 06:49:27.231507       1 establishing_controller.go:76] Starting EstablishingController
I0807 06:49:27.231525       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0807 06:49:27.231543       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
E0807 06:49:27.232905       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.3, ResourceVersion: 0, AdditionalErrorMsg: 
I0807 06:49:27.330166       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0807 06:49:27.330213       1 cache.go:39] Caches are synced for autoregister controller
I0807 06:49:27.330760       1 shared_informer.go:230] Caches are synced for crd-autoregister 
I0807 06:49:27.330778       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
I0807 06:49:27.336333       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0807 06:49:28.229416       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0807 06:49:28.229466       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0807 06:49:28.245646       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0807 06:49:28.250328       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0807 06:49:28.250359       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0807 06:49:28.480540       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0807 06:49:28.497013       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0807 06:49:28.564966       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.3]
I0807 06:49:28.565686       1 controller.go:606] quota admission added evaluator for: endpoints
I0807 06:49:28.567541       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0807 06:49:29.508810       1 controller.go:606] quota admission added evaluator for: serviceaccounts

==> kube-controller-manager [181d95cbe8d9] <==
I0807 06:49:33.907891       1 pv_protection_controller.go:83] Starting PV protection controller
I0807 06:49:33.907896       1 shared_informer.go:223] Waiting for caches to sync for PV protection
I0807 06:49:34.157957       1 controllermanager.go:533] Started "endpoint"
I0807 06:49:34.158009       1 endpoints_controller.go:182] Starting endpoint controller
I0807 06:49:34.158018       1 shared_informer.go:223] Waiting for caches to sync for endpoint
I0807 06:49:34.407902       1 controllermanager.go:533] Started "daemonset"
I0807 06:49:34.407921       1 daemon_controller.go:257] Starting daemon sets controller
I0807 06:49:34.407930       1 shared_informer.go:223] Waiting for caches to sync for daemon sets
E0807 06:49:34.657921       1 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0807 06:49:34.657935       1 controllermanager.go:525] Skipping "service"
I0807 06:49:35.562467       1 garbagecollector.go:133] Starting garbage collector controller
I0807 06:49:35.562482       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0807 06:49:35.562498       1 graph_builder.go:282] GraphBuilder running
I0807 06:49:35.562553       1 controllermanager.go:533] Started "garbagecollector"
I0807 06:49:35.574951       1 controllermanager.go:533] Started "disruption"
I0807 06:49:35.575030       1 disruption.go:331] Starting disruption controller
I0807 06:49:35.575037       1 shared_informer.go:223] Waiting for caches to sync for disruption
I0807 06:49:35.707923       1 controllermanager.go:533] Started "statefulset"
I0807 06:49:35.707963       1 stateful_set.go:146] Starting stateful set controller
I0807 06:49:35.707970       1 shared_informer.go:223] Waiting for caches to sync for stateful set
I0807 06:49:35.958561       1 controllermanager.go:533] Started "podgc"
I0807 06:49:35.958602       1 gc_controller.go:89] Starting GC controller
I0807 06:49:35.958609       1 shared_informer.go:223] Waiting for caches to sync for GC
I0807 06:49:36.207865       1 controllermanager.go:533] Started "bootstrapsigner"
I0807 06:49:36.207888       1 core.go:239] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0807 06:49:36.207896       1 controllermanager.go:525] Skipping "route"
I0807 06:49:36.208153       1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0807 06:49:36.207888       1 shared_informer.go:223] Waiting for caches to sync for bootstrap_signer
I0807 06:49:36.234192       1 shared_informer.go:230] Caches are synced for deployment 
I0807 06:49:36.249073       1 shared_informer.go:230] Caches are synced for HPA 
I0807 06:49:36.258161       1 shared_informer.go:230] Caches are synced for PVC protection 
I0807 06:49:36.258258       1 shared_informer.go:230] Caches are synced for ReplicaSet 
I0807 06:49:36.275215       1 shared_informer.go:230] Caches are synced for disruption 
I0807 06:49:36.275233       1 disruption.go:339] Sending events to api server.
I0807 06:49:36.307997       1 shared_informer.go:230] Caches are synced for PV protection 
I0807 06:49:36.308086       1 shared_informer.go:230] Caches are synced for stateful set 
I0807 06:49:36.308094       1 shared_informer.go:230] Caches are synced for expand 
I0807 06:49:36.308422       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
I0807 06:49:36.313788       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
I0807 06:49:36.358007       1 shared_informer.go:230] Caches are synced for ReplicationController 
I0807 06:49:36.358015       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
I0807 06:49:36.407604       1 shared_informer.go:230] Caches are synced for TTL 
I0807 06:49:36.408134       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
I0807 06:49:36.408223       1 shared_informer.go:230] Caches are synced for attach detach 
I0807 06:49:36.458107       1 shared_informer.go:230] Caches are synced for persistent volume 
I0807 06:49:36.458698       1 shared_informer.go:230] Caches are synced for GC 
I0807 06:49:36.623283       1 shared_informer.go:230] Caches are synced for job 
I0807 06:49:36.758091       1 shared_informer.go:230] Caches are synced for endpoint 
I0807 06:49:36.808229       1 shared_informer.go:230] Caches are synced for endpoint_slice 
I0807 06:49:36.809582       1 shared_informer.go:230] Caches are synced for resource quota 
I0807 06:49:36.811123       1 shared_informer.go:230] Caches are synced for namespace 
I0807 06:49:36.818536       1 shared_informer.go:230] Caches are synced for service account 
I0807 06:49:36.858020       1 shared_informer.go:230] Caches are synced for taint 
I0807 06:49:36.858099       1 taint_manager.go:187] Starting NoExecuteTaintManager
I0807 06:49:36.862617       1 shared_informer.go:230] Caches are synced for garbage collector 
I0807 06:49:36.862625       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0807 06:49:36.908000       1 shared_informer.go:230] Caches are synced for daemon sets 
I0807 06:49:36.908267       1 shared_informer.go:230] Caches are synced for resource quota 
I0807 06:49:37.059117       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0807 06:49:37.059173       1 shared_informer.go:230] Caches are synced for garbage collector 

==> kube-scheduler [c250496d6e4d] <==
I0807 06:49:23.850397       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0807 06:49:23.850439       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0807 06:49:24.476953       1 serving.go:313] Generated self-signed cert in-memory
W0807 06:49:27.247998       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0807 06:49:27.248171       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0807 06:49:27.248242       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0807 06:49:27.248301       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0807 06:49:27.261426       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0807 06:49:27.261445       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0807 06:49:27.262506       1 authorization.go:47] Authorization is disabled
W0807 06:49:27.262513       1 authentication.go:40] Authentication is disabled
I0807 06:49:27.262525       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0807 06:49:27.263316       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0807 06:49:27.263337       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0807 06:49:27.263573       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0807 06:49:27.263615       1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0807 06:49:27.264442       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0807 06:49:27.265694       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0807 06:49:27.265855       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0807 06:49:27.265941       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0807 06:49:27.266027       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0807 06:49:27.266117       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0807 06:49:27.266585       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0807 06:49:27.265904       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0807 06:49:27.265969       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0807 06:49:28.337208       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I0807 06:49:28.763446       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

==> kubelet <==
-- Logs begin at Fri 2020-08-07 06:46:50 UTC, end at Fri 2020-08-07 06:52:51 UTC. --
Aug 07 06:52:46 minikube kubelet[3213]: E0807 06:52:46.120020    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:46 minikube kubelet[3213]: E0807 06:52:46.220204    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:46 minikube kubelet[3213]: E0807 06:52:46.320384    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:46 minikube kubelet[3213]: E0807 06:52:46.420586    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:46 minikube kubelet[3213]: E0807 06:52:46.520782    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:46 minikube kubelet[3213]: E0807 06:52:46.590069    3213 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: Bad Gateway
Aug 07 06:52:46 minikube kubelet[3213]: E0807 06:52:46.620967    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:46 minikube kubelet[3213]: E0807 06:52:46.721121    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:46 minikube kubelet[3213]: E0807 06:52:46.821264    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:46 minikube kubelet[3213]: E0807 06:52:46.921439    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:47 minikube kubelet[3213]: E0807 06:52:47.021613    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:47 minikube kubelet[3213]: E0807 06:52:47.121839    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:47 minikube kubelet[3213]: E0807 06:52:47.222024    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:47 minikube kubelet[3213]: E0807 06:52:47.322207    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:47 minikube kubelet[3213]: E0807 06:52:47.422406    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:47 minikube kubelet[3213]: E0807 06:52:47.522615    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:47 minikube kubelet[3213]: E0807 06:52:47.622800    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:47 minikube kubelet[3213]: E0807 06:52:47.722995    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:47 minikube kubelet[3213]: E0807 06:52:47.823246    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:47 minikube kubelet[3213]: E0807 06:52:47.923371    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:48 minikube kubelet[3213]: E0807 06:52:48.023484    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:48 minikube kubelet[3213]: E0807 06:52:48.123625    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:48 minikube kubelet[3213]: E0807 06:52:48.223813    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:48 minikube kubelet[3213]: E0807 06:52:48.324010    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:48 minikube kubelet[3213]: E0807 06:52:48.424202    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:48 minikube kubelet[3213]: E0807 06:52:48.524410    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:48 minikube kubelet[3213]: E0807 06:52:48.624594    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:48 minikube kubelet[3213]: E0807 06:52:48.724781    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:48 minikube kubelet[3213]: E0807 06:52:48.825009    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:48 minikube kubelet[3213]: E0807 06:52:48.925186    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:49 minikube kubelet[3213]: E0807 06:52:49.025387    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:49 minikube kubelet[3213]: E0807 06:52:49.125583    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:49 minikube kubelet[3213]: E0807 06:52:49.225717    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:49 minikube kubelet[3213]: E0807 06:52:49.325904    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:49 minikube kubelet[3213]: E0807 06:52:49.426091    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:49 minikube kubelet[3213]: E0807 06:52:49.526219    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:49 minikube kubelet[3213]: E0807 06:52:49.626427    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:49 minikube kubelet[3213]: E0807 06:52:49.726593    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:49 minikube kubelet[3213]: E0807 06:52:49.826812    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:49 minikube kubelet[3213]: E0807 06:52:49.926994    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:50 minikube kubelet[3213]: E0807 06:52:50.027163    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:50 minikube kubelet[3213]: E0807 06:52:50.127281    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:50 minikube kubelet[3213]: E0807 06:52:50.227388    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:50 minikube kubelet[3213]: E0807 06:52:50.327514    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:50 minikube kubelet[3213]: E0807 06:52:50.427702    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:50 minikube kubelet[3213]: E0807 06:52:50.527897    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:50 minikube kubelet[3213]: E0807 06:52:50.628075    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:50 minikube kubelet[3213]: E0807 06:52:50.728189    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:50 minikube kubelet[3213]: E0807 06:52:50.828351    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:50 minikube kubelet[3213]: E0807 06:52:50.928580    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:51 minikube kubelet[3213]: E0807 06:52:51.028713    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:51 minikube kubelet[3213]: E0807 06:52:51.128849    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:51 minikube kubelet[3213]: E0807 06:52:51.228979    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:51 minikube kubelet[3213]: E0807 06:52:51.329102    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:51 minikube kubelet[3213]: E0807 06:52:51.429242    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:51 minikube kubelet[3213]: E0807 06:52:51.529383    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:51 minikube kubelet[3213]: E0807 06:52:51.629526    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:51 minikube kubelet[3213]: E0807 06:52:51.729684    3213 kubelet.go:2267] node "minikube" not found
Aug 07 06:52:51 minikube kubelet[3213]: E0807 06:52:51.807505    3213 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: Bad Gateway
Aug 07 06:52:51 minikube kubelet[3213]: E0807 06:52:51.829847    3213 kubelet.go:2267] node "minikube" not found

@medyagh medyagh changed the title proxy: Error writing Crisocket information for the control-plane node: timed out waiting for the condition docker driver: load kernel module: "configs" Sep 9, 2020
@medyagh medyagh changed the title docker driver: load kernel module: "configs" docker driver: unable to load kernel module: "configs" Sep 9, 2020
@medyagh (Member)

medyagh commented Sep 9, 2020

@croensch
@koji117
@okhwan your error doesn't seem to be related to the proxy settings; it is about the "configs" kernel module not being present.

Do you mind pasting the output of this command,

 modprobe configs

to see if that kernel module is available on your system?

@medyagh (Member)

medyagh commented Sep 9, 2020

@croensch
@koji117
@okhwan

Could you please share the output of

$ modprobe configs
$ echo $?

Do you also mind sharing in which cloud you have the Ubuntu 18.04? I would like to replicate this issue. Can you also please share:

uname -r

@tstromberg tstromberg changed the title docker driver: unable to load kernel module: "configs" kubeadm: FATAL: Module configs not found in directory /lib/modules/5.3.0-53-generic\n", err: exit status 1 Sep 23, 2020
@tstromberg (Contributor)

Also, I would like to see the output from your system of:

  • ls /lib/modules/5.3.0-53-generic
  • minikube ssh "ls /lib/modules/5.3.0-53-generic"
  • uname -a

This sounds to me like one of two types of problems:

  • The live kernel version no longer matches the contents of /lib/modules
  • /lib/modules isn't being mapped into Docker

It'd be nice to sort out which one it is.
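For example, a quick way to tell the two apart (a sketch, assuming the docker driver and the default profile):

# 1) On the host: does the live kernel still have a matching modules directory?
uname -r
ls /lib/modules/"$(uname -r)" >/dev/null && echo "host modules present"

# 2) Inside the kic container: is that directory mapped in?
minikube ssh "ls /lib/modules/$(uname -r)"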

@kppullin (Contributor)

FWIW, the module configs not found message may be misleading. I came across this issue when minikube failed to start on Ubuntu 20.04 with the FATAL: Module configs not found... message, but eventually found my actual root cause (#7923 (comment)).

Ubuntu seems to lack the configs module and places the config content elsewhere. This thread has more details: kubernetes-sigs/kind#61
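For example, on a stock Ubuntu host (a sketch; paths assume the standard generic kernel packages):

# Ubuntu ships the kernel config as a plain file rather than via the configs module:
head -n 3 /boot/config-"$(uname -r)"

# so loading the module fails, and /proc/config.gz never appears:
sudo modprobe configs; echo $?
ls /proc/config.gz 2>/dev/null || echo "no /proc/config.gz"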

@afbjorklund (Collaborator)

Kubeadm has a hardcoded list of paths to search for the kernel configuration file:
https://github.com/kubernetes/system-validators/blob/master/validators/kernel_validator.go#L180

There are two possible locations where it could have found it, but neither is mapped into KIC:

linux-modules-5.4.0-48-generic: /boot/config-5.4.0-48-generic

linux-headers-5.4.0-48-generic: /usr/src/linux-headers-5.4.0-48-generic/.config

The only config that we have under /lib/modules now is a broken symlink to build/config.

The /boot directory is empty in the container, and there is no guarantee of a /usr/src on the host.
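(For what it's worth, a quick host-side sketch checking the usual candidates, roughly mirroring the validator's search list linked above:)

for f in /proc/config.gz \
         /boot/config-"$(uname -r)" \
         /usr/src/linux-headers-"$(uname -r)"/.config \
         /lib/modules/"$(uname -r)"/build/.config; do
  [ -e "$f" ] && echo "found: $f"
done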


To really fix this issue, minikube should mount the /boot directory as well, most likely only the needed files.
Instead of mounting every available boot configuration and kernel (like now), mount just the ones for $(uname -r),

e.g.

/boot/config-5.4.0-48-generic
/lib/modules/5.4.0-48-generic

The use of "FATAL" here is also misleading; apparently kubeadm can cope with the missing config just fine...

For instance, in our own OS (minikube.iso), we don't have either of these directories available at runtime:
there is no /boot directory present at all, and we don't have any loadable modules (/lib/modules is empty).

But we did build with IKCONFIG (8e457d4), which provides:

/proc/config.gz
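(Where IKCONFIG is enabled, the built-in config can be read directly; a sketch, e.g. from inside the minikube VM:)

zcat /proc/config.gz | head -n 5
zcat /proc/config.gz | grep -m1 CONFIG_IKCONFIG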

@medyagh (Member)

medyagh commented Oct 28, 2020

@afbjorklund I think we could add these folders to the kic container if it is on Linux (since on macOS and Windows we only deal with docker-machine's VM).
This is the part of the code where we mount these folders:

https://github.com/medyagh/minikube/blob/2c82918e2347188e21c4e44c8056fc80408bce10/pkg/drivers/kic/oci/oci.go#L138

We could add a case so that only on Linux we mount more folders.
Could we come up with a smarter way of detecting which folders we need to mount based on the Linux distribution?
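For illustration, a sketch of the manual equivalent of the proposed extra binds (a plain ubuntu image stands in for the kicbase image here; the real change would live in oci.go above):

KVER="$(uname -r)"
# hypothetical extra read-only binds, scoped to the running kernel only:
docker run --rm \
  -v "/lib/modules/${KVER}:/lib/modules/${KVER}:ro" \
  -v "/boot/config-${KVER}:/boot/config-${KVER}:ro" \
  ubuntu:20.04 ls "/lib/modules/${KVER}" "/boot/config-${KVER}"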

@medyagh medyagh added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Oct 28, 2020
@medyagh medyagh changed the title kubeadm: FATAL: Module configs not found in directory /lib/modules/5.3.0-53-generic\n", err: exit status 1 Kic drivers: mount more Linux folders into the container for non-standard /lib/modules/ Oct 28, 2020
@medyagh medyagh added co/docker-driver Issues related to kubernetes in container kind/bug Categorizes issue or PR as related to a bug. and removed kind/support Categorizes issue or PR as a support question. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. triage/needs-information Indicates an issue needs more information in order to work on it. labels Oct 28, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 26, 2021
@medyagh (Member)

medyagh commented Feb 18, 2021

Is there a specific OS that has this problem?

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 20, 2021
@spowelljr spowelljr added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Apr 7, 2021
@medyagh medyagh added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Apr 28, 2021
@medyagh (Member)

medyagh commented Apr 28, 2021

@okhwan do you mind sharing what OS and what Linux version you were using, so maybe we could add an integration test for this?

@medyagh (Member)

medyagh commented Apr 28, 2021

@ilya-zuyev could the failure to load the "configs" module be related to our containerd?

@OlgaMaciaszek

OlgaMaciaszek commented Dec 12, 2022

@medyagh I'm facing an issue similar to the one described here on Fedora 37 Workstation Edition. Seems like a regression; this workaround has worked for me.
