
Unable to Start Minikube on M2 Macbook #16052

Closed
cmennens opened this issue Mar 15, 2023 · 9 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@cmennens

cmennens commented Mar 15, 2023

This is the error I'm currently getting after several cleanup attempts and after verifying I have the macOS ARM64 build of Minikube:

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
stderr:
W0314 17:42:41.127761 20171 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
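(Spelled out, the retry suggested by that last line would be roughly the following; a minimal sketch, with the cgroup-driver value taken straight from the Suggestion line:)

minikube delete --all
minikube start --extra-config=kubelet.cgroup-driver=systemd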

I've seen this come up as an issue before, but the suggested solutions didn't fix it for me. I'm on macOS Ventura (13.2.1).

I've installed the ARM64 (Darwin) build of Minikube for this MacBook M2; the same setup worked fine previously on my other M1 (ARM64) MBP. I'm running the latest version of Docker and can run Docker containers on this machine without issues, but Minikube fails consistently.
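(One quick way to double-check that the installed binary really is the arm64 build; a minimal sketch, assuming minikube is on PATH:)

file $(which minikube)
# should report something like: Mach-O 64-bit executable arm64
minikube version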

Minikube version:

❯ minikube version
minikube version: v1.29.0
commit: ddac20b4b34a9c8c857fc602203b6ba2679794d3

Deleting & purging my profile from previous attempts, and deleting the Docker image to be sure:

`❯ minikube delete --all --purge
🔥 Deleting "minikube" in docker ...
🔥 Removing /Users/carlos.mennens/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
🔥 Successfully deleted all profiles
💀 Successfully purged minikube directory located at - [/Users/carlos.mennens/.minikube]
📌 Kicbase images have not been deleted. To delete images run:
▪ docker rmi gcr.io/k8s-minikube/kicbase:v0.0.37

❯ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

❯ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/k8s-minikube/kicbase v0.0.37 55c37b5c9b24 6 weeks ago 1.06GB

❯ docker image rm 55c37b5c9b24
Untagged: gcr.io/k8s-minikube/kicbase:v0.0.37
Untagged: gcr.io/k8s-minikube/kicbase@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
Deleted: sha256:55c37b5c9b24a928f64d481d3889d5b2fdae3981810be425c2447f227e0e593b`

Checking my current setup:
`❯ minikube profile list

🤹 Exiting due to MK_USAGE_NO_PROFILE: No minikube profile was found.
💡 Suggestion:

You can create one using 'minikube start'.`

Re-attempting to start Minikube on my M2 MBP:

`❯ which minikube
/usr/local/bin/minikube

❯ minikube start
😄 minikube v1.29.0 on Darwin 13.2.1 (arm64)
✨ Automatically selected the docker driver
📌 Using Docker Desktop driver with root privileges
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.26.1 preload ...
> preloaded-images-k8s-v18-v1...: 330.51 MiB / 330.51 MiB 100.00% 38.86 M
> gcr.io/k8s-minikube/kicbase...: 368.75 MiB / 368.75 MiB 100.00% 18.27 M
🔥 Creating docker container (CPUs=2, Memory=4000MB) ...
❗ This container is having trouble accessing https://registry.k8s.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...`

^^^
I eventually get the same error I posted above. Nothing appears to be working for me, and I'm not sure why.

`❯ minikube status
E0315 12:01:47.651751 98398 status.go:415] kubeconfig endpoint: extract IP: "minikube" does not appear in /Users/carlos.mennens/.kube/config
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Misconfigured

WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run minikube update-context`
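(Following that hint, the fix plus a re-check would look roughly like this; a minimal sketch:)

minikube update-context
minikube status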

@danielserrao

Got the same issue and was able to fix it with the workaround mentioned at #16073.

But I still think this should be considered a bug, because it should work by default with minikube start, as it did until recently, to make everyone's lives easier.

@cmennens
Author

That fix appeared to work for me as well:

minikube delete --all --purge

minikube start --driver=docker --force --extra-config=kubelet.cgroup-driver=systemd --cni calico --container-runtime=containerd --registry-mirror=https://registry.docker-cn.com
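(If that start succeeds, a quick sanity check that the cluster actually came up; a minimal sketch, assuming kubectl is installed and pointed at the minikube context:)

minikube status
kubectl get nodes
kubectl get pods -n kube-system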

@cmennens
Author

Is there a logical explanation for my initial error above? Why does Minikube not work as expected with a plain minikube start?

The above command I posted 2 days ago fails for me now:

minikube start --driver=docker --force --extra-config=kubelet.cgroup-driver=systemd --cni calico --container-runtime=containerd --registry-mirror=https://registry.docker-cn.com
😄 minikube v1.29.0 on Darwin 13.2.1 (arm64)
Configuring Calico (Container Networking Interface) ...
💢 initialization failed, will try again: apply cni: cni apply: cmd: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml output:
** stderr **
error when retrieving current configuration of:
Resource: "policy/v1, Resource=poddisruptionbudgets", GroupVersionKind: "policy/v1, Kind=PodDisruptionBudget"
Name: "calico-kube-controllers", Namespace: "kube-system"
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/apis/policy/v1/namespaces/kube-system/poddisruptionbudgets/calico-kube-controllers": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=53, ErrCode=NO_ERROR, debug=""
error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "calico-kube-controllers", Namespace: "kube-system"
from server for: "/var/tmp/minikube/cni.yaml": Get "https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/calico-kube-controllers": dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "calico-node", Namespace: "kube-system"
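(The repeated "dial tcp 127.0.0.1:8443: connect: connection refused" errors suggest the apiserver never came up before the Calico apply. A minimal sketch of how to gather logs for diagnosis, assuming the docker driver and the default profile name "minikube":)

minikube logs --file=minikube.log
docker logs minikube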

Can someone explain what the issue is here? It worked fine previously on my older M1 (ARM64) MBP. Now it just constantly fails. Does anyone know why I can no longer run Minikube?

@Sahilgr8

(quoting @cmennens's comment above, which reports the same minikube start failure)

I, too, am facing the same problem while trying to start minikube on my MacBook Pro with an M1 chip.

@SharangC96

Same issue; the provided command doesn't work for me either.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Jan 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on Feb 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) on Mar 20, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

(quoting the triage bot's /close not-planned comment above)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
