
kubeadm tries to pull k8s.gcr.io/pause-amd64:3.1 instead of k8s.gcr.io/pause:3.1 #962

Closed
royxue opened this issue Jul 2, 2018 · 7 comments · Fixed by kubernetes/kubernetes#65920
Labels: area/UX, kind/bug, priority/important-soon


royxue commented Jul 2, 2018

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version:

```
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:14:41Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
```

Environment:

What happened?

The `kubeadm init --kubernetes-version=v1.11.0 --pod-network-cidr=10.244.0.0/16` command hangs on

```
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
```

I have pre-downloaded all the Docker images (versions matched).

When I run `docker ps -a`, there are no Kubernetes-related containers at all.
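
For reference, this kubeadm build can report and pre-pull the images it expects (a sketch; assuming the v1.11 `kubeadm config images` subcommands and their `--kubernetes-version` flag behave as documented):

```bash
# Show the image names/tags this kubeadm build will ask the kubelet for
kubeadm config images list --kubernetes-version v1.11.0

# Optionally pre-pull them so `kubeadm init` does not have to
kubeadm config images pull --kubernetes-version v1.11.0
```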

What you expected to happen?

The init to finish.

How to reproduce it (as minimally and precisely as possible)?

I followed the instructions in the Kubernetes tutorial, just `kubeadm init`.

Anything else we need to know?


royxue commented Jul 2, 2018

Problem solved.
The pause image I downloaded under the name pause-amd64:3.1 does not work; it should be pause:3.1.

I suggest adding this information to the kubeadm init error log.
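
Until that is fixed, retagging the already-downloaded image locally is a possible workaround (a sketch, assuming the amd64 image is already present on the node):

```bash
# Give the arch-suffixed image the name the kubelet actually asks for
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
```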

@royxue royxue closed this as completed Jul 2, 2018
@neolit123 neolit123 reopened this Jul 6, 2018

luxas commented Jul 6, 2018

This should be fixed in kubeadm; it should use k8s.gcr.io/pause:3.1 without the arch suffix.
/assign @chuckha @rosti
Whichever of you can fix this. I'd consider this for a cherrypick #958

k8s-ci-robot commented:

@luxas: GitHub didn't allow me to assign the following users: rosti.

Note that only kubernetes members and repo collaborators can be assigned.

In response to this:

> This should be fixed in kubeadm, should be using k8s.gcr.io/pause:3.1 without the arch suffix.
> /assign @chuckha @rosti
> whoever of you that can fix this. I'd consider this for a cherrypick #958

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

luxas added the priority/important-soon and cherrypick-candidate labels Jul 6, 2018
luxas changed the title from "Kubeadm init: no docker started" to "kubeadm tries to pull k8s.gcr.io/pause-amd64:3.1 instead of k8s.gcr.io/pause:3.1" Jul 6, 2018
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue Jul 7, 2018
Automatic merge from submit-queue (batch tested with PRs 65946, 65904, 65913, 65906, 65920). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

kubeadm: pause image should be arch agnostic, as it is a manifest list

Signed-off-by: Davanum Srinivas <davanum@gmail.com>



**What this PR does / why we need it**:

The `pause` image is backed by a manifest list, so we should not use the arch-specific image when reporting it via, say, `kubeadm config images list`.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes kubernetes/kubeadm#962

**Special notes for your reviewer**:

**Release note**:

```release-note
kubeadm: Fix pause image to not use architecture, as it is a manifest list
```
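
For the curious, the manifest-list nature of the un-suffixed tag can be checked directly (a sketch; `docker manifest` requires the experimental Docker CLI of that era to be enabled):

```bash
# Prints the per-architecture entries behind the single tag
docker manifest inspect k8s.gcr.io/pause:3.1
```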
bladerunner512 commented:

I originally had `k8s.gcr.io/pause:3.1`, then I tried `k8s.gcr.io/pause-amd:3.1`; neither image had any effect on this issue.


royxue commented Jul 31, 2018

@bladerunner512 check your kubelet logs, which tell you what needs to be downloaded.
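
For example, something along these lines shows which images the kubelet is failing to pull (a sketch, assuming the kubelet runs under systemd):

```bash
# Follow the kubelet logs and filter for image pull activity
journalctl -u kubelet -f | grep -iE 'pull|image'
```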

bladerunner512 commented:

This does not appear to be an issue with missing local images. When looking at the running kubelet, I noticed there were no args like there were in previous releases. This changed in v1.11 from hard-coded parameters to a structured config file. When I added back the previous 10-kubeadm.conf from v1.10 to /etc/systemd/system/kubelet.service.d, the installation proceeded correctly.

Previous 10-kubeadm.conf from v1.10:

```
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_CGROUP_ARGS $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
```

config.yaml from v1.11 in /var/lib/kubelet:

```yaml
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
```

kubeadm-flags.env:

```
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni
```
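
To confirm which of the two mechanisms the running kubelet actually picked up, the unit and the live command line can be inspected (a sketch, assuming systemd and the default `kubelet` unit name):

```bash
# Show the kubelet unit together with every drop-in currently in effect
systemctl cat kubelet

# Check whether the live process was started with --config or with the old flag set
ps -ef | grep '[k]ubelet'
```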
