
Upgrade Kubernetes from v1.26.12 to v1.27.0 with add-ons in a Single-Node Cluster Using Kubespray v2.25 #11498

Open
VrindaMarwah opened this issue Sep 2, 2024 · 5 comments
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug.

Comments

@VrindaMarwah

What happened?

I have a single-node cluster running Kubernetes v1.26.12, installed with Kubespray v2.23. I am trying to upgrade to Kubernetes v1.27.0 using the upgrade-cluster.yml playbook from Kubespray v2.25, with the following variables passed in a file 'k8s_var.yml' (via the -e flag):

kube_version: "v1.27.0"
deploy_container_engine: false
skip_http_proxy_on_os_packages: true
dashboard_enabled: false
helm_enabled: true
kube_network_plugin: "calico"
kube_service_addresses: "10.233.0.0/18"
kube_pods_subnet: "10.233.64.0/18"
metallb_enabled: true
metallb_speaker_enabled: true

metallb_namespace: "metallb-system"
kube_proxy_strict_arp: true
kube_proxy_mode: 'iptables'
metallb_config:
  address_pools:
    primary:
      ip_range:
        - "10.11.0.100-10.11.0.150"
      auto_assign: true
  layer2:
    - primary

However, the playbook fails with the error below:

(screenshot: failing task output; error text not captured)

The metallb-speaker pod is running, but the metallb-controller pod is stuck in Pending.
(screenshot: pod status output not captured in text)

Additionally, the metallb-controller pod's events report:
0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling
(I assume that because the node is cordoned, it is marked unschedulable for new pods, hence the error above.)
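The cordon can be confirmed directly. The commands below are a diagnostic sketch; the label selector for the controller Pod is an assumption based on common MetalLB chart labels and may differ in your deployment:

```shell
# Check whether the single node is cordoned; a cordoned node reports
# STATUS "Ready,SchedulingDisabled".
kubectl get nodes

# Inspect the Pending controller Pod's events for the scheduling failure.
# The selector below is an assumption; match it to your actual Pod labels,
# or use the Pod name from "kubectl get pods -n metallb-system" instead.
kubectl -n metallb-system describe pod -l app.kubernetes.io/component=controller
```

The Events section of the describe output should show the same "node(s) were unschedulable" message quoted above.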

What did you expect to happen?

The upgrade-cluster.yml playbook should complete successfully when MetalLB is enabled on the cluster.

How can we reproduce it (as minimally and precisely as possible)?

  1. Deploy Kubernetes v1.26.12 (using Kubespray v2.23) on a single-node cluster.
  2. Clone the Kubespray v2.25 source code on the node.
  3. Create a vars file 'k8s_var.yml' with the following variables:

kube_version: "v1.27.0"
deploy_container_engine: false
skip_http_proxy_on_os_packages: true
dashboard_enabled: false
helm_enabled: true
kube_network_plugin: "calico"
kube_service_addresses: "10.233.0.0/18"
kube_pods_subnet: "10.233.64.0/18"
metallb_enabled: true
metallb_speaker_enabled: true

metallb_namespace: "metallb-system"
kube_proxy_strict_arp: true
kube_proxy_mode: 'iptables'
metallb_config:
  address_pools:
    primary:
      ip_range:
        - "10.11.0.100-10.11.0.150"
      auto_assign: true
  layer2:
    - primary

  4. Create an inventory file 'k8s_inv.ini' as shown below:

[kube_control_plane]
localhost ansible_connection=local

[kube_node]
localhost ansible_connection=local

[etcd]
localhost ansible_connection=local

[k8s_cluster]
localhost ansible_connection=local

  5. Run the upgrade-cluster.yml playbook using the command below:
    ansible-playbook upgrade-cluster.yml -b -i /k8s_inv.ini -e @/k8s_var.yml
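Collected into one sketch, assuming Kubespray v2.25 is checked out in the current directory and the files above exist at the paths shown:

```shell
# Reproduction sketch: upgrade a single-node v1.26.12 cluster to v1.27.0.
# Paths are the reporter's; adjust to your environment.
kubectl get nodes -o wide           # confirm the node starts at v1.26.12
ansible-playbook upgrade-cluster.yml -b -i /k8s_inv.ini -e @/k8s_var.yml
kubectl get pods -n metallb-system  # observe the controller stuck in Pending
```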

OS

Ubuntu 22.04.3 LTS
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

Version of Ansible

ansible [core 2.16.10]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.11/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 11.4.0] (/usr/bin/python3.11)
jinja version = 3.1.4
libyaml = False

Version of Python

3.11.9

Version of Kubespray (commit)

0d09b19

Network plugin used

calico

Full inventory with variables

Trying to upgrade the Kubernetes version running on the control plane using the inventory below:

[kube_control_plane]
localhost ansible_connection=local

[kube_node]
localhost ansible_connection=local

[etcd]
localhost ansible_connection=local

[k8s_cluster]
localhost ansible_connection=local

Command used to invoke ansible

ansible-playbook upgrade-cluster.yml -b -i /root/inv_file/k8s_inv.ini -e @/root/k8s_var.yml

Output of ansible run

(screenshot: ansible run output not captured in text)

Anything else we need to know

Can you please provide a workaround for this issue?

@VrindaMarwah VrindaMarwah added the kind/bug Categorizes issue or PR as related to a bug. label Sep 2, 2024
@tico88612
Member

tico88612 commented Sep 3, 2024

The node is cordoned during the upgrade, and a newly deployed Pod may remain Pending while the cordon is in effect if there is only one node.

Can you share the output of kubectl get nodes?

@tico88612
Member

/retitle Upgrade Kubernetes from v1.26.12 to v1.27.0 in a Single-Node Cluster Using Kubespray v2.25

@k8s-ci-robot k8s-ci-robot changed the title upgrade_cluster.yml playbook fails when metallb_enabled and metallb_speaker_enabled is set to 'true' in k8s_cluster/addons.yml Upgrade Kubernetes from v1.26.12 to v1.27.0 in a Single-Node Cluster Using Kubespray v2.25 Sep 3, 2024
@VrindaMarwah
Author

VrindaMarwah commented Sep 3, 2024

@tico88612 agreed that the node will be cordoned while upgrading and new pods cannot be scheduled on it. But how do we proceed in the case of a single-node cluster?

Below is the kubectl get nodes output:

root@mycp:~/kubespray# kubectl get nodes
NAME        STATUS                     ROLES           AGE    VERSION
localhost   Ready,SchedulingDisabled   control-plane   136m   v1.27.0

@tico88612
Member

This seems to be a known issue; someone needs to help with the single-node upgrade process.
However, you can start by uncordoning the node to see if it works, since the MetalLB version has not changed.
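A sketch of that workaround; the node name 'localhost' comes from the reporter's inventory, and whether the playbook must be re-run afterwards is untested:

```shell
# Manually lift the cordon left by the interrupted upgrade so the
# Pending metallb-controller Pod can be scheduled.
kubectl uncordon localhost

# Verify the controller leaves Pending state.
kubectl -n metallb-system get pods
```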

/retitle Upgrade Kubernetes from v1.26.12 to v1.27.0 with add-ons in a Single-Node Cluster Using Kubespray v2.25
/help

@k8s-ci-robot
Contributor

@tico88612:
This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

This seems to be a known issue; someone needs to help with the single-node upgrade process.
However, you can start by uncordoning the node to see if it works, since the MetalLB version has not changed.

/retitle Upgrade Kubernetes from v1.26.12 to v1.27.0 with add-ons in a Single-Node Cluster Using Kubespray v2.25
/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot changed the title Upgrade Kubernetes from v1.26.12 to v1.27.0 in a Single-Node Cluster Using Kubespray v2.25 Upgrade Kubernetes from v1.26.12 to v1.27.0 with add-ons in a Single-Node Cluster Using Kubespray v2.25 Sep 3, 2024
@k8s-ci-robot k8s-ci-robot added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Sep 3, 2024