This repository has been archived by the owner on Jan 24, 2023. It is now read-only.

Cluster broken after upgrade to K8s 1.8.9 #112

Open
tomkosse opened this issue Mar 20, 2018 · 1 comment

Comments

@tomkosse

Is this a request for help?:
Yes!


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

This is a bug report
Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm)
Kubernetes (originally 1.7.x, but upgraded to 1.8.4 a while ago. Today I upgraded to 1.8.9)

What happened:
All three of my masters stopped working. etcd is unable to start and form a cluster. etcd was previously running 2.3, but now tries to resume as version 3.2.16. This fails because the existing snapshots are not compatible.

What you expected to happen:
A working cluster

How to reproduce it (as minimally and precisely as possible):
I'm not exactly sure what causes this, but I think I skipped a step in upgrading etcd from 2.3 to 3.2. This direct upgrade path is not straightforward, and it is not supported by etcd.

Anything else we need to know:
I'm on Azure Germany.

I've tried installing etcd 3.0.17 first and upgrading my way up from there. This has not succeeded, because the masters don't seem to be able to "talk" to each other on this version.
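To see whether the members can actually reach each other at each intermediate step, one option is to query each master's reported server version and leader with the etcd v3 Go client. The sketch below is only an illustration: the endpoint addresses are placeholders, and TLS credentials (which acs-engine clusters normally require) are omitted.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	// Placeholder endpoints for the three master nodes.
	endpoints := []string{
		"https://10.240.255.5:2379",
		"https://10.240.255.6:2379",
		"https://10.240.255.7:2379",
	}

	// TLS configuration omitted for brevity; real clusters will need it.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   endpoints,
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	for _, ep := range endpoints {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		status, err := cli.Status(ctx, ep)
		cancel()
		if err != nil {
			fmt.Printf("%s: unreachable: %v\n", ep, err)
			continue
		}
		// status.Version is the etcd server version on that endpoint;
		// status.Leader is the member ID it currently considers the leader.
		fmt.Printf("%s: version=%s leader=%x\n", ep, status.Version, status.Leader)
	}
}
```

If any endpoint is unreachable or reports no leader after an upgrade step, the cluster has not converged on that version and moving on to the next minor version is likely to make things worse.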

@tomkosse
Author

This should have been on the ACS-Engine repository. My apologies.
