
Issue #398: decreasing replicas will make zookeeper unrecoverable when zookeeper is not running #406

Open
wants to merge 12 commits into base: master

Conversation

stop-coding

Change log description
Fixes the bug where decreasing replicas makes zookeeper unrecoverable when zookeeper is not running.

Purpose of the change
Fixes #398

What the code does
Adds protection around updating the StatefulSet when zookeeper is not running.
If zookeeper is not running, updating the replicas is prohibited until zookeeper resumes.
When the user decreases the replicas value, the departing node is first removed with reconfig; see the sketch below.
The reconfig-based node removal is still performed in preStop before the pod exits.
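
A minimal sketch of this flow, assuming a hypothetical `scaleDownGuard` helper and `ZKConn` interface (the names and the id numbering are illustrative only, not the PR's actual code):

```go
package sketch

import (
	"fmt"
	"strconv"
)

// ZKConn is a stand-in for the operator's ZooKeeper client connection
// (assumed interface; the real client lives in pkg/zk/zookeeper_client.go).
type ZKConn interface {
	ReconfigRemove(ids []string) error
	Close()
}

// scaleDownGuard sketches the flow described above: when the desired replica
// count is lower than the current one, the departing servers are removed via
// reconfig first, and the StatefulSet replica count is only lowered once that
// succeeds. If ZooKeeper is unreachable, the error aborts the update so the
// reconcile loop retries later with the StatefulSet left untouched.
func scaleDownGuard(connect func() (ZKConn, error), current, desired int32, updateReplicas func(int32) error) error {
	if desired < current {
		conn, err := connect()
		if err != nil {
			return fmt.Errorf("zookeeper not running, postponing scale down: %w", err)
		}
		defer conn.Close()

		// Remove the highest-ordinal servers, e.g. myid desired+1 .. current.
		var removes []string
		for id := desired + 1; id <= current; id++ {
			removes = append(removes, strconv.Itoa(int(id)))
		}
		if err := conn.ReconfigRemove(removes); err != nil {
			return fmt.Errorf("reconfig remove failed, postponing scale down: %w", err)
		}
	}
	return updateReplicas(desired)
}
```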

How to verify it
1. Create a cluster of size 3 (kubectl create -f zk.yaml).
2. Wait until all pods are running, named zk-0, zk-1, and zk-2.
3. Delete the zk-1 and zk-2 pods; this leaves the zookeeper cluster unable to provide services.
4. Run kubectl edit zk and change replicas to 1 immediately.
5. Wait a while; replicas will decrease to 1.
6. Now check: is zk-0 still healthy?

hongchunhua added 5 commits October 21, 2021 10:24
…make cluster of zookeeper Unrecoverable

Signed-off-by: hongchunhua <hongchunhua@ruijie.com>
…recoverable when zookeeper not running.

Signed-off-by: hongchunhua <hongchunhua@ruijie.com>
Signed-off-by: hongchunhua <hongchunhua@ruijie.com>
Signed-off-by: hongchunhua <hongchunhua@ruijie.com>
Signed-off-by: hongchunhua <hongchunhua@ruijie.com>
@codecov

codecov bot commented Oct 21, 2021

Codecov Report

Merging #406 (1a6ae84) into master (bfc3277) will decrease coverage by 0.07%.
The diff coverage is 85.18%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #406      +/-   ##
==========================================
- Coverage   84.11%   84.04%   -0.08%     
==========================================
  Files          12       12              
  Lines        1643     1667      +24     
==========================================
+ Hits         1382     1401      +19     
- Misses        177      185       +8     
+ Partials       84       81       -3     
| Impacted Files | Coverage Δ |
| --- | --- |
| pkg/zk/zookeeper_client.go | 82.50% <77.77%> (-1.38%) ⬇️ |
| ...er/zookeepercluster/zookeepercluster_controller.go | 63.50% <88.88%> (+0.47%) ⬆️ |

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update bfc3277...1a6ae84. Read the comment docs.

hongchunhua and others added 2 commits November 15, 2021 14:50
Signed-off-by: hongchunhua <hongchunhua@ruijie.com>
}
// A node that has been removed with reconfig can still serve all of its online clients.
// So we can remove it first; this avoids the error that the client can't connect to the server in preStop.
r.log.Info("Do reconfig to remove node.", "Remove ids", strings.Join(removes, ","))
Contributor
@anishakj anishakj Nov 25, 2021

The check above, which returns an error at line no: 303, ensures zookeeper is running.
Later, in the teardown script, the remove operation is performed (https://github.com/pravega/zookeeper-operator/blob/master/docker/bin/zookeeperTeardown.sh#L45). Do you still think removing the node is required here?

Author

Removing the node in the teardown script doesn't matter much; doing it or not will not affect the cluster.
But if the reconfig is done only in the teardown script, there is no chance to retry it after the pod has exited while zookeeper is unserviceable.
So I think it may be better to do the reconfig when checking for the cluster scale down.

Contributor

@stop-coding Did you see the "Do reconfig to remove node." message in the logs in your use case?

I think @anishakj is suggesting that catching the UpdateNode error and returning on line 303 should be enough to fix the issue, hence lines 305 to 324 would never get executed.

Author

@jkhalack Sorry for my late reply.
Only catching the UpdateNode error is not enough to ensure the reconfigure succeeds in preStop, for example when the pod is exiting and the cluster breaks again. We would ideally like updating the node size and doing the reconfigure to be atomic, but that is not realistic.

As we know, updating Spec.Replicas tells k8s to create pods or let them exit. If it fails on scale down, we can stop updating Spec.Replicas until the cluster recovers, which ensures the reconfigure has been done before the pod exits.

So I think that doing the reconfigure when checking for the cluster scale down is better.
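
For context, the retry behavior being relied on here is the generic controller-runtime pattern sketched below; `reconcileClusterSize` is a hypothetical helper standing in for the flow discussed above, not this operator's exact code:

```go
package sketch

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
)

// reconciler is a stand-in for the operator's ZookeeperCluster reconciler.
type reconciler struct {
	// reconcileClusterSize is a hypothetical helper that performs the
	// reconfig-then-resize flow described in this PR.
	reconcileClusterSize func(ctx context.Context, req ctrl.Request) error
}

// Reconcile illustrates why aborting before the Spec.Replicas update is safe:
// returning an error requeues the request with backoff, so a scale down that
// fails because ZooKeeper is down is simply retried until the ensemble
// recovers, and the StatefulSet is never shrunk before the reconfig succeeds.
func (r *reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	if err := r.reconcileClusterSize(ctx, req); err != nil {
		return ctrl.Result{}, err // requeued; StatefulSet left untouched
	}
	return ctrl.Result{}, nil
}
```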

@ranyhb

ranyhb commented Feb 28, 2023

I would like to know whether this pull request will be included in the next release.

Labels
None yet
Projects
None yet
Development

Successfully merging this pull request may close these issues.

If zookeeper is not running, then decreasing replicas will make cluster of zookeeper Unrecoverable
4 participants