etcdserver: make livez return ok when defrag is active. #16858
base: main
LGTM, thanks @siyuanfoundation!
```go
	if srv.lg == nil {
		srv.lg = zap.NewNop()
	}
	s.Backend().SubscribeDefragNotifier(healthNotifier)
	return &authMaintenanceServer{srv, &AuthAdmin{s}}
}

func (ms *maintenanceServer) Defragment(ctx context.Context, sr *pb.DefragmentRequest) (*pb.DefragmentResponse, error) {
	ms.lg.Info("starting defragment")
```
Why move notification from here?
The original notifier is called in the gRPC server.
- It is strange to have an HTTP endpoint depend on gRPC server calls. The cluster could be started with grpcEnabled=false.
- There could be two gRPC servers, one insecure and one secure. Which one should be used to determine the HTTP endpoint's state?
Based on these reasons, I think it is better to move the notifiers to the backend.
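For illustration, here is a minimal sketch of the backend-owned notifier design described above. All names here (`DefragNotifier`, `Backend`, `healthNotifier`, `livez`, `readyz`) are hypothetical simplifications for this sketch, not etcd's actual API:

```go
package main

import (
	"fmt"
	"sync"
)

// DefragNotifier is a hypothetical interface for subscribers that want
// to know when a defrag starts and finishes.
type DefragNotifier interface {
	DefragStarted()
	DefragFinished()
}

// Backend owns the notifier list, so the HTTP and gRPC layers can both
// subscribe without depending on each other.
type Backend struct {
	mu        sync.Mutex
	notifiers []DefragNotifier
}

func (b *Backend) SubscribeDefragNotifier(n DefragNotifier) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.notifiers = append(b.notifiers, n)
}

// Defrag notifies subscribers around the (elided) defrag work.
func (b *Backend) Defrag() {
	b.mu.Lock()
	subs := append([]DefragNotifier(nil), b.notifiers...)
	b.mu.Unlock()
	for _, n := range subs {
		n.DefragStarted()
	}
	// ... actual defragmentation would happen here ...
	for _, n := range subs {
		n.DefragFinished()
	}
}

// healthNotifier is a hypothetical subscriber backing /livez and /readyz.
type healthNotifier struct {
	mu         sync.Mutex
	defragging bool
}

func (h *healthNotifier) DefragStarted()  { h.mu.Lock(); h.defragging = true; h.mu.Unlock() }
func (h *healthNotifier) DefragFinished() { h.mu.Lock(); h.defragging = false; h.mu.Unlock() }

// livez: liveness does not depend on defrag, so it always reports ok.
func (h *healthNotifier) livez() string { return "ok" }

// readyz: the member cannot serve requests while defragging.
func (h *healthNotifier) readyz() string {
	h.mu.Lock()
	defer h.mu.Unlock()
	if h.defragging {
		return "defrag is active"
	}
	return "ok"
}

func main() {
	b := &Backend{}
	h := &healthNotifier{}
	b.SubscribeDefragNotifier(h)
	h.DefragStarted()
	fmt.Println("livez:", h.livez(), "readyz:", h.readyz())
	h.DefragFinished()
	fmt.Println("livez:", h.livez(), "readyz:", h.readyz())
}
```

The point of the sketch is that livez stays "ok" throughout a defrag, while readyz reflects the in-progress operation.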
Ok, but this is an unrelated change. It still doesn't solve the issue that there might be multiple calls to Defrag that block somewhere in the internals, which leaves us open to concurrency issues. Let's just use the notifier as it is and file a follow-up issue to fix the notifier.
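The concurrency concern above can be made concrete: with a boolean "defrag active" flag, two overlapping Defragment calls would report healthy as soon as the first one finishes. One possible shape of a fix is a counter of in-flight operations; the sketch below is illustrative (the `defragGauge` name is invented, not etcd's actual code):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// defragGauge counts in-flight defrag operations instead of using a
// boolean, so overlapping Defragment calls don't clear the state early.
// Hypothetical sketch, not etcd's implementation.
type defragGauge struct {
	active int64
}

func (g *defragGauge) DefragStarted()  { atomic.AddInt64(&g.active, 1) }
func (g *defragGauge) DefragFinished() { atomic.AddInt64(&g.active, -1) }

// DefragActive reports whether any defrag is still in flight.
func (g *defragGauge) DefragActive() bool {
	return atomic.LoadInt64(&g.active) > 0
}

func main() {
	g := &defragGauge{}
	g.DefragStarted() // first Defragment call
	g.DefragStarted() // overlapping second call
	g.DefragFinished()
	fmt.Println(g.DefragActive()) // still true: one defrag remains in flight
}
```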
tbh, it is very gnarly to pass notifiers between http and grpc, something like https://github.com/etcd-io/etcd/compare/main...siyuanfoundation:etcd:defrag2?expand=1.
How about I start a PR #16959 to move the notifiers to the backend first?
Please rebase this PR, I will take a look later.
Signed-off-by: Siyuan Zhang <sizhang@google.com>
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@siyuanfoundation: The following tests failed.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Please read https://github.com/etcd-io/etcd/blob/main/CONTRIBUTING.md#contribution-flow.
Part of the work for #16007.
cc @chaochn47 @serathius