Possible improvement for orderly shutdown #4248
Comments
See also #4043, please.
Yes, but with a rolling restart you at least know why you are going through the pain, whereas with a cluster restart (say, for a full upgrade) the pain is unjustified :-) It would be nice if restarting a node or an entire cluster on purpose were handled slightly differently from what ES does on a node crash, and were optimized for quick recovery and ease of use. We develop but do not manage production ES, and I feel the "average" sysadmin is honestly not ready for the touchiness and complexity of current operational support; they would need to spend days on the mailing lists and bang their heads against problems before they could support ES with confidence.
Yeah, I'm the primary maintainer of our cluster, but I wish it were easier for other folks in the organization to do this stuff. I kind of like this idea because it is somewhat analogous to Apache's graceful shutdown. I suppose it could also be used for removing nodes cleanly: just don't restart the node. It isn't any better than using the cluster allocation API to push shards off the node, but it would be more like other stuff, especially if it could be achieved with an API call.
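For reference, pushing shards off a node with the allocation API looks roughly like this; a minimal sketch assuming the transient cluster settings endpoint, with 10.0.0.5 standing in for the node's IP:

```bash
# Exclude the node from allocation; the cluster moves its shards elsewhere,
# after which the node can be stopped and left stopped.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "10.0.0.5"
  }
}'
```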
Hi @nik9000 What's wrong with:
1. Disable shard allocation.
2. Restart the node.
3. Re-enable shard allocation.
4. Wait for the cluster to go green.
I realise you're trying to ensure that the cluster will be green even while the node is down, but is this really required? It's certainly a lot faster to go through the above process than to copy your shards around the cluster for every node.
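For reference, that process looks roughly like the following; a sketch only, and note that the setting name changed across versions (cluster.routing.allocation.disable_allocation in 0.90, cluster.routing.allocation.enable from 1.x on):

```bash
# 1. Stop shard allocation so replicas are not rebuilt while the node is down.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.disable_allocation": true }
}'

# 2. Restart the node (upgrade, config change, etc.).

# 3. Re-enable allocation once the node is back.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.disable_allocation": false }
}'

# 4. Wait for the cluster to go green before moving on to the next node.
curl 'localhost:9200/_cluster/health?wait_for_status=green&timeout=10m'
```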
I want to be able to handle losing another node unexpectedly while I'm in the process of doing rolling restarts.
I do admit to having another motive for keeping the cluster green as much as possible: I don't want to alarm my entire ops team just because a few indices are taking a while to copy over. Most folks monitor the health status, and Nagios will start ringing bells if it stays yellow for long.
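A monitoring check along those lines typically just polls _cluster/health; a minimal sketch, not an actual Nagios plugin:

```bash
# Report a warning whenever the cluster status is anything other than green.
STATUS=$(curl -s 'localhost:9200/_cluster/health' | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
if [ "$STATUS" != "green" ]; then
  echo "WARNING: cluster status is $STATUS"
  exit 1
fi
echo "OK: cluster is green"
```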
Hi @nik9000 We discussed this in FixItFriday today and there seem to be two issues:
1. Building redundant replicas before shutting a node down, so the cluster stays green while the node is away.
2. Reusing the existing replicas on the node when it comes back, so recovery is fast.
The ability to reuse the replicas on the restarted node requires either that all segments on that node be the same as those on the primary, or sequence IDs (#6069). The addition of sequence IDs (which we are actively working on) will speed up recovery dramatically. This pretty much obviates the need for number 1 (the redundant replicas), as recovery will be much faster. I think, with that plan, we can close this issue in favour of sequence IDs.
Last week I spent a few hours manually restarting nodes to upgrade to 0.90.7. Since we want to keep our configured level of redundancy at all non-emergency times, I did it by using the shard allocation API to disallow shards on the node being shut down, waiting until there were no shards left on the node, restarting it, using the shard allocation API to allow shards back on the node, and then waiting for the cluster to rebalance shards.
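That bash/curl process looks roughly like this for a single node; a sketch that assumes allocation filtering by IP and the _cat/shards endpoint (1.0+), with NODE_IP and NODE_NAME as placeholders:

```bash
NODE_IP=10.0.0.5
NODE_NAME=node-5

# Disallow shards on the node that is about to be shut down.
curl -XPUT 'localhost:9200/_cluster/settings' -d "{
  \"transient\": { \"cluster.routing.allocation.exclude._ip\": \"$NODE_IP\" }
}"

# Wait until no shards remain on the node.
while curl -s 'localhost:9200/_cat/shards' | grep -q "$NODE_NAME"; do
  sleep 10
done

# ... restart / upgrade the node here ...

# Allow shards back on the node and wait for the cluster to rebalance.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.exclude._ip": "" }
}'
curl 'localhost:9200/_cluster/health?wait_for_status=green&timeout=10m'
```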
I wonder if this process could be automated beyond the bash/curl/awk mess that I've been using, and whether that might let the rebalance operation proceed more quickly. Would it be possible to:
1. Tell the cluster that a node is going to be restarted.
2. Have it build extra replicas of that node's shards elsewhere so the cluster stays green while the node is down.
3. Restart the node.
4. Reuse the copies already on the node when it comes back, instead of rebuilding them from scratch.
I think this strikes a nice balance between the exclude._ip way of shutting down and the disable_allocation way of shutting down.
At some point it'd be nice if there were some kind of log that could be replayed against out-of-date replicas so they could recover even if changes had been made. I wonder if something could be synthesized from the _timestamp field...