
Possible improvement for orderly shutdown #4248

Closed
nik9000 opened this issue Nov 25, 2013 · 7 comments

nik9000 (Member) commented Nov 25, 2013

Last week I spent a few hours manually restarting nodes to upgrade to 0.90.7. Since we want to keep our configured level of redundancy at all non-emergency times, I did it by using the shard allocation API to disallow shards on the node being shut down, waiting until there were no shards left on the node, restarting it, using the shard allocation API to allow shards back onto the node, and then waiting for the cluster to rebalance its shards.
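
(Roughly, the bash/curl dance I mean looks like the sketch below — the IP is just a placeholder, and it uses the cluster-level allocation filtering and cluster health APIs:)

```bash
# Keep shards off the node we're about to restart (placeholder IP).
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.exclude._ip": "10.0.0.5" }
}'

# Wait for the cluster to move everything off and settle back to green,
# then confirm the node no longer holds any shards before restarting it.
curl 'localhost:9200/_cluster/health?wait_for_status=green&timeout=10m'

# ... restart the node ...

# Clear the exclusion so shards are allowed back onto the node.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.exclude._ip": "" }
}'
```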

I wonder if this process could be automated beyond the bash/curl/awk mess that I've been using and if that might let the re-balance operation proceed more quickly. Would it be possible to:

  1. Ask the cluster to prepare a node for shutdown.
  2. The cluster will not allocate replicas to that node until it has finished shutting down.
  3. The cluster allocates an extra copy of each replica that the node is hosting.
  4. Once this is done the node goes through the normal shutdown process.
  5. When the node comes back up it should rejoin the cluster and announce the replicas that it still holds. At this point I think you can let the standard startup and re-balance logic take over. The replicas that didn't change will stay on the restarted node and be removed from the node to which they were recently replicated. Those that did change will be removed from the restarted node and cluster re-balancing will balance the shards again.

I think this strikes a nice balance between the exclude._ip way of shutting down and the disable_allocation way of shutting down.

At some point it'd be nice if there were some kind of log that could be replayed against out-of-date replicas so they could recover even if changes had been made. I wonder if something could be synthesized from the _timestamp field....

@roytmana

See also #4043 please

nik9000 (Member, Author) commented Nov 27, 2013

@roytmana #4043 looks like it is for full cluster restarts rather than rolling restarts. That's still a problem, but right now I'm more interested in rolling restarts :) That said, if there is something that makes sense in both cases then I'd love to do it.

@roytmana

Yes, but with a rolling restart you at least know why you're going through the pain, whereas with a full cluster restart (say, for an upgrade) the pain feels unjustified :-)

It would be nice if deliberately restarting a node or the entire cluster were handled slightly differently from a node crash, and optimized for quick recovery and ease of use.

We develop against ES but do not manage it in production, and I feel the "average" sysadmin is honestly not ready for the touchiness and complexity of current operational support. They would need to spend days on the mailing lists and bang their heads against problems before they could support ES with confidence.

nik9000 (Member, Author) commented Nov 27, 2013

Yeah, I'm the primary maintainer of our cluster, but I wish it were easier for other folks in the organization to do this kind of work. I rather like this idea because it is somewhat analogous to Apache's graceful shutdown.

I suppose it could also be used for removing nodes cleanly - just don't restart the node. It isn't any better than using the cluster allocation API to push shards off the node, but it would feel more familiar, especially if it could be invoked with an /etc/init.d/elasticsearch graceful.

@clintongormley

Hi @nik9000

What's wrong with:

  • disable allocation
  • shut down a node
  • start up the node
  • reenable allocation

I realise you're trying to ensure that the cluster will be green even while the node is down, but is this really required? It's certainly a lot faster to go through the above process than to copy your shards around the cluster for every node.
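
(For reference, that sequence is roughly the sketch below — the service command is a placeholder, and it uses the 0.90-era disable_allocation setting via the cluster settings API:)

```bash
# Stop the cluster from reallocating shards while the node is down.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.disable_allocation": true }
}'

# Restart the node (placeholder for whatever your init system uses).
sudo service elasticsearch restart

# Once the node has rejoined, re-enable allocation so recovery can finish.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.disable_allocation": false }
}'
```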

nik9000 (Member, Author) commented Nov 27, 2013

I want to be able to handle losing another node unexpectedly while I'm in the process of doing rolling restarts:

  1. If I do lose a node while restarting with allocation off, I have to remember to turn allocation back on in what has suddenly become a very exciting situation. This becomes scary if it isn't me but someone less familiar with Elasticsearch doing the restart, and doubly scary if they are using some kind of maintenance script to automate execution across many nodes.
  2. If the node I lose happens to also contain a replica of one of the indexes on the restarting node, I now have two fewer replicas. This is bad because I have (at least theoretically) sized the number of replicas to handle my search traffic plus one redundant replica; two redundant replicas is more than I can afford. And I believe the window of exposure lasts longer than the restart itself for indexes that must be recovered onto the restarted node.

I do admit to having another motive for keeping the state green as much as possible: I don't want to excite my entire ops team just because a few indexes are taking a while to copy over. Most folks monitor the health status, and Nagios will start ringing bells if it stays yellow for long.
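
(For what it's worth, that monitoring usually boils down to a probe like this against the cluster health API — the 30s timeout is just illustrative:)

```bash
# Nagios-style probe: ask the cluster to reach green within 30s.
# The response reports "status" (green/yellow/red) and whether the wait "timed_out".
curl -s 'localhost:9200/_cluster/health?wait_for_status=green&timeout=30s'
```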

@clintongormley

Hi @nik9000

We discussed this in FixItFriday today and there seem to be two issues:

  1. making redundant copies of the shards on the node being restarted
  2. reusing the replicas on the restarted node

The ability to reuse the replicas on the restarted node requires either that all segments on that node be the same as those on the primary, or sequence IDs (#6069). The addition of sequence IDs (which we are actively working on) will speed up recovery dramatically. This pretty much obviates the need for number 1 - the redundant replicas - as recovery will be much faster.

I think, with that plan, we can close this issue in favour of sequence IDs.
