
Forcemerge operation failing to include second digit #36616

Closed
danielkasen opened this issue Dec 13, 2018 · 4 comments
Labels
:Distributed/Engine Anything around managing Lucene and the Translog in an open shard. Team:Distributed Meta label for distributed team

Comments

@danielkasen

Elasticsearch version (bin/elasticsearch --version): 6.5.1

Description of the problem including expected versus actual behavior:

When executing a forcemerge operation, it appears that the check to decide whether the merge should proceed honors max_num_segments, but the actual forcemerge operation only uses the first digit of the value. For example, if you issue the command:

POST nginx-2018.12.05.14.00-1/_forcemerge?max_num_segments=20

It forcemerges each shard down to 2 segments as opposed to the requested 20. This obviously causes a large amount of resource utilization. I have tested this against several indices and all show the same result.
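
For reference, the per-shard segment count can be checked with the cat segments API (the index name below is just the example from above):

GET _cat/segments/nginx-2018.12.05.14.00-1?v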

Now, if the index already has only 20 segments, the request returns immediately, so I believe the problem is in the execution step.

Steps to reproduce:

  1. Create an index
  2. Get it to create 30 segments per shard
  3. Try to forcemerge the index down to 25 segments; it will instead reduce each shard to 2 segments (see the sketch after this list)
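
A minimal sketch of these steps, assuming a throwaway index named test-forcemerge (the index name and document body are placeholders; per-shard segment counts are checked with the cat segments API):

PUT test-forcemerge

# index a document and refresh; repeat until each shard has roughly 30 segments
POST test-forcemerge/_doc
{ "message": "hello" }
POST test-forcemerge/_refresh

# confirm the per-shard segment count before merging
GET _cat/segments/test-forcemerge?v

POST test-forcemerge/_forcemerge?max_num_segments=25

# reported behavior: each shard ends up with 2 (or 1) segments rather than 25
GET _cat/segments/test-forcemerge?v
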
@danielkasen
Author

Actually I think I am misreading these numbers. It would appear that it always tries to force merge down to 1. Did we remove the option to pass a number to this in the 6.5 release and not document it? This is causing a serious strain on our cluster.

@spinscale spinscale added the :Search/Search Search-related issues that do not fall into other categories label Dec 14, 2018
@elasticmachine
Collaborator

Pinging @elastic/es-search

@rjernst rjernst added the Team:Search Meta label for search team label May 4, 2020
@javanna javanna added :Distributed/Engine Anything around managing Lucene and the Translog in an open shard. and removed :Search/Search Search-related issues that do not fall into other categories Team:Search Meta label for search team labels May 3, 2023
@elasticsearchmachine elasticsearchmachine added the Team:Distributed Meta label for distributed team label May 3, 2023
@elasticsearchmachine
Collaborator

Pinging @elastic/es-distributed (Team:Distributed)

@javanna
Member

javanna commented May 3, 2023

It would appear that it always tries to force merge down to 1. Did we remove the option to pass a number to this in the 6.5 release and not document it? This is causing a serious strain on our cluster.

Yes, that is the case; sorry for the very late reply. See #31689.

@javanna javanna closed this as not planned May 3, 2023