When submitting documents for indexing, we split batches on the unique key of the documents, so that any exception from the service side can be correctly mapped and our document-submission retry mechanism remains stable. When designing a solution, we should aim to:
1. Try to fill the entire batch
2. Maintain the order of document submission
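As a sketch of one way to satisfy both goals (Python stand-in for illustration only; the actual implementation lives in the .NET SDK, and `split_batches`/`key_of` are hypothetical names): fill each batch greedily in submission order, and cut the batch as soon as adding an action would repeat a key already in it. Cutting at the duplicate is what keeps same-key actions ordered across batches.

```python
def split_batches(actions, batch_size, key_of):
    """Greedily fill batches in submission order, starting a new batch
    whenever it is full or adding an action would repeat a document key
    already present in the current batch."""
    batch, seen = [], set()
    for action in actions:
        key = key_of(action)
        if len(batch) == batch_size or key in seen:
            yield batch
            batch, seen = [], set()
        batch.append(action)
        seen.add(key)
    if batch:  # emit the final partial batch
        yield batch
```

For example, with actions keyed `a, b, a, c` and a batch size of 3, this yields `[a, b]` then `[a, c]`: the second `a` forces a cut, but each batch is otherwise filled as far as possible.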
For .NET, we modified the batching algorithm in #18469.
This addresses goal 2 above (maintaining the order of document submission), but we can do better on goal 1 (filling each batch as fully as possible).
I tried another round of improvement with #18603, but there were concerns about a semantic change to the operation.
That change adds a “flush duplicate actions immediately after the current batch, regardless of size” behavior, which could result in us sending many extra, undersized batches. Ideally we’d just update the existing pending/retry queues and keep the rest of the logic the same (since it’s already a lot to wrap your head around). That’s probably a nontrivial ask with .NET’s Queue, though; I think we might need to switch from Queue to a different data structure.
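A sketch of that alternative (again a Python stand-in using `collections.deque`; `next_batch`, `pending`, and `key_of` are hypothetical names, and this assumes only per-key ordering must be preserved, i.e. later actions on the same key stay after earlier ones): a double-ended queue lets the batching loop defer a duplicate-key action back to the front of the pending queue instead of flushing an undersized batch, so the next batch picks it up first.

```python
from collections import deque

def next_batch(pending, batch_size, key_of):
    """Build one full batch from the pending deque, deferring actions whose
    key is already in the batch back to the front of the queue (in their
    original order) so they lead the next batch."""
    batch, seen, deferred = [], set(), []
    while pending and len(batch) < batch_size:
        action = pending.popleft()
        key = key_of(action)
        if key in seen:
            deferred.append(action)  # defer rather than flush a tiny batch
        else:
            batch.append(action)
            seen.add(key)
    # Push deferred actions back onto the front, preserving their order.
    pending.extendleft(reversed(deferred))
    return batch
```

With pending actions keyed `a, b, a, c` and a batch size of 3, the first call returns a full `[a, b, c]` batch and leaves the second `a` at the head of the queue for the next call, instead of emitting a one-element flush batch in between.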
The remaining work is to modify the batching algorithm further so that the above concerns are alleviated.