Ensure only a single action for a document is added to a batch #18469

Merged
Changes from 2 commits
8 changes: 5 additions & 3 deletions sdk/search/Azure.Search.Documents/src/Batching/Publisher.cs
@@ -422,12 +422,14 @@ await SubmitBatchAsync(
// Returns whether the batch is now full.
bool FillBatchFromQueue(List<PublisherAction<T>> batch, Queue<PublisherAction<T>> queue)
{
// TODO: Consider tracking the keys in the batch and requiring
// them to be unique to avoid error alignment problems
HashSet<string> documentIdentifiers = new HashSet<string>(StringComparer.Ordinal);

while (queue.Count > 0)
{
if (batch.Count < BatchActionCount)
// Stop filling the batch if we run into an action for a document that is already in the batch.
// We want to keep the actions ordered and map any errors accurately to the documents,
// so we need to split the batch on encountering an action for a previously queued document.
if ((batch.Count < BatchActionCount) && documentIdentifiers.Add(queue.Peek().Key))
Member

Imagine you've got a sequence of pending actions like <(Upload A), (Upload B), (Merge A), (Upload C), (Merge B)>.

The code as implemented will create a batch of <(Upload A), (Upload B)> and leave <(Merge A), (Upload C), (Merge B)> pending for the next batch.

I was hoping we could instead create a batch of <(Upload A), (Upload B), (Upload C)> and leave <(Merge A), (Merge B)> pending for the next batch.

It would be a little more complicated because .NET's Queue isn't amenable to cutting in line. It won't be a breaking change to do this in the future though, so let's ship what you have here and consider fixing this as an optimization next release.
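The reordering proposed above could be sketched as follows. This is an illustrative Python rendering, not the SDK's C# implementation: actions are modeled as `(key, op)` tuples, and any action whose key already appears in the batch is deferred and pushed back to the front of the queue so per-document ordering is preserved.

```python
from collections import deque

def fill_batch_with_reorder(queue, batch_action_count):
    """Hypothetical optimization: pull later, non-conflicting actions forward.

    Only the relative order of actions for the *same* document key must be
    preserved; actions for distinct keys may be reordered across batches.
    """
    batch, seen, deferred = [], set(), deque()
    while queue and len(batch) < batch_action_count:
        action = queue.popleft()
        key = action[0]
        if key in seen:
            # A later action for a document already in the batch: defer it,
            # keeping deferred actions in their original relative order.
            deferred.append(action)
        else:
            seen.add(key)
            batch.append(action)
    # Deferred actions preceded everything still unscanned, so they go back
    # at the front of the queue to keep per-key ordering intact.
    queue.extendleft(reversed(deferred))
    return batch
```

With the reviewer's example queue `<(Upload A), (Upload B), (Merge A), (Upload C), (Merge B)>`, this yields a batch of the three uploads and leaves the two merges pending, as hoped.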

Contributor Author

Interesting. I was under the (wrong) impression that the order couldn't be changed at all, but it makes sense that only the order of operations on the same document needs to be respected.
Could there ever be a case that relies on the ordering of the whole set of operations? For example (completely made up): a hook on the service side that executes additional operations and relies on relationships between the documents, so (Upload C) becomes an issue if it happens before (Merge A)?

{
batch.Add(queue.Dequeue());
}
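The merged behavior above can be summarized with a short sketch. This is illustrative Python, not the SDK's C# code: actions are `(key, op)` tuples, and filling stops as soon as the next queued action targets a document key already in the batch.

```python
from collections import deque

def fill_batch_from_queue(batch, queue, batch_action_count):
    """Sketch of the merged logic: returns whether the batch is now full.

    Splitting the batch on a repeated document key keeps per-document
    actions ordered and lets errors map cleanly back to documents.
    """
    # Mirrors the HashSet<string> with ordinal comparison in the C# diff.
    document_identifiers = set()
    while queue:
        key, _op = queue[0]
        if len(batch) < batch_action_count and key not in document_identifiers:
            document_identifiers.add(key)
            batch.append(queue.popleft())
        else:
            # Batch is full, or the next action's document is already
            # in the batch: stop filling here.
            break
    return len(batch) >= batch_action_count
```

For a queue `<(Upload A), (Upload B), (Merge A), (Upload C)>` this fills the batch with the two uploads and leaves `(Merge A), (Upload C)` pending, matching the pre-optimization behavior discussed in the review thread.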
59 changes: 59 additions & 0 deletions sdk/search/Azure.Search.Documents/tests/Batching/BatchingTests.cs
@@ -1031,6 +1031,65 @@ public async Task Behavior_BatchSize()
Assert.AreEqual(5, client.Submissions.Count);
}

[Test]
public async Task Behavior_SplitBatchByDocumentKey()
{
int numberOfDocuments = 5;

await using SearchResources resources = await SearchResources.CreateWithEmptyIndexAsync<SimpleDocument>(this);
BatchingSearchClient client = GetBatchingSearchClient(resources);

SimpleDocument[] data = new SimpleDocument[numberOfDocuments];
for (int i = 0; i < numberOfDocuments; i++)
{
data[i] = new SimpleDocument() { Id = $"{i}", Name = $"Document #{i}" };
}

// 'SimpleDocument' has 'Id' set as its key field.
// Set the Ids of 2 documents in the group to be the same.
// We expect the batch to be split at this index, even though the size of the set is smaller than the batch size.
data[3].Id = data[0].Id;

await using SearchIndexingBufferedSender<SimpleDocument> indexer =
client.CreateIndexingBufferedSender(
new SearchIndexingBufferedSenderOptions<SimpleDocument>()
{
// Set the expected batch action count to be larger than the number of documents in the set.
InitialBatchActionCount = numberOfDocuments + 1,
});

// Throw from every handler
int sent = 0, completed = 0;
indexer.ActionSent += e =>
{
sent++;

// Batch will be split at the 4th document.
// So, 3 documents will be sent before any are submitted, but 3 submissions will be made before the last 2 are sent
Assert.AreEqual((sent <= 3) ? 0 : 3, completed);
Member

This feels a little tricky. Would it be easier to have a List<string> pendingKeys collection that you add/remove from in these methods and then CollectionAssert.AllItemsAreUnique in each?

Contributor Author

Makes sense. Let me see how I can make the checks simpler.


throw new InvalidOperationException("ActionSentAsync: Should not be seen!");
};

indexer.ActionCompleted += e =>
{
completed++;

// Batch will be split at the 4th document.
// So, 3 documents will be submitted after 3 are sent, and the last 2 submissions will be made after all 5 are sent
Assert.AreEqual((completed <= 3) ? 3 : 5, sent);

throw new InvalidOperationException("ActionCompletedAsync: Should not be seen!");
};

AssertNoFailures(indexer);
await indexer.UploadDocumentsAsync(data);
await indexer.FlushAsync();

Assert.AreEqual(5, sent);
Assert.AreEqual(5, completed);
}
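The split the test expects can be checked with a small simulation. This is a hypothetical Python helper, not SDK code: it replays the publisher's key-based splitting over the test's document keys (ids "0".."4" with `data[3].Id = data[0].Id`).

```python
from collections import deque

def split_into_batches(keys, batch_action_count):
    """Simulate batch splitting over a stream of document keys: a new
    batch starts when the batch is full or a key repeats."""
    queue = deque(keys)
    batches = []
    while queue:
        batch, seen = [], set()
        while queue and len(batch) < batch_action_count and queue[0] not in seen:
            seen.add(queue[0])
            batch.append(queue.popleft())
        batches.append(batch)
    return batches

# Test data: five documents, with the fourth sharing the first's key,
# and a batch action count larger than the document count.
keys = ["0", "1", "2", "0", "4"]
```

Running `split_into_batches(keys, 6)` produces two batches of sizes 3 and 2, matching the test's assertions that 3 actions are sent and completed before the last 2.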

[Test]
[TestCase(409)]
[TestCase(422)]