
Add http/1 response streaming to Canvas batches #32027

Merged (5 commits), Mar 1, 2019

Conversation

@chrisdavies (Contributor) commented Feb 26, 2019

This adds response streaming to Canvas batch requests. The response streaming logic itself lives in a standalone UI folder so that it can be reused elsewhere (the plan is to use it in a courier replacement).

This allows Canvas batches to process batch responses as they come in, rather than waiting for the entire batch to complete, which should theoretically make the UI feel more responsive. The main purpose of this work is to facilitate a courier replacement effort that is in progress, so if we don't want to add this to Canvas itself, we can simply use this as a reference PR for that courier work and not merge it.

The branch name (kfetch/streaming) is misleading, as this doesn't modify kfetch. I started out implementing this as a kfetch modification, but it turned out to be the wrong approach, and a conversation with the kfetch implementors suggested this would be better as a standalone piece.
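For context, the general shape of the technique looks roughly like this. It is a minimal sketch rather than this PR's actual code: the `/api/canvas/batch` URL, the newline-delimited JSON framing, and the `onItem` callback are all assumptions.

```ts
// Sketch: stream a batch response with the Fetch API, assuming the
// server flushes one JSON object per line as each batch item completes
// (newline-delimited JSON). URL and message shape are hypothetical.
async function fetchStreaming(
  url: string,
  body: string,
  onItem: (item: unknown) => void
): Promise<void> {
  const res = await fetch(url, { method: 'POST', body });
  if (!res.body) {
    // Streaming not supported (e.g. older browsers): fall back to
    // parsing the fully buffered response.
    const text = await res.text();
    text.split('\n').filter(Boolean).forEach(line => onItem(JSON.parse(line)));
    return;
  }

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters split across chunks intact.
    buffer += decoder.decode(value, { stream: true });
    // Everything before the last newline is a complete batch item;
    // the remainder is carried over to the next chunk.
    const lines = buffer.split('\n');
    buffer = lines.pop()!;
    for (const line of lines) {
      if (line) onItem(JSON.parse(line));
    }
  }
  if (buffer) onItem(JSON.parse(buffer));
}
```

A caller can then resolve each pending request in the batch as its line arrives, instead of waiting for the whole response body.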

@elasticmachine (Contributor)

💔 Build Failed

@chrisdavies (Contributor, Author)

I wonder, for large payloads, if the frequent reading of the response text will cause this to be slower / much worse than not streaming in the first place. It'd be nice to put this through some perf tests.
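For illustration, the pitfall being hinted at is roughly this (a hypothetical sketch, not code from this PR): with XHR-style streaming, the response text holds everything received so far, so re-scanning it on every progress event makes the total work grow quadratically with payload size, while tracking an offset into the text keeps it linear.

```ts
// Hypothetical illustration: xhr.responseText contains *all* text
// received so far, so re-parsing it from the start on each progress
// event is O(n^2) overall. Keeping an offset and slicing off only the
// newly arrived portion keeps the work linear.
function streamWithXhr(url: string, onChunk: (chunk: string) => void): void {
  const xhr = new XMLHttpRequest();
  let seen = 0; // characters already consumed

  xhr.onprogress = () => {
    const text = xhr.responseText; // full text so far
    onChunk(text.slice(seen));     // hand over only the new part
    seen = text.length;
  };
  xhr.open('POST', url);
  xhr.send();
}
```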

@elasticmachine (Contributor)

💚 Build Succeeded

@chrisdavies added the Feature:ExpressionLanguage (Interpreter expression language, aka canvas pipeline), v7.2.0, and review labels on Feb 27, 2019
@elasticmachine (Contributor)

💚 Build Succeeded

@w33ble (Contributor) commented Feb 28, 2019

> I wonder, for large payloads, if the frequent reading of the response text will cause this to be slower / much worse than not streaming in the first place. It'd be nice to put this through some perf tests.

Sample size of one, with a single workpad, so take this with a grain of salt, but this is what I see:

Branch                          | Load time  | Refresh time | Requests to fns
--------------------------------|------------|--------------|----------------
pull/31958                      | ~4 seconds | ~3 seconds   | 4
This PR (rebased on pull/31958) | ~4 seconds | ~3 seconds   | 4

Timing was done manually with the stopwatch on my phone, with devtools closed.

So in practice, this doesn't really seem to change much of anything, but at the very least, it doesn't seem to slow things down. And I agree that having real perf checks would be great.
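A crude version of such a perf check could look like the following sketch; `fetchStreaming` stands in for a hypothetical streaming helper like the one sketched above, and the endpoint and payload are placeholders.

```ts
// Rough perf check: time a streamed read against a fully buffered one.
// Both the endpoint and payload here are placeholders.
async function comparePerf(url: string, body: string): Promise<void> {
  const t0 = performance.now();
  await fetch(url, { method: 'POST', body }).then(res => res.text());
  const buffered = performance.now() - t0;

  const t1 = performance.now();
  await fetchStreaming(url, body, () => {});
  const streamed = performance.now() - t1;

  console.log(`buffered: ${buffered.toFixed(1)}ms, streamed: ${streamed.toFixed(1)}ms`);
}
```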

@w33ble (Contributor) left a review


Changes look good, functionality looks good. I thought I found a bug in the batching, but when I wrote some tests locally I was wrong. All in all, this LGTM.
