Use updated onPreAuth from Platform #71552
We can handle this any number of ways. I just chose something that allowed the logic to be defined outside the route handler.
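The idea of keeping the limiting logic outside the route handler could be sketched roughly like this. This is a minimal, hypothetical sketch; `createLimiter`, `tryAcquire`, and `release` are illustrative names, not the PR's actual API:

```typescript
// A minimal, hypothetical sketch of a concurrency limiter defined outside
// any route handler. Names are illustrative, not the PR's actual API.
function createLimiter(maxConcurrency: number) {
  // Count of in-flight requests to the guarded routes.
  let inFlight = 0;
  return {
    // Returns true if a new request may proceed; increments the counter.
    // A maxConcurrency of 0 disables limiting entirely.
    tryAcquire(): boolean {
      if (maxConcurrency > 0 && inFlight >= maxConcurrency) {
        return false; // caller should respond 503 Service Unavailable
      }
      inFlight += 1;
      return true;
    },
    // Called once the request completes or aborts.
    release(): void {
      inFlight = Math.max(0, inFlight - 1);
    },
    current(): number {
      return inFlight;
    },
  };
}
```

A pre-auth interceptor would then only need to call `tryAcquire()` and either continue the lifecycle or short-circuit with a 503.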
How should this be configurable? Do we expose it as a config flag?
++ to having it as a config option, it fits nicely under `xpack.ingestManager.fleet`. The default value of `250` seems low to me, but I'm not up to date on our load testing results.
This indeed seems low; how did you get to that number?
Resolving this since we've switched to `0` (off) by default.
Should I test this handler? If so, can someone point me in the right direction :) ?
For testing, maybe looking at `routes/package_config/handlers.test.ts` would be helpful? It has usage of `httpServerMock.createKibanaRequest` for creating a request object; that method has options for creating it with tags, etc. Then I think you could mock `LIMITED_CONCURRENCY_MAX_REQUESTS` to 1 and mock `toolkit.next()` to resolve after waiting for a second. Then kicking off multiple calls to the handler inside a `Promise.all` should return an array of responses whose status codes are `[200, 503, 503, ...]`.
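That test plan can be sketched standalone, outside Kibana's harness. This is only an illustration of the idea, not the real handler or the `httpServerMock` API; the limit, the delay, and all names are stand-ins:

```typescript
// Standalone sketch of the suggested test: with the limit mocked to 1 and
// the "work" (a stand-in for toolkit.next()) resolving after a delay,
// parallel calls should yield one 200 and the rest 503.
type StatusCode = 200 | 503;

function makeHandler(maxRequests: number) {
  let inFlight = 0;
  return async (): Promise<StatusCode> => {
    // This check runs synchronously, before the first await, so concurrent
    // callers started in the same tick already see the updated counter.
    if (inFlight >= maxRequests) {
      return 503;
    }
    inFlight += 1;
    try {
      await new Promise((resolve) => setTimeout(resolve, 50));
      return 200;
    } finally {
      inFlight -= 1;
    }
  };
}

async function runScenario(): Promise<StatusCode[]> {
  const handler = makeHandler(1); // stand-in for mocking the max to 1
  // Promise.all preserves call order, so the first (admitted) call is index 0.
  return Promise.all([handler(), handler(), handler()]);
}
```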
Would be nice to name this something Fleet- or agent-related to emphasize that this handler is for those routes.
how can we decrease the counter if/when the bug is fixed? 🤔
There was some discussion about this starting in #70495 (comment). It's a "bug", but there is a test to ensure it works. There's also a possibility Platform will add a `request.events.completed$` which we can use instead.
This is entirely pedantic; it's not really a bug as much as an "implementation detail" per #70495 (comment).
@kobelb agreed. I updated the comment in https://github.com/elastic/kibana/pull/71552/files/8174114aec75a3e8a528fdc8dcc04e1c75adff53#diff-ce1cb1cde75c2cf4e5135e1280d24a9dR64-R65
As the comment says, this decrements the counter when a request completes. However, `/checkin` is using long-polling, so I don't think the request "completes" in the same way. As @roncohen mentions in (3) in #71552 (comment), I think the counter will increment with each `/checkin` but not decrement.

It seems like we could revert to using `core.http.registerOnPreResponse` to decrement during a checkin response, but the reason we moved away from that was it missed aborted/failed connections #70495 (comment), and as the first line says, `request.events.aborted$` has a bug where it's fired even when the request completes, so that will result in double-decrementing.

Perhaps we could do some checking in both the response and aborted handlers to see if there's a 429/503, like the initial POC https://github.com/elastic/kibana/pull/70495/files#diff-f3ee51f84520a4eb03ca041ff4c3d9a2R182

cc @kobelb @nchaulet
@jfsiii I think we need to do the decrementing inside the check-in route when the actual work of checking in the agent has completed and the request transitions to long polling mode. Could the interceptor augment the context passed to the route such that the route can decrement when it sees fit? And then we need to ensure that a given request can only be decremented once, so it doesn't get decremented again when the response finally gets send back.
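One way to sketch that "decrement exactly once" idea is a release callback that the interceptor hands to the route and that becomes a no-op after the first call. This is hypothetical; the real interceptor/context wiring in the PR may differ:

```typescript
// Hypothetical sketch: the interceptor could augment the route context with
// a release() that only ever decrements once. The check-in route can call it
// when the request transitions to long polling, and a later completion or
// abort event can call it again safely.
function createOnceRelease(decrement: () => void): () => void {
  let released = false;
  return () => {
    if (released) {
      return; // already decremented for this request
    }
    released = true;
    decrement();
  };
}
```

The route would call the returned function as soon as the actual check-in work finishes; the completion/abort handlers could then call it unconditionally without risking a double decrement.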
I agree with @roncohen here. I think the behavior is to "protect from multiple connections" until enough agents have transitioned to long polling, where their usage becomes low.
I don't believe that it can at the moment... I think we have two options here, one of which is a `WeakSet`. @joshdover, is there some other primitive or concept within the Kibana platform that we could use to fulfill this behavior?
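The `WeakSet` option might look roughly like this. This is a sketch under the assumption that the same request object is seen by both the completion and abort handlers; `KibanaRequestLike` is a stand-in, not the real platform type:

```typescript
// Sketch of the WeakSet idea: remember which request objects have already
// been counted down, so aborted$ firing after the request completes cannot
// double-decrement. A WeakSet holds the requests weakly, so entries are
// garbage-collected along with the request objects themselves.
interface KibanaRequestLike {
  readonly url: { readonly pathname: string };
}

let counter = 0;
const releasedRequests = new WeakSet<KibanaRequestLike>();

function increment(): void {
  counter += 1;
}

function releaseOnce(request: KibanaRequestLike): void {
  if (releasedRequests.has(request)) {
    return; // this request was already decremented (e.g. on completion)
  }
  releasedRequests.add(request);
  counter = Math.max(0, counter - 1);
}
```

Because the set is keyed on request identity, this works even when the two decrement attempts come from independent handlers rather than from the same closure.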
Not that is exposed currently. I am working on adding support for an `id` property on `KibanaRequest` that would be stable across lifecycle methods (#71019), but that will not make it for 7.9. I'm also not sure it would be safe to use for this use-case because the `id` could come from a proxy that is sending an `X-Opaque-Id` header, which we can't guarantee will be unique. Maybe we need an internal ID that is unique that we can rely on?

We do keep a reference to the original Hapi request on the `KibanaRequest` object, but it's been very purposefully hidden so that plugins can't abuse it.