Handle spawned child processes #152
Comments
Hi :) My guess is that we are launching k6 subprocesses but in at least some cases not waiting for them. I'll try to come up with a test case.
Hey! Thanks for responding 😄 I forgot to add some more context: we almost exclusively use "wait_for_results=false", since it suits our needs best.
Ah, that would explain it, since this is the one non-error case where we don't wait for the process. I guess we need to wait for them at a global level. I probably can no longer work on this today, but perhaps I'll find some time next week :)
Hey @zerok! Sorry for the late reply. Thanks for solving our issue 🙂
No worries, and thanks for testing 😄
Hey
We've encountered some issues running the flagger-k6-webhook deployment in a Kubernetes cluster.
After some time, the deployment has a lot of zombie processes, causing new forks/test executions to fail.
We've currently mitigated the issue by restarting the deployment, but that's not a long-term solution for us.
We are also using the latest version of this project.
One suggestion, after looking at other projects, is to use tini or dumb-init as the entrypoint in order to reap the zombie processes.
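As a sketch, a Dockerfile using tini as PID 1 might look like this (the base image, tini path, and binary path here are assumptions, not the project's actual Dockerfile):

```dockerfile
FROM alpine:3.18
RUN apk add --no-cache tini
COPY flagger-k6-webhook /usr/local/bin/flagger-k6-webhook
# tini runs as PID 1 and reaps any orphaned/zombie children,
# even if the application itself never waits on them.
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["/usr/local/bin/flagger-k6-webhook"]
```

On Kubernetes, setting `shareProcessNamespace: true` on the pod is another commonly used option, since the pause container then reaps zombies.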
Here's the ps output from the container running flagger-k6-webhook after it has been running for 6 hours.