telepresence should clean up/kill its own inactive and expired pods #920
Hi All,

Telepresence currently has problems cleaning up pods and deployments after itself. Here's a snapshot after a couple of days of work:

(screenshot of leftover Telepresence deployments and pods)

Not ideal.

Comments
Agreed! Sorry about that. See also #260 (comment).
Hey, no worries. It's a great idea and a great project; just some corners to smooth out.
Out of curiosity, did those deployments get created using the (default) new deployment operation? Were any services left behind? I have an idea that I'd like to explore...
I think devs just left these behind: they closed their laptops and Telepresence lost its connection to the cluster. When they opened their laptops again, the connections had already expired, leaving the proxy hanging.
We also have the issue of 'interrupted' telepresence connections leaving many dangling deployments & pods. Just wondering, is there an easy way to clean these up? For now I use `kubectl delete deployments -l telepresence`, but that might just as well kill 'in-use' deployments.
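For reference, a minimal sketch of that workaround, assuming the proxy deployments carry a `telepresence` label as the selector above implies. Listing first is advisable, since a deployment backing a live session matches the same selector and would be deleted along with the stale ones:

```sh
# List the Telepresence proxy deployments so in-use ones can be spotted first.
kubectl get deployments -l telepresence

# Delete them all; note this is indiscriminate and also removes deployments
# that back a currently active telepresence session.
kubectl delete deployments -l telepresence

# Leftover pods and services created by the proxy can be inspected the same way.
kubectl get pods,services -l telepresence
```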
I believe this is no longer an issue in Telepresence 2, since you can use the uninstall command.
The uninstall command posted above just tries to install the traffic-manager again.
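For context, a sketch of the Telepresence 2 cleanup commands referred to above. The exact flags differ between 2.x releases, so treat these as assumptions and check `telepresence uninstall --help` for the installed version:

```sh
# Remove the traffic agents that Telepresence injected into workloads
# (flag name is an assumption based on the 2.x CLI; verify with --help).
telepresence uninstall --all-agents

# In newer 2.x releases the traffic-manager itself is removed via Helm.
telepresence helm uninstall
```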