
telepresence --run-shell failure: 'Telepresence pod not found for Deployment' error #1021

Closed
christinacelee opened this issue Apr 30, 2019 · 4 comments
Labels: stale (Issue is stale and will be closed)

Comments

christinacelee commented Apr 30, 2019

What were you trying to do?

run telepresence --run-shell

What did you expect to happen?

(please tell us)

What happened instead?

Crashed with a 'Telepresence pod not found for Deployment' error
There is a telepresence deployment currently running successfully (it has a pod) in minikube: telepresence-1556574584-361106-21942
telepresence.log

Automatically included information

Command line: ['/usr/local/bin/telepresence', '--run-shell']
Version: 0.99
Python version: 3.7.1 (default, Dec 14 2018, 13:28:58) [Clang 4.0.1 (tags/RELEASE_401/final)]
kubectl version: Client Version: v1.14.0 // Server Version: v1.14.0
oc version: (error: [Errno 2] No such file or directory: 'oc': 'oc')
OS: Darwin Chungs-MacBook-Pro.local 18.2.0 Darwin Kernel Version 18.2.0: Mon Nov 12 20:24:46 PST 2018; root:xnu-4903.231.4~2/RELEASE_X86_64 x86_64

Traceback (most recent call last):
  File "/usr/local/bin/telepresence/telepresence/cli.py", line 136, in crash_reporting
    yield
  File "/usr/local/bin/telepresence/telepresence/main.py", line 60, in main
    remote_info = start_proxy(runner)
  File "/usr/local/bin/telepresence/telepresence/proxy/__init__.py", line 95, in start_proxy
    run_id=run_id,
  File "/usr/local/bin/telepresence/telepresence/proxy/remote.py", line 211, in get_remote_info
    format(deployment_name)
RuntimeError: Telepresence pod not found for Deployment 'telepresence-1556636385-8781178-28511'.

Logs:

5] captured in 0.09 secs.
 117.4 TEL | [106] Capturing: kubectl --context minikube-development --namespace development get pod -o json --selector=telepresence=e1a60a098a4a428f816ee92fcaca647b
 117.5 TEL | [106] captured in 0.11 secs.
 118.5 TEL | [107] Capturing: kubectl --context minikube-development --namespace development get pod -o json --selector=telepresence=e1a60a098a4a428f816ee92fcaca647b
 118.6 TEL | [107] captured in 0.10 secs.
 119.6 TEL | [108] Capturing: kubectl --context minikube-development --namespace development get pod -o json --selector=telepresence=e1a60a098a4a428f816ee92fcaca647b
 119.7 TEL | [108] captured in 0.09 secs.
 120.7 TEL | [109] Capturing: kubectl --context minikube-development --namespace development get pod -o json --selector=telepresence=e1a60a098a4a428f816ee92fcaca647b
 120.8 TEL | [109] captured in 0.16 secs.
 120.8 TEL | END SPAN remote.py:151(get_remote_info)  119.5s
 120.9 TEL | [110] Running: sudo -n echo -n
 121.0 TEL | [110] ran in 0.04 secs.

ark3 commented Apr 30, 2019

Sorry about the crash. It looks like Telepresence launched its proxy deployment but no pod came up. Is your cluster very tight on resources? Does this crash occur every time?

Can you try this again? If the crash happens, before you hit "no" on the crash reporter, in another window please try kubectl describe deployment -l telepresence and pass along the output (redacted as desired). Once you answer the crash reporter, everything gets cleaned up, so please run the command before doing that.

Thanks for your help.
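
A rough sketch of that diagnostic sequence, assuming the minikube-development context and development namespace seen in the logs above:

    # Terminal 1: reproduce the crash, but leave the crash-reporter prompt open
    telepresence --run-shell

    # Terminal 2: while the proxy Deployment still exists, inspect it
    kubectl --context minikube-development --namespace development \
        describe deployment -l telepresence

    # Optionally, check whether a proxy pod was scheduled at all and why it is not ready
    kubectl --context minikube-development --namespace development \
        get pods -l telepresence -o wide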

christinacelee (Author) commented

The machine minikube is running on is a bit low on RAM. This crash has been happening every time since the first (and only) successful shell launch.

Output of kubectl describe deployment -l telepresence:

ark3 commented Apr 30, 2019

Okay, that shows deployment tel...-21942, which I'm guessing is left over from a prior run (sorry, that's issue #260). The deployment from a different run would have a different label, e.g. tel...-28511 from your crash report above. Deleting the older deployment might free up enough resources to run a new one.
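
A minimal cleanup sketch, assuming the leftover Deployment is the telepresence-1556574584-361106-21942 reported above, in the same context and namespace:

    # List telepresence proxy Deployments left over from earlier runs
    kubectl --context minikube-development --namespace development \
        get deployments -l telepresence

    # Delete the stale one so its resources are freed for the next run
    kubectl --context minikube-development --namespace development \
        delete deployment telepresence-1556574584-361106-21942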

stale bot commented Mar 31, 2021

This issue has been automatically marked as stale because it has not had recent activity.
Issue Reporter: Is this still a problem? If not, please close this issue.
Developers: Do you need more information? Is this a duplicate? What's the next step?

stale bot added the stale label on Mar 31, 2021
stale bot closed this as completed on Apr 30, 2021