
JupyterLab keeps polling the API even when it detects that the server stopped running #15211

Open
lahwaacz opened this issue Oct 3, 2023 · 5 comments

lahwaacz commented Oct 3, 2023

Description

When JupyterLab gets disconnected from the server and the server is stopped (e.g. by jupyterhub-idle-culler on JupyterHub due to inactivity), JupyterLab can detect this case and shows the following to the user:

[Screenshot: JupyterLab's "Server unavailable or unreachable" dialog, 2023-10-03 09:27:15]

However, JupyterLab keeps polling the server which results in many log messages on JupyterHub, such as:

[I 2023-10-03 09:26:11.545 JupyterHub log:191] 302 GET /jupyter/user/<user>/api/sessions?1696318001631 -> /jupyter/hub/user/<user>/api/sessions?1696318001631 (@<ip>) 0.52ms
[W 2023-10-03 09:26:11.842 JupyterHub base:1444] Failing suspected API request to not-running server: /jupyter/hub/user/<user>/api/sessions
[W 2023-10-03 09:26:11.842 JupyterHub log:191] 424 GET /jupyter/hub/user/<user>/api/sessions?1696318001631 (<user>@<ip>) 2.65ms

and:

[I 2023-10-03 09:27:06.172 JupyterHub log:191] 302 GET /jupyter/user/<user>/api/kernels?1696318026148 -> /jupyter/hub/user/<user>/api/kernels?1696318026148 (@<ip>) 0.57ms
[W 2023-10-03 09:27:06.432 JupyterHub base:1444] Failing suspected API request to not-running server: /jupyter/hub/user/<user>/api/kernels
[W 2023-10-03 09:27:06.432 JupyterHub log:191] 424 GET /jupyter/hub/user/<user>/api/kernels?1696318026148 (<user>@<ip>) 2.44ms

These requests are repeated every 30 seconds, and the volume is multiplied by the number of users who have an open JupyterLab tab but a stopped server.

Note that my JupyterHub runs on sub.domain.tld/jupyter so all endpoints in the logs are prefixed with /jupyter.

References: jupyterhub/the-littlest-jupyterhub#427, #3929

Reproduce

  1. Start a JupyterLab session on JupyterHub
  2. Go to File > Hub Control Panel, which opens a new browser tab, and click on Stop My Server
  3. The Server unavailable or unreachable message will be shown on the first tab
  4. Observe the logs on JupyterHub

Expected behavior

When JupyterLab receives a 424 status for these requests, it means that the hub is reachable (i.e. there is no general network problem) but the user's server was stopped. So JupyterLab should stop polling, because there is no point! Just let the user click the Restart button, or add a Retry button if there is a chance that the lost session/kernel may come back.
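A minimal sketch of the idea, in TypeScript. This is not JupyterLab's actual code; `SessionPoller` and the `fetchStatus` callback are hypothetical names used for illustration only:

```typescript
// Hypothetical sketch: treat HTTP 424 (hub reachable, but the user's server
// is stopped) as a terminal condition and stop polling, instead of retrying
// every 30 seconds forever.

type FetchStatus = () => Promise<number>; // returns an HTTP status code

class SessionPoller {
  stopped = false;
  reason: string | null = null;

  constructor(private fetchStatus: FetchStatus) {}

  // Runs one polling round; returns false once polling should cease.
  async pollOnce(): Promise<boolean> {
    if (this.stopped) {
      return false;
    }
    const status = await this.fetchStatus();
    if (status === 424) {
      // The hub answered, so the network is fine -- the server is simply gone.
      this.stopped = true;
      this.reason = "server stopped; waiting for the user to click Restart";
      return false;
    }
    return true;
  }
}
```

A real implementation would also need to distinguish transient errors (timeouts, 5xx) from the 424 case, and only keep retrying for the former.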

It also seems that JupyterLab polls an outdated endpoint /user/<user>/api, which causes many useless 302 redirects. If backward compatibility is needed, JupyterLab should remember the target of the 302 response and use it in all subsequent requests. Maybe JupyterHub should return 301 Moved Permanently instead of 302 Found.
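The redirect-caching idea could look roughly like this. Again a sketch under assumptions: the `doFetch` signature and `RedirectCachingClient` name are made up for illustration, and real code would use `fetch` with manual redirect handling:

```typescript
// Hypothetical sketch: remember the target of a 302 response and issue all
// subsequent requests directly against it, so each poll costs one request
// instead of a redirect round-trip.

interface Resp {
  status: number;
  location?: string;
}
type DoFetch = (url: string) => Resp;

class RedirectCachingClient {
  // Maps the originally requested URL to the redirect target seen for it.
  private resolved = new Map<string, string>();

  constructor(private doFetch: DoFetch) {}

  request(url: string): Resp {
    const target = this.resolved.get(url) ?? url;
    const resp = this.doFetch(target);
    if (resp.status === 302 && resp.location) {
      this.resolved.set(url, resp.location); // skip the hop next time
      return this.doFetch(resp.location);
    }
    return resp;
  }
}
```

Note that caching is only safe if the redirect target is stable, which is one argument for the server sending 301 Moved Permanently rather than 302 Found.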

Context

  • Operating System and version: Arch Linux
  • Browser and version: Mozilla/5.0 (Windows NT 10.0; rv:109.0) Gecko/20100101 Firefox/118.0
  • JupyterLab version: 4.0.6

All Jupyter packages (I run JupyterHub on Arch Linux so the package names and versions are most likely different from pip or other package managers):

jupyter-bash_kernel 0.9.0-2
jupyter-gnuplot_kernel 0.4.1-6
jupyter-lsp 2.2.0-2
jupyter-metakernel 0.30.1-1
jupyter-nbclient 0.8.0-1
jupyter-nbconvert 7.8.0-1
jupyter-nbformat 5.9.2-1
jupyter-nbgitpuller 1.2.0-2
jupyter-nbgrader-git v0.9.1.r1.g2ef44515-1
jupyter-notebook 7.0.4-1
jupyter-notebook-shim 0.2.3-2
jupyter-server 2.7.3-1
jupyterhub 4.0.2-1
jupyterhub-idle-culler 1.2.1-2
jupyterhub-systemdspawner 1.0.1-1
jupyterlab 4.0.6-1
jupyterlab-pygments 0.2.2-1
jupyterlab-pytutor 0.2.0-2
jupyterlab-rise 0.41.0-1
python-jupyter-client 8.3.1-1
python-jupyter-core 5.3.2-1
python-jupyter-events 0.7.0-1
python-jupyter-server-terminals 0.4.4-4
python-jupyter_telemetry 0.1.0-2
python-jupyterlab-server 2.25.0-1
lahwaacz added the bug label Oct 3, 2023

welcome bot commented Oct 3, 2023

Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! 🤗

If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template as it helps other community members to contribute more effectively.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! 👋

Welcome to the Jupyter community! 🎉

JasonWeill (Contributor) commented:

This has been reported before, but the issue is closed: #11134 (thanks @krassowski).

Let's keep this issue open for pull requests. Thanks @lahwaacz for your contribution!

JasonWeill removed the status:Needs Triage label Oct 3, 2023
lahwaacz (Author) commented Oct 3, 2023

@JasonWeill This is different from #11134 - false pop-ups are not the problem here, this issue is about JupyterLab polling even when the server is rightfully down.

krassowski (Member) commented:

Yes, we noted this difference during the triage meeting, but wanted to link back to the previous one because it contains useful solutions for folks who may end up finding this issue due to false pop-ups. In either case, pull requests welcome!

Zsailer (Member) commented May 28, 2024

Another scenario to mention here...

In systems where the user's authentication expires, JupyterLab keeps polling endpoints and silently fails with 403 responses. There are two issues with this:

  1. There is no default pop-up to inform the user that none of their requests are going through.
  2. There is no generic way to tell JupyterLab and all plugins to stop polling the server.

We see this scenario often, since folks leave browser tabs open over a weekend, letting their cookie expire. When they return on Monday, they are confused why JupyterLab looks connected but they aren't able to do anything.

While we have implemented a custom plugin to raise a modal when things look expired, we cannot easily stop all polling across the client. Plugins are usually the issue.

Calling app.serviceManager.dispose() will stop all polling from core APIs, but it won't stop plugins that might be polling.

Would it make sense to provide a "PollingManager" mechanism where plugins can "register" any pollers they define and allow this manager to stop/start them when things like e.g. auth expiration happen?
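A registry along those lines could be sketched as follows. The `PollingManager` and `Poller` names are assumptions for illustration; they are not an existing JupyterLab API, and a real design would likely build on `@lumino/polling`'s `Poll` objects:

```typescript
// Hypothetical "PollingManager" sketch: plugins register their pollers so a
// single authority can suspend and resume them all on events such as auth
// expiry or the user's server being stopped.

interface Poller {
  start(): void;
  stop(): void;
}

class PollingManager {
  private pollers = new Set<Poller>();

  // A plugin registers each poller it creates.
  register(p: Poller): void {
    this.pollers.add(p);
  }

  // Called on e.g. a 403 (auth expired) or 424 (server stopped).
  stopAll(): void {
    for (const p of this.pollers) {
      p.stop();
    }
  }

  // Called after re-authentication or a server restart.
  startAll(): void {
    for (const p of this.pollers) {
      p.start();
    }
  }
}
```

The manager itself would then be the natural place to hook a modal like the one described above, since it already knows when and why polling was suspended.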
