
Make autoimport cache generation non-blocking #499

Merged

Conversation

tkrabel
Contributor

@tkrabel tkrabel commented Dec 19, 2023

On a medium-sized virtual environment, rope takes 10-20 sec to create the module cache. During this time, the language server is unresponsive.

This is not a problem on a local machine, where the cache is unlikely to be emptied, but it poses a challenge in cloud environments such as Databricks, where the server runs on a cloud VM that gets preempted regularly.

This PR makes cache initialization non-blocking, so the server becomes responsive immediately after receiving the initialized message.

How is this tested?

  • unit tests
  • e2e tests, making sure that the LSP server becomes responsive immediately and that rope works once the cache is created.
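
For illustration, a minimal sketch of the idea (hypothetical names, assuming rope's sqlite AutoImport API; not necessarily the exact code in this PR): kick off cache generation on a daemon thread during initialization so the initialize handler returns right away.

```python
import threading

from rope.contrib.autoimport.sqlite import AutoImport


def start_cache_generation(rope_project) -> threading.Thread:
    """Build rope's autoimport cache without blocking the LSP initialize handler."""
    autoimport = AutoImport(rope_project, memory=False)

    def _generate():
        # This is the 10-20 second step: index the project and its site-packages.
        autoimport.generate_cache()
        autoimport.generate_modules_cache()

    thread = threading.Thread(target=_generate, name="autoimport-cache", daemon=True)
    thread.start()
    return thread  # callers can poll is_alive() before relying on the cache
```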

@tkrabel
Contributor Author

tkrabel commented Dec 20, 2023

@ccordoba12 before diving further into this, I wanted to run the general idea past you.

What is the problem?

  • Currently, m_initialize is blocked for 10-20 seconds on rope cache creation.
  • While rope creates the cache, it bombards the client with 10k $/progress messages before it sends the initialize response. This overwhelms the language client, which takes another 40 s until it becomes responsive and sends the initialized notification.
  • If I make cache creation non-blocking but keep sending progress reports at that rate, the client is still overwhelmed.

That's why I suggest the following solution:

  1. We definitely need to reduce the progress reporting. I currently deactivated it by commenting out the line containing PylspTaskHandle, but I'm happy to change that to throttle progress reporting instead.
  2. I made cache creation non-blocking by running it in another thread, and I only send completion or code action suggestions back once the thread is done (see the sketch below).
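
A rough sketch of point 2 (illustrative only; the pylsp_completions signature is simplified and _autoimport_completions is a hypothetical helper): the generation thread sets an event when it finishes, and the completion hook returns nothing until then instead of blocking.

```python
import threading

_cache_ready = threading.Event()  # set once the background generation finishes


def generate_cache_async(autoimport) -> None:
    def _run():
        autoimport.generate_cache()
        _cache_ready.set()

    threading.Thread(target=_run, daemon=True).start()


def _autoimport_completions(document, position):
    # Placeholder for the actual suggestion lookup against the finished cache.
    return []


def pylsp_completions(document, position):
    # While the cache is still being built, skip autoimport suggestions
    # rather than blocking the completion request.
    if not _cache_ready.is_set():
        return []
    return _autoimport_completions(document, position)
```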

CC @bagel897 since she worked on rope a lot.

Any thoughts folks?

@bagel897
Contributor

Is the database consistent during generation?
You could batch progress reporting or report it per package instead of per module. Regardless, cache generation can occupy 100% CPU, so you may want to implement a parallelism limiter.
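
For illustration, a sketch of that suggestion (assuming rope's generate_modules_cache accepts a list of module names and that the cache backend tolerates concurrent writers): index per package with a bounded thread pool and report progress once per package rather than once per module.

```python
import os
from concurrent.futures import ThreadPoolExecutor, as_completed

# Leave CPU headroom for the language server itself.
MAX_WORKERS = max(1, (os.cpu_count() or 2) // 2)


def generate_cache_per_package(autoimport, packages, report_progress):
    """Index packages with bounded parallelism and coarse-grained progress."""
    with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        futures = {
            pool.submit(autoimport.generate_modules_cache, [package]): package
            for package in packages
        }
        for done, future in enumerate(as_completed(futures), start=1):
            future.result()  # surface any indexing error
            # One $/progress message per package instead of one per module.
            report_progress(f"indexed {futures[future]} ({done}/{len(futures)})")
```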

@ccordoba12
Member

We definitely need to reduce the progress reporting. I currently deactivated it by commenting out the line containing PylspTaskHandle, but I'm happy to change that to throttle progress reporting instead.

I'd be in favor of using throttling for this, if possible, to let users know what's happening with autoimport in the meantime.

I made cache creation non-blocking by running it in another thread, and I only send completion or code action suggestions back once the thread is done.

I think this is the right thing to do to avoid blocking the server on m_initialize.

@tkrabel tkrabel marked this pull request as ready for review December 21, 2023 17:12
@tkrabel
Contributor Author

tkrabel commented Dec 21, 2023

@ccordoba12 the PR is ready for review. I throttled progress reporting to at most one message every 500 ms; progress begin and end messages are unaffected by it. I think this is more than enough.

I like how responsive the language server is now :)
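
A sketch of what such a throttle could look like as a decorator built on time and functools.wraps (illustrative only; report_progress is a hypothetical sink, not necessarily the merged code):

```python
import time
from functools import wraps


def throttle(seconds=1):
    """Drop calls that arrive less than `seconds` after the last accepted call."""

    def decorator(func):
        last_call = 0.0

        @wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal last_call
            now = time.monotonic()
            if now - last_call < seconds:
                return None  # swallow this progress update
            last_call = now
            return func(*args, **kwargs)

        return wrapper

    return decorator


@throttle(seconds=0.5)
def report_progress(message: str) -> None:
    # Hypothetical progress sink; begin/end notifications would bypass the
    # throttle so clients always see the start and the completion.
    ...
```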

Member

@ccordoba12 ccordoba12 left a comment


Thanks @tkrabel for your work on this! Two small comments for you, the rest looks good to me.

pylsp/plugins/_rope_task_handle.py (resolved)
pylsp/plugins/rope_autoimport.py (resolved)
@ccordoba12 ccordoba12 added this to the v1.10.0 milestone Dec 21, 2023
@ccordoba12 ccordoba12 added the enhancement label Dec 21, 2023
Member

@ccordoba12 ccordoba12 left a comment


Last comment for you @tkrabel, then this should be ready.

pylsp/_utils.py
Comment on lines 13 to 14
import time
from functools import wraps
ccordoba12 (Member)


Please move these two imports to the standard library section above.

tkrabel (Contributor Author)


I'm wondering: can't we automate that somehow? :)

ccordoba12 (Member)


isort maybe, but I'm not really sure.

@ccordoba12 ccordoba12 changed the title [Rope] Make cache generation non-blocking Make autoimport cache generation non-blocking Dec 22, 2023
Member

@ccordoba12 ccordoba12 left a comment


Looks good to me now, thanks @tkrabel!

@ccordoba12 ccordoba12 merged commit de87a80 into python-lsp:develop Dec 22, 2023
10 checks passed
@ccordoba12
Member

I'll release 1.10.0 at the beginning of next week with your and @smacke's improvements.
