
Give run_in_worker_thread a semaphore= argument #57

Closed
njsmith opened this issue Feb 21, 2017 · 3 comments


njsmith commented Feb 21, 2017

If you want to limit the number of simultaneous threads, a semaphore is good for that. But right now it doesn't quite work with run_in_worker_thread(..., cancellable=True), because you lose track of the thread once it's cancelled. The idea here would be that if cancelled, run_in_worker_thread returns immediately... but the semaphore doesn't get released until the thread actually exits.
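A rough sketch of the desired semantics, using asyncio and threading as stand-ins for trio's primitives (this `run_in_worker_thread` is a hypothetical model, not trio's real implementation): acquire the semaphore before spawning the thread, and release it from the thread itself, so the release happens when the thread actually exits even if the awaiting task was cancelled and returned early.

```python
import asyncio
import threading

async def run_in_worker_thread(fn, *args, semaphore):
    # Acquire before spawning: this limits live threads, not just waiters.
    await semaphore.acquire()
    loop = asyncio.get_running_loop()
    done = loop.create_future()

    def deliver(result):
        # Guard against the future having been cancelled meanwhile.
        if not done.done():
            done.set_result(result)

    def worker():
        try:
            result = fn(*args)
            loop.call_soon_threadsafe(deliver, result)
        finally:
            # Released only when the thread actually exits -- even if
            # the awaiting task has already been cancelled.
            loop.call_soon_threadsafe(semaphore.release)

    threading.Thread(target=worker).start()
    return await done

async def main():
    sem = asyncio.Semaphore(2)  # at most two worker threads alive at once
    return await asyncio.gather(
        *(run_in_worker_thread(lambda x=i: x * x, semaphore=sem)
          for i in range(5)))

print(asyncio.run(main()))  # → [0, 1, 4, 9, 16]
```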


njsmith commented May 24, 2017

Might make more sense to have it accept an arbitrary async context manager that we enter on the way in and exit on the way out. (Example use case: wanting to acquire two semaphores, to enforce both a global and a local limit, like in #6 (comment).) Or it could even accept a tuple of async context managers, just like how with accepts multiple comma-separated context managers. with= would be a good kwarg name, but unfortunately that's a syntax error :-). Maybe limiter=?
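The two-semaphore use case could be sketched like this (again with asyncio stand-ins; `combined_limit` is a hypothetical helper, not anything in trio): a single async context manager that acquires both a global and a per-subsystem semaphore on the way in and releases both on the way out.

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def combined_limit(global_sem, local_sem):
    async with global_sem:        # enforce the global cap...
        async with local_sem:     # ...and the per-subsystem cap
            yield

async def main():
    # Capacities of 1 so holding the slot is observable below.
    global_sem = asyncio.Semaphore(1)
    local_sem = asyncio.Semaphore(1)
    async with combined_limit(global_sem, local_sem):
        # A worker thread would run here, while both slots are held.
        held = (global_sem.locked(), local_sem.locked())
    return held

print(asyncio.run(main()))  # → (True, True)
```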


njsmith commented May 28, 2017

Instead of supporting arbitrary async context managers, I think we should support arbitrary objects that have await obj.acquire() and obj.release(). The advantages are that (1) the context manager exit protocol is quite complicated when it comes to exception handling, and we don't need any of that here, and (2) this makes the release a synchronous operation, which turns out to be very useful because the return-to-trio code runs in a synchronous context, and it seems silly to spawn a whole Task just to call some synchronous __aexit__ methods.
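The proposed duck-typed protocol is minimal: anything with an async acquire() and a synchronous release() qualifies. A hypothetical sketch (asyncio's Semaphore, whose release() also happens to be synchronous, used as a stand-in for a trio primitive):

```python
import asyncio

class CountingLimiter:
    """Hypothetical object satisfying the proposed protocol:
    an async acquire() paired with a *synchronous* release()."""

    def __init__(self, total):
        self._sem = asyncio.Semaphore(total)
        self.borrowed = 0

    async def acquire(self):
        await self._sem.acquire()   # async entry: may need to wait
        self.borrowed += 1

    def release(self):              # sync exit: callable from the
        self.borrowed -= 1          # synchronous return-to-trio path
        self._sem.release()

async def main():
    limiter = CountingLimiter(2)
    await limiter.acquire()
    while_held = limiter.borrowed
    limiter.release()
    return (while_held, limiter.borrowed)

print(asyncio.run(main()))  # → (1, 0)
```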

And let's say it takes the values None, a semaphore, or a tuple of semaphores; None means to use the default semaphore, and if you want completely unconstrained thread spawning you should explicitly pass an empty tuple.

(It also occurred to me that it might make sense to use a threading.Semaphore rather than trio synchronization primitives, since we have a thread here anyway, and it would simplify exception handling during cleanup. But this is a bad idea, because it would mean spawning a thread and then waiting for the semaphore, while the whole idea here is that we want to limit the number of threads that are alive at any given time.)

njsmith added a commit to njsmith/trio that referenced this issue Jun 18, 2017
- New synchronization primitive: CapacityLimiter. Like a Semaphore but
  more specialized. See python-triogh-182 for rationale.

- Add limiter= argument to run_in_worker_thread, that allows one to
  correctly (modulo
  python-trio#6 (comment))
  control the number of active threads.

- Added new function current_default_worker_thread_limiter(), which
  creates or returns a run-local CapacityLimiter, and made
  run_in_worker_thread use it by default when no other limiter= is
  given.

Closes: python-triogh-10, python-triogh-57, python-triogh-156

njsmith commented Aug 8, 2017

This got fixed in #181, not sure why GH didn't close it.

@njsmith njsmith closed this as completed Aug 8, 2017