
Add core::task::yield_now #74335

Closed
wants to merge 1 commit into from

Conversation

yoshuawuyts
Member

Adds core::task::yield_now as a counterpart to std::thread::yield_now. Tracking issue is #74331.

cc/ @rust-lang/wg-async-foundations

Motivation

Schedulers generally infer when tasks should be run to optimize performance. However, sometimes the programmer has more information than the scheduler and may want to manually yield back to the scheduler to enable other code to run. For threads this mechanism is std::thread::yield_now.

This PR introduces a counterpart to thread::yield_now in the form of task::yield_now. The way this works is that the first time the future is polled it returns Poll::Pending and immediately calls Waker::wake. Poll::Pending signals to executors that the current task is not ready and will be suspended, while the call to Waker::wake then immediately puts it at the back of the execution queue.

This somewhat depends on the behavior of executors, but in practice this is compatible with all executors in the ecosystem right now. Both async_std::task::yield_now and tokio::task::yield_now share similar implementations. With neither implementation being bound to any particular runtime, this is a good candidate to start trialing in nightly.

Implementation notes

Much like core::future::ready and core::future::pending this function returns a concrete future rather than being implemented as an async fn. This enables it to be inspected and stored inside structs, which is a useful property for types found in the stdlib.

We also make note of the concept of an "executor", which has precedent in the Wake trait.
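
For reference, below is a minimal sketch of what such a concrete future can look like. It illustrates the yield-once pattern described above; the function body, type, and field names are illustrative rather than the exact code in this PR.

use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll};

/// Illustrative sketch of a yield-once future.
pub fn yield_now() -> YieldNow {
    YieldNow { yielded: false }
}

pub struct YieldNow {
    yielded: bool,
}

impl Future for YieldNow {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            // Second poll: the yield has already happened, so resolve.
            Poll::Ready(())
        } else {
            // First poll: wake ourselves so the executor re-queues the task,
            // then return Pending to hand control back to the executor.
            self.yielded = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}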

References

@rust-highfive
Collaborator

r? @cramertj

(rust_highfive has picked a reviewer for you, use r? to override)

@rust-highfive rust-highfive added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label Jul 14, 2020
@taiki-e
Member

taiki-e commented Jul 14, 2020

Note that the same feature request has been submitted to futures and has been closed: rust-lang/futures-rs#1430
See also smol-rs/smol#43 (comment)

@yoshuawuyts
Member Author

@taiki-e Thanks for sharing both links. They're interesting reads I hadn't seen before. However, I'm not sure to what degree they cover the reasoning I laid out in my opening post. Could you perhaps elaborate?

In particular I have questions about smol-rs/smol#43 (comment). The comment is constructed based on the following assumption:

Let us assume an executor tuned specially for throughput, without caring much about latency.

I'm not aware of any such executor in the ecosystem built on futures? Throughput-oriented runtimes such as rayon don't build on futures as a primitive and to my knowledge don't intend to either. In practice we know that async-std, tokio, and smol either expose this exact implementation of task::yield_now, or are built with the expectation for this to work. Earlier in the same thread @stjepang also comments:

Just to elaborate: if a scheduler doesn't handle yield_now().await correctly, it's broken anyway.

To me this sounds about accurate?


My hope with this PR is to consolidate the multiple implementations we're currently seeing in the ecosystem into the stdlib. These implementations all do the exact same thing: wake the waker, and return Poll::Pending. Consensus on implementations and naming is fairly rare in the ecosystem, so the fact that we have it here is a clear signal to me that it would be a good candidate to start trialing in nightly.

@taiki-e
Member

taiki-e commented Jul 16, 2020

Just to elaborate: if a scheduler doesn't handle yield_now().await correctly, it's broken anyway.

To me this sounds about accurate?

It is not clear what "correctly" means here. Whether it means "it looks like it's working, but is actually a no-op" or "it behaves as the user expected" depends on the executor. (See the issue and PR linked in my previous comment.)

This somewhat depends on the behavior of executors, but in practice this is compatible with all executors in the ecosystem right now. Both async_std::task::yield_now and tokio::task::yield_now share similar implementations. With neither implementation being bound to any particular runtime, this is a good candidate to start trialing in nightly.

Note that these are implementation details. There is no guarantee in the future that those executors will be compatible with the current implementation of yield_now. Actually, the old async-std provided something different than the current implementation.

@withoutboats
Contributor

withoutboats commented Jul 16, 2020

I don't see the issue. This construct yields once and wakes; how the executor it's run in responds to that yielding is that executor's business. Maybe there needs to be a safety note about this, but it isn't a problem for the construct. The OS you're running on could just as easily no-op thread::yield_now.

Main question to me is whether or not this construct is useful.

@taiki-e
Member

taiki-e commented Jul 16, 2020

I don't see the issue.

Oops, sorry! Those are rust-lang/futures-rs#1430 and smol-rs/smol#43 (comment), which I linked in my previous comment.

@withoutboats
Contributor

Sorry, I meant "I don't see the problem." This construct would have well-defined semantics; it's the executor's concern how it handles it, and it's fine if it's different on different executors.

@ghost

ghost commented Jul 16, 2020

@withoutboats On usefulness of yield_now(): https://brokenco.de/2020/07/15/high-throughput-in-rust.html

The “trick” to resolving the issues is seemingly as old as the cooperative multi-tasking world itself: throw some yields on it. In async-std this means task::yield_now().await;, which gives the async reactor a breather to let another task run.

That said, the number of times in my career where a simple yield addressed pesky performance issues is incalculable. 🙀

@nikomatsakis
Contributor

So I think what @taiki-e is saying is something like this:

The intent of invoking task::yield_now().await is to allow other tasks to run, but it cannot directly control that. So it just yields and assumes that the scheduler is going to try and be "fair" and thus allow other tasks to run if they are waiting. However, it doesn't actually know that the scheduler is fair, and thus task::yield_now().await may not have the desired effect on all schedulers. For some schedulers, there might be some extra action that would be required in addition to simply yielding.

In other words, of course all executors will do "something" when yield_now().await is executed, but they might not do what the user wanted them to do.

That said, it'd be nice to have a more concrete counterexample. It seems like async-std used to have a different yield_now implementation, but from what I could tell the only difference was that it invoked some kind of "callback" like crate::rt::RUNTIME.yield_now(); (link). I'm not sure what that was about.

Otherwise, I imagine it would have to be a scheduler that was expecting some kind of "extra-protocol" around futures to help them specify how they'd like to be scheduled, and which behaved poorly with generic futures like yield_now -- but that seems like it's likely to have all kinds of problems.

An example might be a runtime with a special-case check for "a future returned Pending, but its associated waker was already awoken, so I won't return to the reactor or general scheduling loop, I'll just re-run the future immediately". This might be desirable in some situations, I don't know, but it would certainly work against the expected behavior of yield_now.

(More generally, I could imagine that if we start to see a need for more "scheduling hints" of this kind, it's precisely the kind of thing we could add by extending the Context type.)

Anyway I guess the TL;DR is that adding this construct kind of "locks in" (in a soft way...) the idea that when a future returns NotReady, the executor probably ought to go and schedule other things before picking it back up again. I think I feel ok about that, but I can see why it might make @taiki-e nervous.

(@taiki-e, is this a fair summary?)

@the8472
Member

the8472 commented Jul 17, 2020

In other words, of course all executors will do "something" when yield_now().await is executed, but they might not do what the user wanted them to do.

Is that any different from thread::yield_now()? The OS scheduler may also immediately reschedule the current task. It's merely a hint that may have no effect. This has led to some confusion and misuse before, e.g. #46774

See also Linus' rant on misuse of sched_yield() https://www.realworldtech.com/forum/?threadid=189711&curpostid=189752

(More generally, I could imagine that if we start to see a need for more "scheduling hints" of this kind, it's precisely the kind of thing we could add by extending the Context type.)

If they are merely hints then they only serve to improve latency, which shouldn't be as relevant to some sort of throughput scheduler. If progress strictly requires another future to run then a hint is not sufficient and async-aware synchronization mechanisms should be used instead. Additional hints only make sense if you have tiered or priority-based scheduling or want to jump ahead of the queue (e.g. javascript's micro vs. macro tasks), but those would be scheduler-specific.

@withoutboats
Contributor

I also agree that having no effect in some executors is a perfectly fine behavior for this code. Putting something as a method on Context makes it unavailable in async fns.

@nikomatsakis
Contributor

Adding things to Context doesn't make them unavailable to async fn; it simply requires that they be "encapsulated" in some kind of future that is implemented directly.

What I was imagining is that, at some point in the future, Context might be extended with a method like hint_prefer_others, and then we might modify the yield_now future to not only return NotReady but also, first, invoke context.prefer_others().

In any case, my point is more to say "if this were a problem we cared to address, we have some means of doing so", I don't really see it as a problem now. It seems to me that yield_now does just what it says in any case -- it yields to the scheduler. What the scheduler does at that point is up to it.

@tmiasko
Contributor

tmiasko commented Jul 18, 2020

One disadvantage of task yielding as implemented here is that it tends to yield execution only within a scope of the innermost executor, rather than the outermost, which arguably would be more useful.

In particular, I am referring to embeddable executors like FuturesUnordered, which use their own wakers, and their own scheduling queue which is polled until empty. When combined with yielding as implemented here, they won't yield processing to tasks outside, only to those within. If there are no other ready tasks in FuturesUnordered, yield_now has the opposite effect to the desired one. For additional discussion see rust-lang/futures-rs#2053. (I am referring to FuturesUnordered as it existed before it was equipped with an arbitrary polling limit).

An actual executor would be in position to provide a yielding primitive with stronger guarantees, working in conditions described above, e.g., by stashing the wakers for invocation on the next tick of the executor rather than doing it immediately.

I think that the documentation for yield_now should describe how the implementation works, mention that actual behaviour is executor specific, and yielding is advisory in nature.

@ghost

ghost commented Jul 18, 2020

@nikomatsakis

It seems like async-std used to have a different yield_now implementation, but from what I could tell the only difference was that it invoked some kind of "callback" like crate::rt::RUNTIME.yield_now(); (link), I'm not sure what that was about.

That was a hack I did until we implemented yielding 'properly'. So the condition for 'proper' yielding is: Has the task been woken while it was being polled? If yes, then the task is yielding. For async-std, that means we put the task at the back of the task queue.

We didn't have an elegant way of doing this check before so I just hacked up a poor version of the yielding condition: Has async_std::task::yield_now() been called while the task was being polled?

@tmiasko

One disadvantage of task yielding as implemented here is that it tends to yield execution only within a scope of the innermost executor, rather than the outermost, which arguably would be more useful.

The solution seems straightforward to me: we need to add the check to FuturesUnordered because embeddable executors should respect yielding, too. So if a future added to FuturesUnordered is woken before it finishes polling, that is an indication that the whole FuturesUnordered should yield too.
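
For concreteness, a minimal single-threaded run loop applying the "woken while being polled" check described above might look like the sketch below; the task representation, per-poll waker, and parking behavior are simplified assumptions rather than async-std's or FuturesUnordered's actual code.

use std::collections::VecDeque;
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Records whether the task's waker was invoked.
struct WokenFlag(AtomicBool);

impl Wake for WokenFlag {
    fn wake(self: Arc<Self>) {
        self.0.store(true, Ordering::SeqCst);
    }
}

// If a task returns Pending but was woken during its own poll, treat that as
// a yield and push it to the back of the queue so other tasks run first.
fn run(mut queue: VecDeque<Pin<Box<dyn Future<Output = ()>>>>) {
    while let Some(mut task) = queue.pop_front() {
        let flag = Arc::new(WokenFlag(AtomicBool::new(false)));
        let waker = Waker::from(flag.clone());
        let mut cx = Context::from_waker(&waker);
        match task.as_mut().poll(&mut cx) {
            Poll::Ready(()) => {}
            Poll::Pending if flag.0.load(Ordering::SeqCst) => {
                // Woken while being polled: the task is yielding.
                queue.push_back(task);
            }
            Poll::Pending => {
                // A real executor would park the task here until its waker
                // fires; omitted for brevity.
            }
        }
    }
}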

@bors
Contributor

bors commented Jul 19, 2020

☔ The latest upstream changes (presumably #74493) made this pull request unmergeable. Please resolve the merge conflicts.

@yoshuawuyts
Member Author

Rebased to resolve the merge conflict, and addressed @jyn514's feedback.

@bors
Contributor

bors commented Jul 28, 2020

☔ The latest upstream changes (presumably #73265) made this pull request unmergeable. Please resolve the merge conflicts.

@nikomatsakis
Contributor

So I think that @stjepang's comment is interesting -- it seems to be mentioning an "implied" or "hidden" heuristic of "if the task was awoken during its poll, we should not re-awaken it immediately". This feels like it merits mention in the documentation of the Future trait as a kind of "suggestion" for executors. I guess that would in some sense address @tmiasko's concern in that the "implicit protocol" that is implied by yield_now becomes documented, no? (I guess one might be concerned that the protocol may not be the right one.)

@Muirrum Muirrum added S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Aug 13, 2020
@JohnCSimon
Member

@yoshuawuyts
Ping from triage - can you address the merge conflicts?

@jonas-schievink jonas-schievink added the T-libs-api Relevant to the library API team, which will review and decide on the PR/issue. label Sep 13, 2020
@yoshuawuyts
Member Author

Updated to resolve merge conflicts!


it seems to be mentioning an "implied" or "hidden" heuristic of "if the task was awoken during its poll, we should not re-awaken it immediately". This feels like it merits mention in the documentation of the Future trait as a kind of "suggestion" for executors.

@nikomatsakis great point; will file a separate PR for that.

@crlf0710
Member

crlf0710 commented Oct 8, 2020

Triage: It seems this is kind of blocked... I'm going to mark this as such and add corresponding wg labels.

@crlf0710 crlf0710 added S-blocked Status: Marked as blocked ❌ on something else such as an RFC or other implementation work. and removed S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. labels Oct 8, 2020
@jyn514 jyn514 added the A-async-await Area: Async & Await label Oct 8, 2020
@yoshuawuyts
Member Author

yoshuawuyts commented Oct 8, 2020

My impression so far is that this can be useful in some cases, but it doesn't seem like they come up very often.

That actually surprised me, but I think that may be because we have different backgrounds. I suspect the difference between async Rust and other languages is that cooperative scheduling in Rust is only required to enable other tasks to run intermittently, but not used as a tool to guarantee correctness. For example in Node.js process.nextTick is used to yield back to the event loop and force callbacks to always be async in order to guard against data races (ref). Because Rust has the borrow checker it doesn't have data races and doesn't need to use manual yields for that purpose.

Still though, that does still leave the use case open for manually yielding in order to break up computations. I suspect this may be most desired in systems that have specific time limits. For example in web browsers (the example I'm most comfortable with) no computation should ever exceed 8ms total, or else frames are dropped because the main thread doesn't get the opportunity to render. This makes it essential to be able to yield back to the event loop periodically.

Though taking a step back to this PR: I guess what we're indeed doing here is more deeply tied to the question: "How does cooperative scheduling work in async Rust?" which I guess we've only ever partially defined. I think this is a topic that probably warrants a blog post.

@the8472
Member

the8472 commented Oct 8, 2020

Still though, that does still leave the use case open for manually yielding in order to break up computations. I suspect this may be most desired in systems that have specific time limits. For example in web browsers (the example I'm most comfortable with) no computation should ever exceed 8ms total, or else frames are dropped because the main thread doesn't get the opportunity to render. This makes it essential to be able to yield back to the event loop periodically.

This may be true in a single-threaded environment. But in a multi-threaded environment the solutions usually look different, e.g. separate task pools (compute and IO, for example) or tasks marked as background jobs which can only take up to a certain fraction of the available threads, so that threads remain available for time-critical tasks.

Yielding seems more like a hack.

@yoshuawuyts
Member Author

yoshuawuyts commented Oct 13, 2020

[...] in a multi-threaded environment the solutions usually look different. [...] separate task pools [...] or background jobs [...]

@the8472 Yes, I'm aware. async-std supports spinning off cpu-bound work into a separate pool, and recently progress has been made on assigning prioritization levels to tasks. Similar functionality is available in JavaScript as well: RequestIdleCallback enables scheduling low-priority tasks, and WebWorkers enable spawning threads, with packages such as piscina providing pooling support on top of that.

And even then it's common to periodically yield back to the runtime. Spawning tasks is not free. Runtimes are different and it can be risky to make assumptions about the scheduling behavior of different runtimes. Especially in Rust, where a Future is different from a Task, it's useful to be able to yield back to the runtime from within a Future that did not spawn the task it's running in.

Being able to voluntarily yield back to the executor is a basic capability for any system that uses non-preemptive concurrency. Even a search for "cooperative scheduling" will return the following definition as the first result:

Cooperative multitasking, also known as non-preemptive multitasking, is a style of computer multitasking in which the operating system never initiates a context switch from a running process to another process. Instead, processes voluntarily yield control periodically or when idle or logically blocked in order to enable multiple applications to be run concurrently.

Wikipedia: Cooperative multitasking, emphasis mine.

I think it's important that we pay our dues and ensure that providing a way to yield back to the executor works correctly. So far the review process in this thread has been incredibly helpful in assuring that. But I don't know how else to say that we are in fact missing basic functionality here.

@yoshuawuyts
Member Author

This came up again in Zulip today in the context of OS development.

@nikomatsakis
Contributor

@yoshuawuyts

it seems to be mentioning an "implied" or "hidden" heuristic of "if the task was awoken during its poll, we should not re-awaken it immediately". This feels like it merits mention in the documentation of the Future trait as a kind of "suggestion" for executors.

@nikomatsakis great point; will file a separate PR for that.

Did you ever file that PR? I feel like actually this change belongs in this PR, because this PR would be the one establishing and committing to that convention.

@yoshuawuyts
Member Author

Did you ever file that PR?

@nikomatsakis I did not; agree that that would make sense — though wondering if it may actually warrant an RFC since it specs out a part of Rust's Futures model?

@nikomatsakis
Contributor

Yeah, that's a good question. I guess I would leave that up to @rust-lang/libs to decide as it feels like a libs matter to me.


// Most futures executors are implemented as a FIFO queue, so all this
// future does is re-schedule the future back to the end of the queue,
// giving room for other futures to progress.
Member

Most futures executors are implemented as a FIFO queue

Shouldn't this assumption be mentioned in the doc comments?

@the8472
Member

the8472 commented Nov 13, 2020

This is modeled after thread::yield_now(), but I think that predates the std::hint module. Maybe it would make more sense to place it there to make it clear that this is a hint to the executor and does not guarantee that other tasks get to run (since that's up to the executor).

@cramertj
Member

cramertj commented Dec 8, 2020

r? @sfackler

I've already voiced my opinion in the comments above, and I do think this is important to resolve. I've assigned @sfackler in the hopes that they can represent the libs team here.

I would also still love input from @carllerche or someone else familiar with the tokio executor internals to give input on how feasible it would be to add the above guarantee to tokio.

@rust-highfive rust-highfive assigned sfackler and unassigned cramertj Dec 8, 2020
@cramertj
Member

cramertj commented Dec 10, 2020

I discussed this issue with @carllerche today. They're skeptical about the idea of adding yield_now. yield_now was previously available in Tokio, but has since been largely obsoleted by an alternative budget-based strategy for cooperative task yielding first implemented here and described in detail in this blog post. This strategy avoids the need for async fn writers to manually add yield points, and they're considering deprecating yield_now, as it's no longer useful. If yield_now were added to the standard library, they would instruct Tokio users not to use it.

This strategy could also be made to cooperate with intermediate task containers such as FuturesUnordered. Here's a sketch of what it might look like to integrate this approach in the standard library.

// Sketch only: assumes a hypothetical `budget: Option<&'a mut usize>` field on `Context`.
impl<'a> core::task::Context<'a> {
    fn from_waker_and_budget(waker: &'a Waker, budget: &'a mut usize) -> Self { ... }
    // consider deprecating `from_waker`
    fn has_budget(&self) -> bool {
        match self.budget {
            // No budget attached: never forces a yield.
            None => true,
            Some(ref x) => **x > 0,
        }
    }
    fn spend_budget(&mut self) -> Poll<()> {
        if self.has_budget() {
            if let Some(budget) = self.budget.as_mut() {
                **budget -= 1;
            }
            Poll::Ready(())
        } else {
            Poll::Pending
        }
    }
}

"Leaf futures" (those that wait on asynchronous work, including async locks, channels, IO objects, timers, etc.) would then be in charge of calling spend_budget, and would return Pending when the budget is exhausted. As can be seen in the PR implementing this in tokio, this would be a pretty easy addition (just a matter of sprinkling ready!(cx.spend_budget()); in the appropriate places) and would avoid the need for end users to modify their code. Users writing CPU-heavy work could similarly spend_budget every time through a loop to ensure that they yield to other tasks after a certain number of steps.

Personally, I think the best next step would be for interested folks to discuss this at higher bandwidth, perhaps in the async foundations zulip channel, and come up with a comprehensive design for task pre-emption which could then be written up as an RFC. I think there's risk to adding one-off functionality like yield_now without a comprehensive view of how we want fairness to work, both from the perspective of the user and from the perspective of the executor implementor.

P.S. As for the guarantee I suggested about executors running other ready tasks before the current one if its waker is invoked during its poll, this may not be satisfied by the Tokio executor, which uses some optimizations that mean that the most recently ready task may be polled first, not placed at the end of some queue.

Edit: we'd also probably want a plain budget() -> &'a mut usize or perhaps cx.child_with_waker(...)/cx.child_with_budget(...) so that users could construct a child context which shared the budget but with a different waker or vice versa.

Comment on lines +7 to +8
/// Calling this function calls move the currently executing future to the back
/// of the execution queue, making room for other futures to execute. This is
Contributor

Calling this function doesn't do anything, awaiting the returned future does. The rest of this sentence sounds like a definitive guarantee, whereas the comment in the code says "most executors", which is much weaker.

This doc should probably be rephrased along the lines of:

Awaiting the future returned by this function yields control back to the executor, giving it an opportunity to run other tasks. Executors typically move the current task to the back of their execution queue.

@bors
Contributor

bors commented Dec 11, 2020

☔ The latest upstream changes (presumably #79927) made this pull request unmergeable. Please resolve the merge conflicts.

Note that reviewers usually do not review pull requests until merge conflicts are resolved! Once you resolve the conflicts, you should change the labels applied by bors to indicate that your PR is ready for review. Post this as a comment to change the labels:

@rustbot modify labels: +S-waiting-on-review -S-waiting-on-author

@yoshuawuyts
Member Author

@yoshuawuyts [...] it may actually warrant an RFC since it specs out a part of Rust's Futures model?

@cramertj I think the best next step would be [to] come up with a comprehensive design for task pre-emption which could then be written up as an RFC.

Good, that's what I wanted to confirm. The shared designs certainly are interesting, and I agree we should talk about this more. Going to close this issue for now then in favor of speccing out Futures pre-emption through the RFC process. Thanks everyone!

@yoshuawuyts yoshuawuyts deleted the task-yield-now branch December 11, 2020 12:32
@RobbieMcKinstry

I would be very interested in the "comprehensive design for task pre-emption" discussion. This is really cool stuff, everyone! :)

@khuey
Contributor

khuey commented Mar 5, 2021

Giving the runtime the opportunity to schedule other tasks was debated at length in this issue, but one thing I didn't see raised is the ability to use yield_now to dispose of the current task.

We have what is effectively a database where a lot of the internals are implemented with futures. We wrap everything we dispatch to the runtime in a small wrapper future struct that is connected to the specific query that triggered it. Each time the outermost future is polled, it checks to see if the query has been cancelled, and if it has, it returns Poll::Ready(()) without doing any additional work.

For us something like task::yield_now serves not just as an opportunity to schedule other tasks but also as an opportunity to effectively cancel the current task.
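
For context, a rough sketch of that wrapper pattern might look as follows; the type and field names are illustrative, not the actual code described above.

use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Context, Poll};

// Wrapper future that checks a cancellation flag on every poll and resolves
// early if the originating query has been cancelled.
struct CancelOnQuery<F> {
    cancelled: Arc<AtomicBool>,
    inner: Pin<Box<F>>,
}

impl<F: Future<Output = ()>> Future for CancelOnQuery<F> {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.cancelled.load(Ordering::SeqCst) {
            // The query was cancelled: finish without doing any more work.
            return Poll::Ready(());
        }
        // Otherwise forward to the wrapped future. A yield point such as
        // task::yield_now() inside `inner` forces this check to run again.
        self.inner.as_mut().poll(cx)
    }
}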
