
RFC: add futures to libcore #2395

Closed
wants to merge 11 commits into from

Conversation

Member

@aturon aturon commented Apr 6, 2018

⚠️⚠️ This RFC is superseded by #2418 ⚠️⚠️


This RFC proposes to add futures to libcore, in order to support the first-class async/await syntax proposed in a companion RFC. To start with, we add the smallest fragment of the futures-rs library required, but we anticipate follow-up RFCs ultimately bringing most of the library into libcore (to provide a complete complement of APIs).

The proposed APIs are based on the futures crate, but with two major changes:

  • The use of pinned types to enable borrowing within futures.
  • Removing the associated Error type (and adjusting combinators accordingly), in favor of just using Item = Result<T, E> instead; see the sketch after this list. The RFC includes an extension trait to provide conveniences for Result-producing futures as well.
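A rough before/after sketch of that second change (signatures simplified and shown only in comments; `OldFuture`/`NewFuture` are placeholder names for this comparison, and the full proposed definitions are in the rendered RFC):

```rust
// Shape of the existing futures crate: the error channel is built into the trait.
trait OldFuture {
    type Item;
    type Error;
    // fn poll(&mut self, cx: &mut task::Context) -> Poll<Self::Item, Self::Error>;
}

// Proposed shape: a single result type; fallible futures just use `Result`.
trait NewFuture {
    type Item; // e.g. `Result<T, E>` for a fallible future
    // fn poll(self: Pin<Self>, cx: &mut task::Context) -> Poll<Self::Item>;
}
```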

Rendered

@aturon aturon added the T-libs-api Relevant to the library API team, which will review and decide on the RFC. label Apr 6, 2018
}

/// Check whether this error is the `shutdown` error.
pub fn is_shutdown() -> bool {
Contributor

This is missing a &self receiver

```rust
pub trait Future {
/// The type of value produced on completion.
type Item;
Member

We were discussing this last week: I believe Output is more in line here with existing conventions (e.g. Fn traits, operator overloading traits), and Item would be more appropriate for Stream (by association with Iterator).

Outcome. Just kidding

Contributor

The main reason to use Item, from my perspective, is that it is 2 characters shorter than Output. For Fn traits this doesn't matter, since they have -> syntax, but Future, like Iterator, will often be bound as Future<Item = ?>.

I don't have a strong opinion about this choice.

Contributor
@Centril Centril Apr 9, 2018

@withoutboats We could do some sort of Future() -> R deal for all traits of the form:

trait TraitName {
    type Output;
    // other stuff does not matter...
}

This might have a knock on benefit for people who define their own "Fn-like" traits.

Personally I think 2 more characters for a more apt name is the right way to go in this instance.

Contributor

I like the Fn => Future, Iterator => Stream correspondence, and I think using Output will avoid confusion in places where futures/streams are being mixed.

Member

@withoutboats I am in strong support of keeping it Item for the reason you mentioned. As someone who works w/ futures all day, I type Item constantly. The shorter name makes a big difference here.

The fn analogy doesn't make sense. A future is not a function. It is a value that will complete in the future.

Item matches Iterator which is closer to what a future is.

Contributor

The fn analogy doesn't make sense. A future is not a function. It is a value that will complete in the future.

The analogy does make sense: both a Fn and a Future produce a single Output. Both an Iterator and a Stream produce multiple Items.

Member

To make my position a bit clearer: I think Item made more sense when we had something like Result<Self::Item, Self::Error>, since it was "the item in the result".

Member
@cramertj cramertj Apr 17, 2018

@withoutboats

For Fn traits this doesn't matter, they have -> syntax, but Future, like Iterator, will often be bound Future<Item = ?>.

If we wanted to be really wild, we could make -> sugar work for the Future trait:

fn foo() -> impl Future -> u32 {
    println!("Foo called");
    async {
        println!("Foo polled");
        5
    }
}

fn serve(f: impl FnMut(Request) -> impl Future -> Response) {
    // serve http requests by calling `f`
    ....
}

/// of data on a socket, then the task is recorded so that when data arrives,
/// it is woken up (via `cx.waker()`). Once a task has been woken up,
/// it should attempt to `poll` the future again, which may or may not
/// produce a final value at that time.
Member

I'd love to have it made explicit that it's not an error to poll a future even before a registered 'interest' has caused a task wakeup.

What would be the use case for polling a future multiple times like that?

(Intuitively, I would expect that assuming that it doesn't happen allows for some implementation simplification, e.g. one doesn't need to guard against multiple wakeups being scheduled for a single I/O event.)

Member

The main one I can think of is implementing select and join, you don’t know which of your sub-futures caused the wakeup so you have to poll them all to see if any are ready yet.
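A minimal sketch of that situation, written against the std shape the trait eventually took (`Pin<&mut Self>` plus `Context`) and assuming both children are `Unpin` so no unsafe projection is needed:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

/// Which of the two futures finished first.
enum Either<A, B> {
    Left(A),
    Right(B),
}

/// A two-way select over `Unpin` futures.
struct Select<A, B> {
    a: A,
    b: B,
}

impl<A, B> Future for Select<A, B>
where
    A: Future + Unpin,
    B: Future + Unpin,
{
    type Output = Either<A::Output, B::Output>;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // The wakeup carries no information about which child scheduled it,
        // so both children are polled on every call: O(N) in the number of children.
        if let Poll::Ready(v) = Pin::new(&mut self.a).poll(cx) {
            return Poll::Ready(Either::Left(v));
        }
        if let Poll::Ready(v) = Pin::new(&mut self.b).poll(cx) {
            return Poll::Ready(Either::Right(v));
        }
        Poll::Pending
    }
}
```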

I would think that such O(N) "poll all the inner futures" behaviour would be considered problematic. Hasn't there been some work to help a future figure out the cause of a wakeup?

Member

IMO select and join should only be used for a very small number of futures, e.g. selecting a single future with a timeout to cancel it. If you have many sub-parts then these should be spawned off to separate tasks (and so be woken separately by the Executor) and a more efficient way to notify when all/one is complete should be used.

Member

As noted, select and join are the ones that spring to mind. My main goal is to make it very clear whether or not it's permitted (and the recommended way of doing select if not - I can think of ways, but they're not great) as this could cause a nasty ecosystem split if some authors make the simplifying assumptions @HadrienG2 mentions and others require otherwise.

@aturon

This comment has been minimized.

@sbstp

sbstp commented Apr 6, 2018

Depending on how likely SpawnErrors are to occur, it might be nice to have a provided alternative to spawn that simply panics, because a lot of users might end up simply calling .unwrap() on the result.

I guess that beyond the trait, a particular executor could provide this method if it does not have a limit on the number of tasks and such.

where E: BoxExecutor;

/// Get the `Waker` associated with the current task.
pub fn waker(&self) -> &Waker
Contributor

missing ;


// this impl is in `std` only:
impl From<Box<dyn Future<Item = ()> + Send>> for Task {
pub fn from_box(task: ) -> Task;
Contributor

missing argument type? or maybe an indication that the argument type isn't interesting (aka _) ?

@aturon
Member Author

aturon commented Apr 6, 2018

@sbstp Note, such a convenience is provided by the Context::spawn method.


impl<'a> Context<'a> {
/// Note: this signature is future-proofed for `E: Executor` later.
pub fn new<E>(waker: &'a Waker, executor: &'a mut E) -> Context<'a>
Contributor

I don't understand how this is the case. If this is stabilised with an argument that's going to change, how is that future proof? Is it because Executor will be a supertype/parent of BoxExecutor, so the type is only being opened up to things it couldn't have been before?

Member Author

Along those lines, yes -- because there will be a blanket impl from BoxExecutor to Executor.

Contributor

Thanks, I was confused on my first read-through, but I realise now that it's an argument that will effectively be a subtype of the eventual trait.

@aturon
Member Author

aturon commented Apr 6, 2018

One note concerning vetting and stabilization, which is very important: the basic shape of the futures API changes (regarding the task system) have been vetted for a couple of months on 0.2 preview releases, and integrations. The pinning APIs have been quasi-formally checked. And the proposed stabilization plan here will give us an additional 6-7.5 months of experience with these APIs before they are shipped on stable.


/// Represents that a value is not ready yet.
///
/// When a function returns `Pending`, the function *must* also

An alternative design that should be at least mentioned in the alternatives section is to make Poll::Pending wrap some "I have scheduled my task!" token, which a future can obtain either from Poll::Pending being returned from one of its child futures or by explicitly constructing such a token (using some method/API whose naming would make it painfully clear it's not okay to just call it without actually scheduling the task to be woken up by some external mechanism). Yay learnability, yay typesystem. This was proposed (and, I assume, rejected) before, so it should make sense to link to that discussion from this RFC.
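A hypothetical, self-contained sketch of that alternative (none of these names are proposed anywhere; they only illustrate the token idea):

```rust
// `Poll::Pending` carries a proof token that can only be obtained by
// asserting that a wakeup has actually been arranged.
enum Poll<T> {
    Ready(T),
    Pending(WakeupScheduled),
}

struct WakeupScheduled(());

impl WakeupScheduled {
    /// Deliberately scary name: the caller must have registered the current
    /// task with some external wakeup source before calling this.
    fn i_promise_the_task_will_be_woken() -> WakeupScheduled {
        WakeupScheduled(())
    }
}

// A leaf future registers interest somewhere, then proves it did so.
fn leaf_poll() -> Poll<u32> {
    Poll::Pending(WakeupScheduled::i_promise_the_task_will_be_woken())
}

// A combinator can only return `Pending` by forwarding a child's token.
fn combinator_poll() -> Poll<u32> {
    match leaf_poll() {
        Poll::Ready(v) => Poll::Ready(v),
        Poll::Pending(token) => Poll::Pending(token),
    }
}
```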

@HadrienG2 HadrienG2 Apr 7, 2018

I agree, it is really nice when strong "you have been warned" wording in comments can be replaced with a little type hack which ensures that the API simply cannot be used incorrectly.

Well, incorrect usage is still possible if you have multiple child futures and forget to poll some of them, but it seems much less likely.

@sbstp

sbstp commented Apr 6, 2018

@aturon Context is a lower-level API used inside of manual Future::poll implementations, whereas Executor is a higher level API used outside of Future::poll. So the Context::spawn method is not available everywhere Executor is, at least to my understanding.

@HadrienG2 HadrienG2 left a comment

Although my comments may sound like a very confused "Hey, what's going on?", they are only about making sure that the motivations and design of whatever gets engraved in RFC stone is made crystal clear.

I'm actually pleasantly surprised by how far Rust's futures have progressed towards being something that one can reasonably describe on a ~1k line budget. Good job!

/// the associated task onto this queue.
fn wake(&Arc<Self>);
}
```

You may want to clarify this part a bit more. At least two points are currently unclear for the Tokio-uninitiated:

  • The description "Executors do so by implementing this trait." suggests that the Wake trait is directly implemented by an Executor impl. However, it must actually be implemented by a proxy type spawned by the executor, otherwise "wake()" wouldn't be able to tell which task must be awoken.
  • Along the way, a word or two could probably be added on the rationale for this complex self type (Why is a shared reference to an Arc needed? Why is &self not enough?).

Contributor

Along the way, a word or two could probably be added on the rationale for this complex self type (Why is a shared reference to an Arc needed? Why is &self not enough?).

This is something I was wondering.

This RFC describes what the proposed API is, but it doesn't do a very good job of explaining why the API is the way it is. Why do we need these Arcs? It describes an alternative with an unsafe interface that acts essentially like an Arc with shared ownership through cloning, and mentions the ArcObj alternative, but it doesn't really spell out why you need this Arc-like behavior in the first place.

I think it would help to describe a little bit more about exactly how this executor/waker interaction is supposed to work; possibly with an example of a very basic, simple executor which is a single-threaded event loop, and if that doesn't fully clarify why you need Arc then describe why it is that more powerful thread-pool based executors need this Arc-like behavior.

@HadrienG2 HadrienG2 Apr 7, 2018

After reading the RFC a couple more times, I think I have finally reverse-engineered in my mind why one would want to use an Arc here.

  1. Context only takes a shared reference to a Waker, so all it can provide to the executing future is a shared reference.
  2. But the future may potentially need to be awoken from a different, asynchronous code path, in which case there is a need for an owned Waker value that can be moved around.
  3. There should thus be a way to clone the Waker and potentially send the clone to a different thread, ergo, we need an Arc.

Of those, only point 1 is somewhat controversial. We could have imagined an alternate design in which the Context owns a Waker, and one can get it by consuming the Context (after all, do you still need that context if you are going to suspend the future?).

But this means that a new Waker must be created every time a future is polled, even if it is unused, and atomic increments/decrements are not cheap so we don't want to do that. Alternatively, we might provide Context with a way to create a Waker, but that would ultimately get quite complicated.

So considering that there are use cases for futures that can be awoken from multiple places (anything that combines multiple futures into one: join(), select()...), an Arc was probably deemed to be the most appropriate implementation.

Definitely not obvious, and still does not explain why this implementation needs to leak in the interface all the way to the self type. I agree with you that this RFC would benefit from explaining a bit more the "why", rather than mostly the "what".

/// This function is unsafe to call because it's asserting the `UnsafeWake`
/// value is in a consistent state, i.e. hasn't been dropped
unsafe fn wake(self: *mut Self);
}

Again, please clarify to the tokio-uninitiated why raw pointers are needed here and more usual self types would have been ineffective.

pub fn wake(&self);
}

impl Clone for Waker { .. }

If I understand the purpose of Waker, you will probably want to assert that it is Send somewhere for completeness (or just mention it in the documentation).

/// means that `spawn` is likely, but not guaranteed, to yield an error.
fn status(&self) -> Result<(), SpawnError> {
Ok(())
}

This API seems intrinsically racey, as its careful doc admits. What is its intended use? To avoid moving Tasks into an Executor without good reason?

pub fn is_shutdown(&self) -> bool;

// additional error variants added over time...
}
@HadrienG2 HadrienG2 Apr 6, 2018

With the current definition of SpawnError, failing to spawn a Task because an Executor with a bounded queue is overloaded means that said Task is lost forever. Is that intended? If not, you may want to provide a way to retrieve the Task which failed to spawn.

```

As stated in the doc comment, the expectation is that all executors will wrap
their task execution within an `enter` to detect inadvertent nesting.

What can go wrong in this case? Why is it detected and reported as an error?

Contributor

This API grew from our experience of users using a blocking executor inside a future that is itself being executed on some blocking executor. In most cases, the app would just deadlock, since the future.wait() call would park the thread until it was ready, when the only way it could know it was ready was epoll on the same thread.

If we could detect blocking operations of all kinds while in the context of an Enter guard, that'd be amazing!
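A hypothetical sketch of how such nesting detection can work; this is not the futures crate's actual implementation, just the thread-local-flag idea behind an `enter` guard:

```rust
use std::cell::Cell;

thread_local! {
    // Set while an executor is running on this thread.
    static ENTERED: Cell<bool> = Cell::new(false);
}

/// Guard held by an executor for the duration of its run loop.
struct Enter;

fn enter() -> Result<Enter, &'static str> {
    ENTERED.with(|entered| {
        if entered.get() {
            // A second executor tried to start on the same thread: likely a
            // nested `wait()`/`block_on()` that would deadlock.
            Err("an executor is already running on this thread")
        } else {
            entered.set(true);
            Ok(Enter)
        }
    })
}

impl Drop for Enter {
    fn drop(&mut self) {
        // Leaving the executor re-enables entering on this thread.
        ENTERED.with(|entered| entered.set(false));
    }
}
```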

@HadrienG2 HadrienG2 Apr 7, 2018

Here is an attempt at minimally phrasing this rationale in the doc comments or the RFC.

"Doing so ensures that executors aren't accidentally invoked in a nested fashion. When that happens, the inner executor can block waiting for an event that can only be triggered by the outer executor, leading to a deadlock."

(I would also love for OSes to stop overusing blocking APIs so much. When even reading from or writing to a memory address is a potentially blocking operation, it gets kind of ridiculous...)


```rust
trait Executor {
fn spawn(&mut self, task: Future<Item = ()> + Send) -> Result<(), SpawnError>;

Since you are talking about taking dyn by value, shouldn't there be a "dyn" keyword here? (In general, use of this keyword is somewhat inconsistent in this RFC)

///
/// This method will panic if the default executor is unable to spawn.
/// To handle executor errors, use the `executor` method instead.
pub fn spawn(&mut self, f: impl Future<Item = ()> + 'static + Send);

Why is this signature inconsistent with that of executor::spawn()?

/// threads or event loops. If it is known ahead of time that a call to
/// `poll` may end up taking awhile, the work should be offloaded to a
/// thread pool (or something similar) to ensure that `poll` can return
/// quickly.
@HadrienG2 HadrienG2 Apr 6, 2018

You may or may not want to clarify that a Future-aware thread pool is needed here. Most thread pool libraries currently available on crates.io either do not provide a way to synchronize with the end of the computation or do so by implicitly blocking, neither of which is appropriate here. Only some thread pools will be usable here.

Contributor

It should be possible to offload blocking work to any kind of thread pool, and "synchronize" the result back to a future, using something like futures' current oneshot channel:

You send the arguments of your computation, and a oneshot::Sender<ReturnType>, to the other thread. Once computation is completed, you'd call tx.send(ret). In your Future, you'd just await the result from the oneshot::Receiver.
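A small sketch of that pattern; the module paths (futures::channel::oneshot, futures::executor::block_on) assume a modern release of the futures crate:

```rust
use futures::channel::oneshot;
use futures::executor::block_on;

fn spawn_blocking_work() -> oneshot::Receiver<u64> {
    let (tx, rx) = oneshot::channel();
    std::thread::spawn(move || {
        // Expensive, blocking computation runs on its own thread...
        let result: u64 = (0..1_000_000u64).sum();
        // ...and sending the result wakes the task awaiting the receiver.
        let _ = tx.send(result);
    });
    rx
}

fn main() {
    // In an async context this would simply be `rx.await`; `block_on` is
    // used here only to keep the sketch self-contained.
    let value = block_on(spawn_blocking_work()).unwrap();
    println!("{value}");
}
```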

You're right, this can be done with any kind of fire-and-forget API (and I think this is how current futures-aware thread pools are implemented). It may not be immediately obvious, however, and will not work with blocking APIs (scoped_threadpool, rayon...).

Again, I'm not 100% sure if clarifying this part is needed, just thought it would be another possible area for design documentation improvements.

[guide-level-explanation]: #guide-level-explanation

The `Future` trait represents an *asynchronous* computation that may eventually
produce a final value, but don't have to block the current thread to do so.
@HadrienG2 HadrienG2 Apr 6, 2018

The single most common futures-rs beginner mistake is to assume that a task is spawned in the background as soon as a future is created. So right from the introduction, we may want to stress the fact that Rust's futures are executed lazily, and not eagerly as in most other futures implementations.

Here is a possible wording tweak that subtly stresses this point: "...an asynchronous and lazy computation...".

Also, there is a small don't -> doesn't typo in there.

Would it make sense to add #[must_use] then?

I definitely think so. I cannot think of any valid use case for creating a future and dropping it afterwards without having done anything else with it, and it is a common mistake.
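A minimal sketch of what that looks like for a hand-written future type (`GetNum` is just a placeholder; putting the attribute on the trait itself would be an alternative):

```rust
// Dropping the value without using it triggers the `unused_must_use` lint,
// mirroring what `Result` already does.
#[must_use = "futures do nothing unless polled"]
struct GetNum;

fn get_num() -> GetNum {
    GetNum
}

fn main() {
    get_num(); // warning: unused `GetNum` that must be used
}
```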


## Prelude

The `Future` and `FutureRes` traits are added to the prelude.
Contributor

Why are they added to the prelude?

Member

At least, could they be moved to std::futures::prelude (or whatever module they will be in)?

I have reservations about this. Could this section be extended with some rationale?

Contributor
@Centril Centril Apr 7, 2018

To provide some context for prelude-or-not evaluation (I don't have an opinion here yet), here's a snippet from std::prelude:

The prelude is the list of things that Rust automatically imports into every Rust program. It's kept as small as possible, and is focused on things, particularly traits, which are used in almost every single Rust program.

So we should decide whether Futures are "used in almost every single Rust program".

Member

I would say that most Rust programs would not need Future or FutureRes. Not even all async programs will need it (if you use mio directly, you have no need for Future).

The worst aspect is that I anticipate that including this in the prelude will cause problems, as I described in this comment.

@carllerche
Member

carllerche commented Apr 7, 2018

Given that futures 0.2 has already been released and has not had any significant usage yet, it seems very rushed to include this in core, especially in the prelude and especially given how many fns are on Future and FutureRes.

edit: Removed "I am not going to get involved in the details of this RFC" because that clearly wasn't true.

}
```

We need the executor trait to be usable as a trait object, which is why `Task`
Contributor

It seems like the additional information about Task isn't actually in the no_std section. Could it be expanded upon, and why it exists instead of Box<Future>? (I realize the reason is because there is no Box in no_std.)

subtrait for that case, equipped with some additional adapters:

```rust
trait FutureRes<T, E>: Future<Item = Result<T, E>> {
Member

Could this be spelled out? FutureResult?

Contributor

I think it should; we don't use these kinds of short forms elsewhere in libstd, right?

Contributor

FutureResult sounds like something that comes out of a Future (like LockResult).

BikeShed::new("FallibleFuture").build()

Contributor

@durka agreed, but why not just reverse the order, ResultFuture?

subtrait for that case, equipped with some additional adapters:

```rust
trait FutureRes<T, E>: Future<Item = Result<T, E>> {
Contributor

Would trait aliases be a possible way of doing this, instead of a separate trait? Looks like it continues to see progress.

Doesn't this mean that every corresponding method would have to be added to the Future trait with a where Self: FutureRes bound? Or is there a cleaner way to do this with trait aliases?

@carllerche
Member

carllerche commented Apr 7, 2018

To provide more context on my reservation for adding Future and FutureRes to prelude: in my personal usage of futures, I have virtually never encountered the need for futures w/o an error type (this isn't to say that these cases don't exist, just that they are exceedingly rare when working in the context of I/O).

By removing the associated Error type, it now becomes more difficult to work with future bounds that need to include generic errors. For one example, see here, but in my work, doing so is pretty common.

Now, one way to escape situations where generics get out of hand is to define convenience traits w/ a bunch of associated types that are more specific + a blanket impl for the more generic trait that got out of hand.

For example, in advanced cases, working w/ tower's Service trait can get complex. So, tower-h2 has a more specialized version with a blanket impl for the wider T: tower::Service here.

I would anticipate that, if Future loses the default Error type, having to reach for such a pattern w/ Future will become more common. The fact that FutureRes is needed here is an indication that having to use these tricks will be common. However, if Future and FutureRes are in prelude, this will prevent being able to do such a trick.

Ideally, as @seanmonstar mentioned, something like trait aliases would be used instead of FutureRes. However, either way, I ask that these traits not be added to the prelude, to preserve the ability to reach for more specialized Future traits.

As a simple example, let's say that one is working with futures that are always Result<_, MyError>. It would be pretty convenient to define:

trait MyFutRes<T>: Future<Item = Result<T, MyError>> {
    fn map<U>(self, f: impl FnOnce(T) -> U) -> impl MyFutRes<U>
        { .. }
}

The problem is that this convenience trait would not be usable (unless I'm missing something) if Future is in the prelude. This is because any type that implements MyFutRes also implements Future and using the map fn becomes ambiguous.
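A hypothetical, minimal reproduction of that ambiguity (`BaseFuture` and the simplified `map` signatures below are stand-ins for illustration, not the RFC's definitions):

```rust
// Stand-in for a prelude `Future` that carries a `map` adapter.
trait BaseFuture {
    type Item;
    fn map<U, F: FnOnce(Self::Item) -> U>(self, _f: F) -> U
    where
        Self: Sized,
    {
        unimplemented!()
    }
}

// The specialized convenience trait from the comment above.
trait MyFutRes<T>: BaseFuture<Item = Result<T, ()>> {
    fn map<U, F: FnOnce(T) -> U>(self, _f: F) -> U
    where
        Self: Sized,
    {
        unimplemented!()
    }
}

struct Fut;
impl BaseFuture for Fut {
    type Item = Result<u32, ()>;
}
impl MyFutRes<u32> for Fut {}

fn demo() {
    // With both traits in scope, the method-call form is rejected:
    // error[E0034]: multiple applicable items in scope
    // let _ = Fut.map(|x| x + 1);

    // The caller has to disambiguate explicitly instead:
    let _ = <Fut as MyFutRes<u32>>::map(Fut, |x| x + 1);
}
```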

So, I ask that Future and FutureRes not be included in the prelude and that instead, perhaps, a std::futures::prelude or a std::async::prelude be created. This would make Future and FutureRes similar to io::Read and io::Write, which are not in the prelude either. Instead, they are in std::io::prelude.

@carllerche
Member

@aturon

One note concerning vetting and stabilization, which is very important: the basic shape of the futures API changes (regarding the task system) have been vetted for a couple of months on 0.2 preview releases, and integrations.

Could you elaborate more on this? I haven't personally seen anything exercising the 0.2 changes at a significant level. I would be eager to be able to have some samples to dig into showing how the API changes shake out in practice.

where E: BoxExecutor;

/// Get the `Waker` associated with the current task.
pub fn waker(&self) -> &Waker;
@HadrienG2 HadrienG2 Apr 7, 2018

Wakers are about waking up futures that fell asleep, normally from a different stack frame than the one in which the future was polled. So a &Waker reference that is tied to the stack frame on which Future::poll() is being called does not sound very useful, and in every case that I can think of, people will want to call context.waker().clone().

If that is the general use case, shouldn't waker() just return an owned Waker directly?
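For illustration, here is the usage pattern in question, written against the shape the task API eventually took in std (a sketch; `Shared` and `Signal` are hypothetical types):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

struct Shared {
    ready: bool,
    waker: Option<Waker>,
}

/// A leaf future that is completed from some other code path.
struct Signal(Arc<Mutex<Shared>>);

impl Future for Signal {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let mut shared = self.0.lock().unwrap();
        if shared.ready {
            Poll::Ready(())
        } else {
            // The context only hands out a borrowed `&Waker`, so it has to
            // be cloned to outlive this `poll` call.
            shared.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

/// Some other code path (e.g. an I/O event) flips the flag and wakes the task.
fn signal(shared: &Mutex<Shared>) {
    let mut s = shared.lock().unwrap();
    s.ready = true;
    if let Some(waker) = s.waker.take() {
        waker.wake();
    }
}
```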

@est31
Member

est31 commented Apr 7, 2018

I think everyone who has written futures-using code immediately saw major ergonomic issues. Improving this reasonably is a really good idea.

This RFC seems like a good idea in principle, given that:

a) the async programming pattern is important for a systems programming language
b) This RFC enables improved language syntax that means ergonomics improvements for this important programming pattern
c) it is a minimal change, meaning that it won't make all Rust programs, related to futures or not, suddenly start a tokio reactor or something

Doubling down on this "add X to libcore" path is not a good idea though. E.g. I don't think that the failure crate should be added, or bindgen or something. Those are nice crates and do really well as crates, so no need to add them to the language.

Also, I think that @carllerche 's points on the schedule should be taken seriously. We shouldn't stabilize something that hasn't been used widely and couldn't mature. Futures shouldn't become the macros of the 2018 edition, stabilized in a rush to get something out the door but with flaws left and right.

@aturon
Member Author

aturon commented Apr 21, 2018

RFC updates!

As per discussion above, I've made several updates:

  • I've renamed FutureRes to FutureResult for the time being. I recognize that we will probably want to find a shorter name here, but want to nail down the basic shape first.

  • I've updated the proposal to match the final desired version of the FutureResult trait, i.e. with both associated types and a super trait bound. As I detailed in my earlier comment, we have various techniques we can use to reach that design through incremental stabilization, depending on how the Chalk work progresses.

  • I've made some minor tweaks to the Executor setup based on my experience implementing 0.3 so far. This is mostly just a refactoring of how we handle forward-compatibility with unsized rvalues.

  • I've added some text recapping the mechanics around Pin (which had been discussed elsewhere), and referenced @Nemo157's excellent comments giving some concrete examples.

  • I've revamped the stabilization section to try to be more clear about the steps involved.

  • I've added significant new text talking about the rationale for changes compared to 0.2.

I believe that the proposal now reflects the current rough consensus on the thread, modulo two things: some bikeshedding, and some remaining discomfort around Pin. I'll be posting a follow-up comment on the latter shortly.

@HadrienG2, I still intend on providing fuller rationale in the places you asked for it, but haven't prioritized this since most of them are about existing aspects of the 0.2 design.

Finally, I want to note that the async/await RFC is heading toward FCP. This is partly because there are relatively few open questions about the design, but more importantly because we are eager to begin the experimental phase with these features, to provide maximal time for feedback before any stabilization is proposed. These RFCs are interdependent, so I would like to push toward FCP here in the near future as well, with the very clear understanding that the proposed design is preliminary and we've taken care to provide ample time to gain real-world experience on crates.io before shipping.

@aturon
Member Author

aturon commented Apr 21, 2018

@seanmonstar Here's a belated reply to your concerns around Pin.

Fundamentally, a Future is a type that will return a value at some later point. That's all the trait should be about. Being pinned is, frankly, unrelated to returning a future result.

The way I think about it is more analogous to the distinction between self, &self, and &mut self. All of these tell the caller what rights must be owned in order to perform an operation. Pin adds to this list by offering a variant of &mut that can be used to restrict the caller from moving the internal data. (However, if you implement Unpin for your structure, Pin just is &mut).

The only time I'd care about a pinned future is when all of these are true:

  • I want to use async fn (and not implement Future for a type).
  • I want to borrow bindings across await! calls.

If I'm not doing those, then I don't need to be pinned. Yet, the current proposal requires that if I ever want to implement Future, I must always think about pinning. (emphasis mine)

That last statement is incorrect. This has been said elsewhere on the thread, but to reiterate: the Unpin auto trait is specifically designed for the case where you don't want to make use of these new borrowing abilities; in this case you can freely convert between Pin and &mut, which means that the futures API is effectively unchanged for you.
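Concretely, for an Unpin type the conversion is free in both directions; a tiny sketch against the std Pin API (`Countdown` is just a placeholder type):

```rust
use std::pin::Pin;

// A future-like type with no internal borrows; it is automatically `Unpin`.
struct Countdown {
    remaining: u32,
}

fn demo(countdown: &mut Countdown) {
    // `Unpin` types can be pinned and un-pinned freely, in safe code:
    let pinned: Pin<&mut Countdown> = Pin::new(countdown);
    let plain: &mut Countdown = Pin::into_inner(pinned);
    plain.remaining -= 1;
}
```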

However, there is another case to consider, which is when you're writing a future that doesn't use internal borrows itself, but does contain arbitrary user futures. I talk about that case in detail in the updated RFC (and we've also talked about it on IRC). TLDR: there's a trivial safe option that gets you back to the &mut story but with a potential performance penalty, and various paths for building safe abstractions that avoid the cost.

It might be worth re-reading @withoutboats's blog series about pinning if any of this is unclear.

As mentioned in other comments, if all futures must be pinned, that makes other things much harder (even if they didn't actually need the pinning).

Again to clarify: it is not the case that all futures must be pinned.

Select doesn't have a proposed solution.

See @Nemo157's comment proposing the canonical solutions.

Things like poll_ready would be unusable if they needed to poll some internal future. poll_ready would need to be changed as well to take a Pin, which means that the Service is then immovable.

This basically boils down to the same story as above: if all the futures are yours, they can all implement Unpin (and usually will have that impl automatically). If the future comes from your user, then you can follow one of the outlined strategies for that case.

Consider the Iterator trait, and how it currently takes &mut self. There is the goal of eventually allowing generators to be used as iterators. How is that going to work?

The use of generators and the issues around pinning are separate. In particular, for both generators and async fn, we can provide two different modes of operation:

  • Unpinned, which doesn't allow borrowing across yields, but which does implement Unpin
  • Pinned, vice versa

For async fn, the default very clearly wants to be pinned. For generators, the default probably wants to be unpinned.

It's an open question what this will mean in terms of traits, but the situation with iterators is very substantially different from that of async fn, both in terms of what defaults are appropriate, and of what kinds of compositions are desired. I don't think the two design spaces have much insight to offer each other, sadly.

Some combination of sub traits and specialization.

One thing I wanted to mention: it seems quite plausible to offer an optimized form of "pin safety", where if T: Unpin you get T back, but if T: !Unpin you get PinBox<T> back. This would help further reduce any performance issues around the naive way of dealing with unknown futures.


I hope the above helps allay your concerns. But I also think that the best thing we can do here is write code! I've already made significant progress on the futures 0.3 branch and have a PR up for the combinators. For the latter, I made no attempt to build safe abstractions around pinning, but it's already very clear that there are just a couple of basic patterns we can likely encapsulate.

My hope is to publish a futures 0.3-alpha in the very near future -- most likely by the end of next week. As outlined in the RFC, that version would only work on nightly Rust, but should otherwise give an accurate impression of what these APIs are like to work with. Ideally, with cooperation from the Tokio and Hyper maintainers, we can land 0.3-alpha integration behind a flag (or through some other means) and begin to get real-world experiences with these questions ASAP.

I think we should, at the same time, move aggressively to land both this RFC and its companion; as I explained in the previous comment, the motivation is to get as much experimental code in place as we can, so that we can support stabilization discussions down the road with more data.

literally the `async` version of the usual `read`:

```rust
async fn read(&mut self, buf: &mut [u8]) -> io::Result<usize>;
Contributor

Perhaps a more specific example can be included, one that cannot be implemented today? An async function returning a future that borrows both self and buf already is possible, for example (beta just used for impl Future).

Member

This specific function can't actually be used today (at least not without lots of unsafe code), since the future return type is bound to the lifetimes of the input types.

Contributor

Hm, can you expand on that? The link shows the function actually is used, being put in the block_on.

Do you mean you can't really return this future from another function, since the TCP stream and buffer would be dropped? That's true, but the same problem exists with async fn.

@seanmonstar
Contributor

seanmonstar commented Apr 21, 2018

First of all, let me clarify: I really appreciate the thought and work that's been put into this so far. I don't mean for my criticisms to be taken personally, and regret that I feel I need to dig into the problems I see with such detail.


Everyone gets a Pin

Yet, the current proposal requires that if I ever want to implement Future, I must always think about pinning.

That last statement is incorrect.

This statement is also incorrect... I did end up meaning two things with my statement, so I'll clarify.

  1. With the definition of the method being fn poll(self: Pin<Self>), I absolutely must think about what a Pin is. I have to type it, and if I've never used it before, I must go educate myself on the ~~horrors~~ joys of what could happen if I treat a Pin incorrectly.
  2. Additionally, if I'm trying to build off a user's Future, I must pick one of: pray that I don't trigger UB, universally Box the user's future (making my combinator slower), or only accept Future + Unpin and tell anyone with an async fn to go away.

It doesn't really matter that Unpin things can be popped out of their Pins, since I must deal with a Pin always.

it is not the case that all futures must be pinned.

Indeed, it is. You cannot call Future::poll without a Pin, so I must pin it. In the Service trait, if poll_ready is to maybe poll futures (and so far, we definitely do have Services that do this), then poll_ready must take a Pin, and so the Service must be pinned.

If Sinks are moving towards the same pattern, of having a poll_ready function and then you just send when it's ready, then the same problem exists there. You can't move a sink around, unless you know the exact type is Unpin, in which case you can Pin::new, poll_ready, and then pop the pin. But if you're generic over Sink, too bad.

It might be worth re-reading @ withoutboats's blog series about pinning if any of this is unclear.

I'd read the series a few times, which is what led me to my concerns originally. To be fair, I really did go back and read all 6 entries before writing this, in case I had missed something. My concerns remain the same.

Select

See @ Nemo157's comment proposing the canonical solutions.

I went back and checked to see if I had missed a solution, and found that there is still not a satisfactory one. There are two options:

  1. If you're just generic over Future, then select doesn't allow you to get the second future back, period.
  2. If you tell async fn users to go away (generic over Future + Unpin), then you can have normal select. Users of an async fn must use PinBox, and likely instead will file issues that they don't want that.

Select is not just used for timeouts. I can think of a few places in hyper alone that use select to actually race some futures.

  1. The client races a connect future with checkout from the Pool. If a pooled connection is ready before connect, then it uses that. We'd still want to then be able to move the incomplete connect future into a new task, to insert into the Pool once it finally completes.
  2. Connections race against a graceful shutdown signal. If the shutdown future finishes first, we want to trigger the connection to start shutting down (like send GOAWAYs), which requires getting the future back.

Select would also be used often when making Service middleware.

We still could use the solution for Generators/Iterators

For async fn, the default very clearly wants to be pinned

I personally haven't seen a ton of evidence that this is the case. But even with that evidence, it seems like the actual point I made isn't addressed. The problem does exist for trying to implement Iterator for a Generator. How will that be solved, and why is that a bad choice for async fn?

The implication sounds like Generators would just by default be boxed, but it wasn't said explicitly.

Pin dilutes unsafe, and is hard to reason about

Another "problem" I have with Pin everywhere is that it dilutes the unsafe keyword. For example, if we look at the combinators PR you mentioned, we see an explosion of unsafe everywhere. And even there, select and join are punted because reasoning about Pin is really frickin hard. I'm not sure I want to support async fns if it requires implementors to "just use this unsafe function, it's not really unsafe".

It's not clear to me that it really is safe. Watching the conversations happening in the #futures IRC channel, it is very tricky to determine if an operation is safe with a Pin. I'm certain that we'll find some part of it is actually unsafe. It will definitely happen. For instance, at the moment in libcore, there is impl Deref for Pin, but this is likely unsound, since things like RefCell, Mutex, RwLock, and the like can move around their internals through a &T. I'm sure there are other things that will cause UB that we didn't think of until it's shipped in a stable version. (I admit that this by itself isn't a strong reason against, since by itself, it would mean we shouldn't even have the unsafe keyword, because we're bound to use it incorrectly. Still, I think Pin is a lot harder to reason about, especially when for years we've lived with a Rust where you can always move everything.)

You cannot put a !Unpin type into a Vec, or a HashMap, or most other collections, since as soon as they need to grow, they will have moved the type to a new part of the heap. That isn't clear at all. The proposed solution is to provide PinVec, PinMap, etc, but that doesn't feel great, and someone writing Rust likely doesn't even think it could be unsafe to use it with a standard collection, and that they need a special pin one.


I'm sorry, maybe I'm just being dumb. Maybe with a whole lot more code, it can be made clear that Pin is the winner. But from what I've seen so far, I'd almost rather that any async fn just unilaterally throw it in a Box itself, and not bleed its unsafety everywhere.

@@ -808,4 +905,4 @@ model.
# Unresolved questions
[unresolved]: #unresolved-questions

None at present
- Final name for `FutureResult`.
Contributor
@MajorBreakfast MajorBreakfast Apr 21, 2018

Names suggested so far:

  • FutureResult
    • Pro: Word Future is in front, same word order as Iter and IterMut
  • ResultFuture
    • Pro: The word Future at the end emphasizes that every ResultFuture is a Future
  • FallibleFuture
    • Pro: The word Future at the end emphasizes that every FallibleFuture is a Future
    • Pro: Alliteration
    • Con: Not as descriptive: No indication that the Result type is involved

Another option: TryFuture. In the same way that TryFrom is the fallible case of From, and the Try trait is the generalization of "a thing that might fail."

@glaebhoerl
Contributor

glaebhoerl commented Apr 21, 2018

If you're just generic over Future, then select doesn't allow you to get the second future back, period.

FWIW, this is what @canndrew was talking about above. If we had a reference type which takes ownership of its referent, an &own Future (or, actually, PinOwn<Future>; or &own Pinned<Future> per the alternative approach in the internals thread) could be passed into select and also returned from it without moving the future itself.


With respect to the bikeshedding: I want to refloat my suggestion from before of calling the trait Async ("an async fn returns an impl Async"), and AsyncResult sounds alright as well.

@Nemo157
Member

Nemo157 commented Apr 21, 2018

FWIW, this is what @canndrew was talking about above. If we had a reference type which takes ownership of its referent, an &own Future (or, actually, PinOwn; or &own Pinned per the alternative approach in the internals thread) could be passed into select and also returned from it without moving the future itself.

In what way does this differ from stack pinning, it's still just a reference so can't be returned out of the stack frame that created it, correct? Here's an extension of my earlier select examples showing how it works with stack pinning, the most relevant part:

stack_pin(ImmovableCountdown { count: 3, result: "foo" }, |left| {
    let mut bar = SelectUnpin {
        left: Some(left),
        right: Some(Countdown { count: 2, result: "bar" }),
    };
    println!("{:?}", Pin::new(&mut bar).poll());
    println!("{:?}", Pin::new(&mut bar).poll());
});

results in

Pending
Ready(Right(("bar", ImmovableCountdown { count: 1, result: "foo" })))

by pre-pinning an immovable future you can pass it in and receive it via reference (a Pin reference to be exact).

There's no standard stack pinning API available yet, but I'm fairly confident that something will be provided. (One major blocker is that a closure-based API wouldn't work in async fn because of the need to do non-local yields through it; I've experimented with using closures returning layered generators instead, but my last attempt was hitting an ICE.)

@carllerche
Member

I have to agree with @seanmonstar re the amount of additional unsafe blocks. It should not be a requirement to add unsafe to combinators, period. Having to do so is a huge red flag to me.

@rpjohnst

You only need to add unsafe when you use a totally new capability: pinned values. And pinned values only need unsafe because they are a newly-implemented library feature. These operations can (and should) be factored into std along with Pin, and potentially integrated into the language at some point.

@MajorBreakfast
Contributor

MajorBreakfast commented Apr 21, 2018

There should be an "unused_must_use" warning for futures like it exists for Result. Futures in Rust are lazy; this means that a future that isn't polled won't be executed. I think it is safe to say that if we create a future, we also intend to use it.

This helps in normal code that uses futures, but also with forgotten await in async functions:

async fn get_num() -> i32 { 42 }

async fn my_fn() {
  get_num(); // Warning: "unused_must_use"
  await!(get_num()); // Correct

  get_num().is_positive(); // Error: Method does not exist
  await!(get_num()).is_positive(); // Correct
}

@rpjohnst pointed out in https://internals.rust-lang.org/t/explicit-future-construction-implicit-await/7344/66 that lazy futures and explicit await have this problem: If the user calls a function, but forgets the await, nothing happens and there might be no indication that something is wrong.


```rust
// A future that returns a `Result`
trait FutureResult: Future<Output = Result<Self::Item, Self::Error>> {
Contributor

How would that work without causing error[E0391]: cyclic dependency detected error message?

@BigBigos

If I understand correctly, the only real need for unsafe when dealing with Pin<T> in combinators is to get a Pin<U> from Pin<T> where U is a field inside T. This is what the suggested #[pin_accessor] generates a function for.

If I look at Pin<T> as another reference kind (next to &T and &mut T), it looks like it lacks transitivity when accessing a member. If we have:

struct T(U);

Then:

impl T {
    fn get(&self) -> &U { &self.0 }
    fn get_mut(&mut self) -> &mut U { &mut self.0 }
}

but:

impl T {
    fn get_pin<'a>(self: Pin<'a, Self>) -> Pin<'a, U> { ?? self.0 }
}

doesn't work. This asymmetry between references and Pin<T> seems to be the main cause for the unsafe code.

If Pin<T> were to be promoted to a reference type, alongside &T and &mut T, we would be fine:

impl T {
    fn get_pin(&pin self) -> &pin U { &pin self.0 }
}

or to be more similar to Pin::map:

impl T {
    fn get_pin<'a, 'b>(self: &'b mut &'a pin Self) -> &'b pin U { &pin self.0 }
}

I understand that implementing that would require compiler changes, which we can't afford to depend on, though.
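For reference, the projection that currently forces unsafe (roughly what a #[pin_accessor]-style helper would have to generate) looks like this when written against the Pin<&mut Self> form the API eventually took; a sketch, not a proposed addition:

```rust
use std::pin::Pin;

struct U;

struct T {
    u: U,
}

impl T {
    // Safe for callers, but the author takes on the obligation never to
    // move `u` out of a pinned `T` (hence the `unsafe` block).
    fn get_pin(self: Pin<&mut Self>) -> Pin<&mut U> {
        unsafe { self.map_unchecked_mut(|t| &mut t.u) }
    }
}
```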

@carllerche
Member

@rpjohnst

You only need to add unsafe when you use a totally new capability- pinned values.

The "totally new capability" in this case is upgrading the futures crate. i.e. if any existing libraries that currently depends on futures 0.1 migrates to futures 0.3, they will almost certainly require an explosion of unsafe where there previously had not been a single block.

This is because the foundational trait requires Pin.

In order to avoid going from zero unsafe blocks to many unsafe blocks, forking Future will be required.

@rpjohnst

In order to avoid going from zero unsafe blocks to many unsafe blocks, forking Future will be required.

Not if, as @aturon claims, "it's already very clear that there are just a couple of basic patterns we can likely encapsulate." Unless he's completely wrong somehow, it should be possible to a) continue using zero unsafe blocks in code that doesn't care about Unpin, and b) encapsulate those basic patterns for code that does.

@aturon
Member Author

aturon commented Apr 23, 2018

@seanmonstar Thanks much for the detailed reply!

I don't mean for my criticisms to be taken personally, and regret that I feel I need to dig into the problems I see with such detail.

Just since it's worth saying: this is what the RFC process is for! Digging into these details is ultimately what will help us synthesize a solution that works well for everyone, and I appreciate the time you're putting into it. The ideas I discuss below were generated specifically by push-back from you and others.

Rather than respond point-by-point, I want to propose a different approach that meets the same end goal (of providing support for async/await with borrowing), but with a more conservative take on the library ramifications.

Supporting async/await, take 2

The basic idea is simple, and is along the lines of comments @seanmonstar has made before:

  • Use a Pin-based trait (called Async) for async blocks.
  • Provide a &mut self trait (called Future), with a boxing conversion from Async.

Only Async goes in the standard library; Future remains defined in the futures crate.

Libraries can use either trait, and in particular can upgrade to futures 0.3 without moving to Async. However, libraries can gradually move to Async to avoid the need for boxing when crossing the "boundary" between Async and Future.

The key point is that the more efficient pin-based APIs become opt in rather than opt out as they are in this RFC (by using Async rather than Future). This makes it possible to reap the benefits of async/await immediately, while giving us unbounded time to fully work through the ramifications on the ecosystem.

I suspect that, in the long run, we will develop tools that make working with Pin just like working with any other reference type, either through custom derives or through language improvements. But we can decouple that work from the other aspects of this proposal, and allow for a more gradual shift.

The story for libcore

Take the Future trait from this RFC, and rename it to Async (to coincide with async blocks, its primitive constructor in the language). So, in core, we add:

```rust
trait Async {
    type Output;
    fn poll(self: Pin<Self>, cx: &mut task::Context) -> Poll<Self::Output>;
}

trait IntoAsync {
    type Async: Async;
    fn into_async(self) -> Self::Async;
}
```

together with the Poll type and task module.

The story for the futures crate

The 0.3 version would contain a Future trait that is pretty close to the 0.2 version (including Item/Error), just with various renamings and tweaks to bring it into line with other changes (e.g. map becomes map_ok as per this RFC, etc).

Under the nightly flag, it would further:

  • Treat Poll, task::* as re-exports from core.
  • Impl IntoAsync for all T: Future, and IntoFuture for all Async + Unpin.
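To make the shape of that bridge concrete, here is a self-contained sketch. It is simplified in two ways: it implements Future directly for Async + Unpin instead of going through the IntoAsync/IntoFuture conversion traits, and it stubs out Poll and the task context. None of the names here are shipped APIs; they follow the proposal above.

```rust
use std::pin::Pin;

// Stub so the sketch stands alone; the real traits also carry a
// `&mut task::Context` argument.
enum Poll<T> {
    Ready(T),
    Pending,
}

// The pinned trait proposed for libcore (spelled with today's
// `Pin<&mut Self>` receiver for illustration).
trait Async {
    type Output;
    fn poll(self: Pin<&mut Self>) -> Poll<Self::Output>;
}

// The `&mut self`, 0.2-style trait that would stay in the futures crate.
trait Future {
    type Item;
    type Error;
    fn poll(&mut self) -> Poll<Result<Self::Item, Self::Error>>;
}

// The bridge: any `Async + Unpin` whose output is a `Result` can be polled
// through `&mut self` with zero unsafe code, because `Pin::new` is a safe
// constructor for `Unpin` types.
impl<T, I, E> Future for T
where
    T: Async<Output = Result<I, E>> + Unpin,
{
    type Item = I;
    type Error = E;

    fn poll(&mut self) -> Poll<Result<I, E>> {
        Async::poll(Pin::new(self))
    }
}
```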

As such, 0.3 would immediately work on the stable Rust compiler, and usage would be almost identical to 0.2. However, once async/await support lands in the compiler, users of nightly would be able to use that support while interoperating with the futures 0.3 ecosystem.

This reduces the pressure to stabilize Pin in the near term to gain experience with the new futures design; everything async/await-related can stay on nightly, while we roll out 0.3 in the ecosystem in parallel.

The long run

Ultimately, we may find either that movable futures are common enough that we want to keep Future, or we may find that we want to deprecate it once working with pins is sufficiently well-understood. This proposal is forward-compatible with either outcome, and in particular keeps Future itself in the crates.io ecosystem to keep our options open.

Thoughts?

I'm curious to hear what folks think about these ideas. Personally, I'm excited at the prospect of being able to start moving the ecosystem toward 0.3 in the near future in parallel with further explorations of pinning and async/await.

@MajorBreakfast
Contributor

MajorBreakfast commented Apr 24, 2018

@aturon

Use a Pin-based trait (called Async) for async blocks.
Provide a &mut self trait (called Future), with a boxing conversion from Async.

  • Async: Async blocks/closures/fns, right? I like the symmetry. I never proposed this name change because I know you've been working with "Future" for years and ditching it means no "Back to the Future" references anymore. 😅
  • AsyncResult should be added to the list: AsyncResult = Async + Result. It's needed for the ? operator in async functions (Edit: not true, pointed out by @Nemo157). Additionally, the distinction is needed because not all combinators defined on AsyncResult make sense for Async, though all combinators on Async make sense for AsyncResult (see the sketch after this list).
  • Future: Is it possible to avoid boxing when the type is Unpin? Also, only a conversion between Future and AsyncResult makes sense, because Future = AsyncResult + Unpin.
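One possible shape for AsyncResult, sketched as a blanket "view" trait over any Async whose output is a Result. Async and Poll are restated minimally so the snippet stands alone, and the task-context argument is omitted; this is only an illustration of the idea, not a concrete API proposal.

```rust
use std::pin::Pin;

enum Poll<T> {
    Ready(T),
    Pending,
}

trait Async {
    type Output;
    fn poll(self: Pin<&mut Self>) -> Poll<Self::Output>;
}

trait AsyncResult: Async {
    type Ok;
    type Error;
    fn try_poll(self: Pin<&mut Self>) -> Poll<Result<Self::Ok, Self::Error>>;
}

// Every `Async<Output = Result<T, E>>` is automatically an `AsyncResult`,
// so `Result`-specific combinators (`map_ok`, `map_err`, ...) can hang off
// this trait without any extra implementation burden on users.
impl<A, T, E> AsyncResult for A
where
    A: Async<Output = Result<T, E>>,
{
    type Ok = T;
    type Error = E;

    fn try_poll(self: Pin<&mut Self>) -> Poll<Result<T, E>> {
        self.poll()
    }
}
```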

I'm curious to hear what folks think about these ideas

  • If you use Future (not Async) for the combinators, their use won't be zero cost anymore, because it introduces an allocation. The development of a Pin-aware version might still be necessary to a) get comparison data on whether boxing has real performance implications, b) make it work for embedded devices which don't have an allocator, and c) handle the fact that conversion to Future is only possible from Asyncs that are also an AsyncResult, i.e. have an Error type. This Pin-aware version of the combinators would require unsafe code, of course.
  • All the RFC work is centered around async/await and its stabilization. This change doesn't make faster stabilization possible because async/await relies on Pin.
  • Do I have to call into_async() and into_future() manually? I think so, because Rust has no implicit polymorphism and certainly no implicit conversions (thank god xD), right?
  • You call it IntoAsync. Into<impl Async> does not work, I assume?
  • I am with @rpjohnst and, well, you @aturon, that introducing pin-aware code in the futures crate that needs some unsafe code isn't a big deal. There are probably various common patterns for which safe abstractions are possible. Also, it really only affects the development of the futures crate; the rest of the ecosystem, which just uses the combinators, can still use safe code everywhere. I think that carefully vetted unsafe code isn't dangerous at all! Just think about the optimizations LLVM makes to the final machine code: if there were bugs in there it'd be very dangerous, yet nobody disputes that those optimizations are necessary. They're also considered safe, because they were carefully reviewed (or so I assume).
  • Wouldn't it be possible to introduce a dummy Pin type that does nothing as part of the futures crate as a temporary solution for stable? This dummy Pin type wouldn't work with async functions (which aren't available on stable). The rest of the stable ecosystem currently doesn't rely on the real Pin, so a dummy implementation doing nothing isn't a big deal. If you compile a project on nightly on the other hand, this Pin type is the real deal (a reexport) and works with async/await. I think there's also no compatibility risk because no notion of !Unpin is introduced. So, code that relies on !Unpin would fail to compile with this dummy implementation even if you remove #![feature(pin)] (which will be common once pin is stable).

@Nemo157
Member

Nemo157 commented Apr 24, 2018

Use a Pin-based trait (called Async) for async blocks.
Provide a &mut self trait (called Future), with a boxing conversion from Async.

This is a very similar design to what we have today, just removing the Resultness of StableFuture and renaming it. From my work with futures 0.2 this is definitely usable in an isolated ecosystem, I haven’t tried integrating it with any existing libraries though.

The development of a Pin-aware version [of combinators] might still be necessary to [...] make it work for embedded devices which don't have an allocator

This is the situation I’m in with futures 0.2 at the moment. I’m having to define my own combinators for StableFuture as needed. This is by far the biggest downside I see to keeping Future as the fundamental trait and having a lower level Async trait under it. Either you don’t provide the combinators, in which case working directly with Async is a pain; or you do provide them, and have to maintain two parallel sets of combinators.

——

@MajorBreakfast

AsyncResult should be added to the list. It's AsyncResult = Async + Result. It's needed for the ? operator in async functions. Future = AsyncResult + Unpin

AsyncResult shouldn’t be needed for async fn to support the ? operator; an async fn just creates an Async<Output = Result<...>>. That is then also the basic type signature used for the IntoFuture implementation. It may be useful to provide anyway to allow easier manual implementation; see my final thoughts below on this.
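A tiny illustration of this point (parse_port is a made-up example, written with the proposed async fn syntax): the ? operator needs nothing beyond a Result output type.

```rust
use std::num::ParseIntError;

// `?` works inside an `async fn` as soon as the declared output type is a
// `Result`; no extra trait is involved.
async fn parse_port(input: &str) -> Result<u16, ParseIntError> {
    let port: u16 = input.trim().parse()?;
    Ok(port)
}
```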

Wouldn't it be possible to introduce a dummy Pin type that does nothing for the stable futures crate as a temporary solution?

That would make the transition point where Pin actually starts applying very painful, and would probably promote a lot of confusion about what Pin really means.

——

Overall I like this as a next step. It’s basically the smallest possible change from futures 0.2 required to bring async into the compiler. It could also allow for the ecosystem to better experiment with transitioning subsets of crates to using pinning, if there’s a relatively decoupled section that could be changed from Future to AsyncResult and connected to the rest of the library via the boxing conversion.

It could also be used as a transition strategy to avoid having to do a massive upgrade from 0.2 to 0.3; instead, once some experimentation has happened and the final Async api is nailed down, Future could be deprecated and libraries could slowly transition over to Async. Futures 0.4 could then be released as just deleting deprecated parts of 0.3, so warningless libraries would continue to work.

This could actually be done with 0.2 (other than the minor renamings mentioned). All the current async related code is behind the nightly feature so could be removed and replaced with the compiler provided modules once they’re ready.

@MajorBreakfast
Contributor

That would make the transition point where Pin actually starts applying very painful, and would probably promote a lot of confusion about what Pin really means.

@Nemo157 You're right. I think this can be mitigated through clear documentation, though. It's a tradeoff either way because not adding pinning to the API means there's going to be another breaking change to the futures crate in the future.


Another thing that I've just thought of: @aturon's proposal postpones the introduction of Future<Output = Result<T, E>>. That change has AFAIK no blockers at the moment, so there's no real reason to postpone it. It could be included in the 0.3 breaking change already, right?

@aturon
Member Author

aturon commented Apr 24, 2018

@MajorBreakfast @Nemo157 thanks both for your (as usual!) insightful commentary!

Is it possible to avoid boxing when the type is Unpin?

Yes. For now, we can do this by providing two separate ways of doing the conversion (we'll have to bikeshed which one should be the "default"). Later on, we can do this via specialization using just a single method.
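In today's std vocabulary (with std's Future and Pin standing in for Async here, purely for illustration), the two conversion paths look roughly like this:

```rust
use std::future::Future;
use std::pin::Pin;

// Free path: an `Unpin` future can be pinned in place with no allocation
// and no unsafe, because `Pin::new` is safe for `Unpin` targets.
fn pin_in_place<F: Future + Unpin>(fut: &mut F) -> Pin<&mut F> {
    Pin::new(fut)
}

// Fallback path: any future can be pinned behind one heap allocation, and
// the resulting `Pin<Box<F>>` is itself `Unpin` (and still a future).
fn pin_boxed<F: Future>(fut: F) -> Pin<Box<F>> {
    Box::pin(fut)
}
```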

If you use Future (not Async) for the combinators, their use won't be zero cost anymore because it introduces an allocation.

Yes -- sorry I wasn't more clear about this. My plan was to provide combinators on both. For cases like select we likely want two different versions anyway.

All the RFC work is centered around async/await and its stabilization. This change doesn't make faster stabilization possible because async/await relies on Pin.

There are a few considerations here. Most important: this proposal makes it possible to get a futures 0.3 release that works on stable Rust immediately, whereas with the RFC we have to stabilize Pin first.

Also, it's possible to stabilize async/await with this proposal without stabilizing Pin (I can elaborate on that later, I see it mainly as a fallback).

Do I have to call into_async() and into_future() manually?

No; the ecosystem would use T: IntoFuture bounds rather than T: Future, as most of the combinators do today.
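For illustration, a generic helper bound on an Into*-style conversion trait (here std's IntoFuture plays the role of the proposed IntoAsync, and timed is a made-up helper):

```rust
use std::future::IntoFuture;
use std::time::{Duration, Instant};

// Because `timed` is bounded on `IntoFuture`, callers can pass anything
// convertible into a future and never write `.into_future()` themselves --
// `.await` performs the conversion.
async fn timed<F: IntoFuture>(fut: F) -> (F::Output, Duration) {
    let start = Instant::now();
    let output = fut.await;
    (output, start.elapsed())
}
```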

I am with @rpjohnst and, well, you @aturon, that introducing pin-aware code in the futures crate that needs some unsafe code isn't a big deal.

Yep. That said, the Pin construct is still relatively new, and it will take some time for us to develop safe abstractions etc. My eye is on the main prize: async/await. I'm trying to carve out the most conservative path to getting there for the 2018 edition, and with this proposal we can get the ecosystem moved over to 0.3 much more quickly, and take more time to deepen our experience with Pin.

Wouldn't it be possible to introduce a dummy Pin type that does nothing as part of the futures crate as a temporary solution for stable?

Possibly, but I agree with @Nemo157 that that probably introduces more confusion than it's worth.

FWIW, I think you can see Future as basically an alias for Async + Unpin. And more broadly, I think that having these different traits for now makes it both easier to gradually transition, and allows for more clear exploration. By the time we're considering stabilization, I suspect we'll have a lot more clarity about where we ultimately want to land.

It’s basically the smallest possible change from futures 0.2 required to bring async into the compiler. It could also allow for the ecosystem to better experiment with transitioning subsets of crates to using pinning, if there’s a relatively decoupled section that could be changed from Future to AsyncResult and connected to the rest of the library via the boxing conversion.

Exactly!

It could also be used as a transition strategy to avoid having to do a massive upgrade from 0.2 to 0.3; instead, once some experimentation has happened and the final Async api is nailed down, Future could be deprecated and libraries could slowly transition over to Async. Futures 0.4 could then be released as just deleting deprecated parts of 0.3, so warningless libraries would continue to work.

Yes!

This could actually be done with 0.2 (other than the minor renamings mentioned).

That's correct. However, given that 0.2 hasn't had significant take-up yet, that some of the renamings involved are pretty fundamental, and that we may want to make other changes to combinators, it seems prudent to just push forward to 0.3.

@aturon's proposal postpones the introduction of Future<Output = Result<T, E>>. That change has AFAIK no blockers at the moment, so there's no real reason to postpone it. It could be included in the 0.3 breaking change already, right?

It could, yes, and maybe should. I was feeling a bit wary of introducing a whole bunch of new traits, especially if in the long run we think Future will be deprecated in favor of Async. If we could just settle the naming question...


If reception continues to be positive, I will close this RFC and open a new one built around this new proposal. Speak up if you strongly object to that! (And note that the end result of both approaches is basically the same, but the new proposal has a gentler migration story.)

@carllerche
Member

@aturon Your last sketch is roughly what I had hoped would happen. I am looking forward to the next RFC.

@MajorBreakfast
Contributor

thanks both for your (as usual!) insightful commentary

Thanks for the incredible work you all are doing!

  • Getting async functions without stabilizing pin is of course fine as well! I'm looking forward to reading about your approach in the next RFC
  • I'd really like it if futures were annotated with must_use. Such an annotation currently only works with types, not with traits, though. (playground link) Can it still be done?

@aturon
Member Author

aturon commented Apr 24, 2018

I've opened a new RFC based on the more conservative proposal we've been discussing, and will be closing this one out.

Thanks all for the great discussion so far! I'm happy with where this is heading -- as I argue in the new stabilization section, I believe this updated RFC gives us a path to shipping async/await that is very low-risk.

@HadrienG2, I've also incorporated most of your feedback in this new RFC.

See you all on the new thread!

@aturon aturon closed this Apr 24, 2018
@Kixunil

Kixunil commented Apr 25, 2018

I like how this is not being rushed. Better make it as good as possible. :)

@yasammez

This sounds really good. My only question is how, in the meantime, I as a user should decide which trait to use: Future or Async, and why? If I followed the discussion correctly, it depends on whether I am !Unpin. Future is the boxed version which always works, while Async is the better-performing version which requires pinning, yes? For a transition period this seems alright, as long as it gets cleaned up before stabilization or is very well documented. Apart from this (albeit very minor) concern, I am all for this proposal and look forward to your next RFC (and maybe blog post; I am greedy). Thanks for the great effort!
