
Redesign core architecture, support custom server requests #313

Merged
merged 20 commits into master from support-custom-server-requests
Mar 11, 2022

Conversation

ebkalderon
Owner

@ebkalderon ebkalderon commented Mar 3, 2022

Added

  • Add support for user-defined JSON-RPC requests via LspService::build().method(...).finish().
  • Client now implements Service<Request, Response = Option<Response>>, just like LspService.
  • Expose concurrency_level setting on Server.
  • Expose simpler jsonrpc::Request object with convenient builder interface.
  • Add .result()/.error() and .is_ok()/.is_error() methods to jsonrpc::Response.
  • Add convenient From implementations for jsonrpc::Id.
  • Expose Loopback trait to allow for mock service/client to be used with Server.

Changed

  • LspService now implements Service<Request, Response = Option<Response>>.
  • LspService::new() now returns a ClientSocket instead of a MessageStream.
  • Server::new() now requires a third ClientSocket argument instead of using .interleave().
  • Rename jsonrpc::Response::{ok, error} to jsonrpc::Response::{from_ok, from_error}.
  • Rename Client::send_custom_{request,notification} to Client::send_{request,notification}.

Fixed

  • Fix Server occasionally stalling by processing client responses separately from client-to-server requests.
  • Return error code -32600 (invalid request) if incoming JSON parses, but isn't a JSON-RPC request or response.

Removed

  • Remove Server::interleave() in favor of a third socket argument to Server::new().
  • Remove jsonrpc::{ClientRequest, Incoming, Outgoing, ServerRequest}.
  • Remove MessageStream.
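The -32600 behavior noted under "Fixed" boils down to a simple classification rule applied after parsing. Here is a minimal, std-only sketch of that rule; every name below is invented for illustration and is not tower-lsp API:

```rust
// Hypothetical sketch of the rule under "Fixed": JSON that parses
// successfully but is neither a JSON-RPC request nor a response is
// answered with error code -32600 (invalid request).

const INVALID_REQUEST: i64 = -32600;

/// A message after successful JSON parsing.
enum Parsed {
    Request { method: String },
    Response { id: u64 },
    Other, // valid JSON, but not a JSON-RPC message
}

/// Accept requests and responses; reject everything else with -32600.
fn dispatch(msg: &Parsed) -> Result<(), i64> {
    match msg {
        Parsed::Request { .. } | Parsed::Response { .. } => Ok(()),
        Parsed::Other => Err(INVALID_REQUEST),
    }
}

fn main() {
    assert!(dispatch(&Parsed::Request { method: "initialize".into() }).is_ok());
    assert!(dispatch(&Parsed::Response { id: 1 }).is_ok());
    assert_eq!(dispatch(&Parsed::Other), Err(-32600));
    println!("dispatch rules hold");
}
```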

This redesign has been a very long time coming, but it's finally here! 🎉 This pull request shouldn't affect too much code downstream, except the sacred initialization incantation is slightly different:

- let (service, messages) = LspService::new(|client| ...);
+ let (service, socket) = LspService::new(|client| ...);
- Server::new(stdin, stdout)
+ Server::new(stdin, stdout, socket)
-     .interleave(messages)
      .serve(service)
      .await;

Most importantly, support has been added for custom server requests via the following API:

impl LanguageServer for Foo { ... }

impl Foo {
    async fn request(&self) -> Result<...> { ... }

    async fn request_params(&self, params: ...) -> Result<...> { ... }

    async fn notification(&self) { ... }

    async fn notification_params(&self, params: ...) { ... }
}

let (service, socket) = LspService::build(|client| Foo { ... }) // Wow! :heart_eyes: 
    .method("custom/request", Foo::request)
    .method("custom/requestParams", Foo::request_params)
    .method("custom/notification", Foo::notification)
    .method("custom/notificationParams", Foo::notification_params)
    .finish();

In the process, this pull request also lands a slew of architectural improvements (under the hood and otherwise) and fixes a long-standing critical concurrency bug. Client now finally implements tower::Service, making it possible to fire off requests and notifications at the client "the Tower way". The jsonrpc module has seen much consolidation too, with a new and improved Request type replacing Message, ClientRequest, and ServerRequest. Finally, as part of the ground-up rewrite of Server, the server's concurrency level can now be customized by the user, if desired.
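To make the Service<Request, Response = Option<Response>> contract concrete, here is a minimal std-only model of its request/notification semantics. The trait and types below are illustrative stand-ins, not tower's or tower-lsp's real definitions:

```rust
// A minimal model of the shape this PR gives both Client and LspService:
// Service<Request, Response = Option<Response>>. Requests (messages
// carrying an id) yield Some(response); notifications (no id) yield None.
// All names here are illustrative, not tower-lsp's actual types.

struct Request { method: String, id: Option<u64> }
struct Response { id: u64 }

/// Stand-in for the tower::Service contract, minus readiness and async.
trait MockService {
    fn call(&mut self, req: Request) -> Option<Response>;
}

struct MockClient;

impl MockService for MockClient {
    fn call(&mut self, req: Request) -> Option<Response> {
        // A request produces a response with the matching id;
        // a notification produces no response at all.
        req.id.map(|id| Response { id })
    }
}

fn main() {
    let mut client = MockClient;
    let resp = client.call(Request { method: "custom/request".into(), id: Some(7) });
    assert_eq!(resp.map(|r| r.id), Some(7));
    let none = client.call(Request { method: "custom/notification".into(), id: None });
    assert!(none.is_none());
    println!("request => Some, notification => None");
}
```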

I didn't originally plan on all of these improvements landing in a single PR, but they nonetheless arose as a consequence of rewriting the internal JSON-RPC router to support custom requests.

Closes #226.
Closes #256.
Closes #177.
Partially addresses #284.

I'd love any feedback on this before merging, @silvanshade @Timmmm @IWANABETHATGUY @samscott89 @lgalabru! ❤️

Sorry for the noisy pings, but I'd appreciate any willing real-world testers.

@ebkalderon ebkalderon self-assigned this Mar 3, 2022
@ebkalderon ebkalderon force-pushed the support-custom-server-requests branch 3 times, most recently from 5c5feb1 to 497ad29 Compare March 3, 2022 09:22
@ebkalderon ebkalderon force-pushed the support-custom-server-requests branch 7 times, most recently from ed280bc to 534d770 Compare March 3, 2022 20:04
@ebkalderon
Owner Author

ebkalderon commented Mar 4, 2022

Ahh, looks like 7920a99 cannot be implemented right now on "runtime-agnostic" because async-codec-lite has a private Error type that cannot be matched upon directly. There should be no need for this to exist if <T as Decoder>::Error already has a From<std::io::Error> bound, though. 🤨 I'll see if I can open up a pull request to fix that. In the meantime, I'll drop that commit from this PR.

EDIT: Actually, I realize now that I should be able to retrieve the source and then downcast that into ParseError. It's a bit ugly, but it should work still. Restored in d5bb6f9.

@ebkalderon ebkalderon force-pushed the support-custom-server-requests branch 4 times, most recently from 0850ebc to d5bb6f9 Compare March 4, 2022 03:48
@silvanshade
Contributor

> Ahh, looks like 7920a99 cannot be implemented right now on "runtime-agnostic" because async-codec-lite has a private Error type that cannot be matched upon directly. There should be no need for this to exist if <T as Decoder>::Error already has a From<std::io::Error> bound, though. 🤨 I'll see if I can open up a pull request to fix that. In the meantime, I'll drop that commit from this PR.

Okay, I'll see if I can implement a change for that but your solution seems reasonable for now.

@silvanshade
Contributor

These changes sound great! I only skimmed through the source diffs so far but I think all the proposed changes make sense given previous discussions around the various issues.

@IWANABETHATGUY
Contributor

impl LanguageServer for Foo { ... }

impl Foo {
    async fn request(&self) -> Result<...> { ... }

    async fn request_params(&self, params: ...) -> Result<...> { ... }

    async fn notification(&self) { ... }

    async fn notification_params(&self, params: ...) { ... }
}

let (service, socket) = LspService::build(|client| Foo { ... }) // Wow! :heart_eyes: 
    .method("custom/request", Foo::request)
    .method("custom/requestParams", Foo::request_params)
    .method("custom/notification", Foo::notification)
    .method("custom/notificationParams", Foo::notification_params)
    .finish();

In the demo above, what is the type of params: serde_json::Value, or some struct that derives serde::Deserialize?

@IWANABETHATGUY
Contributor

Please ignore me, I didn't see the docs.
By the way, your custom request implementation is awesome. Thanks for your work!

@samscott89

I tried this branch out on the code I was working on that I mentioned back in #284 (comment).

As far as I can tell, this seems like it would have fixed the issue I had. So that's cool!

This new data model is significantly more generic than the old one, with
straightforward `Request` and `Response` types capable of representing
any valid JSON-RPC 2.0 message, rather than artificially restricting the
scope only to LSP request types. This opens the door to supporting
user-defined custom client-to-server requests. These slimmer data types
should also consume less memory per message.
It seems that even with this rewrite of `Server` there are still
concerns around stalls occurring when `max_concurrency` long-running
server tasks are spawned, causing backpressure. This stops the
`read_input` task from advancing forward, thereby preventing any
`$/cancelRequest` messages sent by the client from being received.

Unlike the code present in the `master` branch today, this situation
heals itself over time as the buffered futures eventually resolve and
return responses, freeing `read_input` again to continue advancing.
However, if any of these futures happen to be waiting to receive inbound
responses from the client, these futures will block forever and cause
the transport to stall once again.

A reproducible example of this issue can be created by adding this line
to the `execute_command()` handler in `examples/stdio.rs`, just before
the `self.client.log_message` call:

```
tokio::time::sleep(std::time::Duration::from_secs(10)).await;
```

and then in your LSP client of choice, execute the `dummy.do_something`
command on the server over and over again, as quickly as you can. The
server should slowly grow unresponsive and eventually stall completely
as the client reports that the server has timed out, usually after
5000ms for most language clients.

It would be nice if we could set the transport concurrency level to
something very, very high (like 100+, akin to the
`SETTINGS_MAX_CONCURRENT_STREAMS` setting in HTTP/2) and then have a
separate concurrency level for processing server requests specifically
(something customizable by the user and defaulting to much lower value,
like 4). I believe an approach like this would fix the issue for the
vast majority of users.

This can be achieved by raising the buffer size of the
`server_tasks_{tx,rx}` channel from 0 to 100, thereby allowing up to
100 server requests to be queued for processing at a time, but only
executed in batches of `max_concurrency`.
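The queueing scheme described above can be sketched with a std-only model. QUEUE_DEPTH and MAX_CONCURRENCY below are hypothetical names standing in for the real channel buffer size and max_concurrency setting; none of this is the actual tower-lsp implementation:

```rust
// Sketch of the queueing idea from the commit message above: up to 100
// server requests may be buffered at once, but only `max_concurrency`
// of them execute at any given time, so the reader task is rarely
// blocked by slow handlers. Names and constants are illustrative only.

use std::collections::VecDeque;

const QUEUE_DEPTH: usize = 100; // stand-in for the channel buffer size
const MAX_CONCURRENCY: usize = 4; // stand-in for max_concurrency

struct Scheduler {
    queued: VecDeque<u32>,
    running: Vec<u32>,
}

impl Scheduler {
    fn new() -> Self {
        Scheduler { queued: VecDeque::new(), running: Vec::new() }
    }

    /// Accept a request if the buffer has room; otherwise signal backpressure.
    fn try_enqueue(&mut self, id: u32) -> bool {
        if self.queued.len() < QUEUE_DEPTH {
            self.queued.push_back(id);
            true
        } else {
            false
        }
    }

    /// Promote queued requests into the running set, capped at the batch size.
    fn poll(&mut self) {
        while self.running.len() < MAX_CONCURRENCY {
            match self.queued.pop_front() {
                Some(id) => self.running.push(id),
                None => break,
            }
        }
    }
}

fn main() {
    let mut sched = Scheduler::new();
    for id in 0..10 {
        assert!(sched.try_enqueue(id));
    }
    sched.poll();
    // Only MAX_CONCURRENCY requests run; the rest stay buffered instead
    // of stalling the input loop.
    assert_eq!(sched.running.len(), 4);
    assert_eq!(sched.queued.len(), 6);
    println!("running={}, queued={}", sched.running.len(), sched.queued.len());
}
```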
@ebkalderon
Owner Author

Okay, I think this pull request is largely ready to merge! With some light integration testing out of the way and some external feedback addressed, it seems that the time has come to pull the trigger on this. It may take a bit to prepare the changelog entries for the next release, given all the changes, and it's very early in the morning where I am, so pushing the green button will have to wait another day or two. 😴

@ebkalderon ebkalderon force-pushed the support-custom-server-requests branch from 357e062 to 29f3249 Compare March 10, 2022 08:55
@ebkalderon ebkalderon force-pushed the support-custom-server-requests branch 2 times, most recently from 84f704d to 1af51d3 Compare March 11, 2022 01:53
The vast majority of `Method` implementers implement `Copy + Clone`,
except for "$/cancelRequest" which implements `Clone` only (due to
needing to clone `Arc<Pending>` on every call). As such, it should be
possible to place this bound on `handler: F` and avoid multiple
ownership altogether.
@ebkalderon ebkalderon force-pushed the support-custom-server-requests branch 2 times, most recently from 871f552 to cb0c0b7 Compare March 11, 2022 01:55
In this case, we take "callback" to refer to some function or method
passed as an argument to be executed by the `Router<S>`, while "handler"
refers to the object wrapping said callback that gets executed on every
`<Router<S> as Service<Request>>::call`.
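That callback/handler distinction can be sketched as follows; Handler and its callback field are illustrative names, not the actual Router<S> internals:

```rust
// Sketch of the terminology above: a "callback" is the user-supplied
// function, while a "handler" is the wrapper object the router stores
// and invokes on every dispatch. Names are illustrative only.

struct Handler {
    callback: Box<dyn Fn(&str) -> String>,
}

impl Handler {
    /// Executed by the router on each incoming call for this method.
    fn handle(&self, params: &str) -> String {
        (self.callback)(params)
    }
}

fn main() {
    // Registering a callback wraps it in a handler, much as a router might.
    let handler = Handler {
        callback: Box::new(|params| format!("handled: {params}")),
    };
    assert_eq!(handler.handle("ping"), "handled: ping");
    println!("{}", handler.handle("ping"));
}
```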
@ebkalderon ebkalderon force-pushed the support-custom-server-requests branch 2 times, most recently from c217baf to c5c6752 Compare March 11, 2022 02:04
@ebkalderon ebkalderon force-pushed the support-custom-server-requests branch from c5c6752 to 446a0d8 Compare March 11, 2022 02:45
@ebkalderon
Owner Author

I've chosen to rename .method() to .custom_method(), making it a bit clearer that this builder method does not redefine preexisting registered RPC methods, but can only define custom methods on top of the LSP specification-defined ones.

@ebkalderon
Owner Author

Looks like this is ready to merge. Let's go! 🎉

@ebkalderon ebkalderon merged commit 98a6b1f into master Mar 11, 2022
@ebkalderon ebkalderon deleted the support-custom-server-requests branch March 11, 2022 03:18
@IWANABETHATGUY
Contributor

Could you please publish a new version? Currently I'm using the git repo as my dependency, which may break when you land further breaking changes.

@ebkalderon
Owner Author

Yup, I'm waiting for PR #321 to finish in CI as I type this. 😄

Successfully merging this pull request may close these issues.

  • Receive a custom request from client
  • Implement Service for Client
  • Remodel service as bidirectional stream