
JSON-RPC: performance problem with chainHead_v1_storage queries using descendantValues #5589

Open

josepot opened this issue Sep 4, 2024 · 3 comments · May be fixed by #5741
Comments


josepot commented Sep 4, 2024

We’ve encountered a performance issue when executing chainHead_v1_storage queries with the descendantValues option in the new JSON-RPC API.

When performing such a query, the RPC node sends an operationStorageItems notification containing only 5 items. This is immediately followed by a waitingForContinue notification. Upon receiving this, we immediately respond with a chainHead_v1_continue request, and this cycle repeats.

This results in certain queries taking an excessively long time to resolve. For example, requesting the descendant values of NominationPools.PoolMembers on the Polkadot relay-chain can take 6 to 10 minutes to complete using the new JSON-RPC API, while the same request takes only a few seconds with the legacy RPC API.
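
To make the cost concrete, here is a minimal, self-contained Rust sketch of the interaction pattern described above. The `Node` and `Notification` types are made up for illustration and are not the actual polkadot-sdk or spec types; the point is only that every batch of 5 items costs one waitingForContinue / chainHead_v1_continue round trip.

```rust
// Made-up types for illustration only; not the polkadot-sdk API.
enum Notification {
    OperationStorageItems { items: Vec<(String, String)> },
    WaitingForContinue,
    OperationStorageDone,
}

struct Node {
    remaining: Vec<(String, String)>,
}

impl Node {
    // The node emits at most 5 items, then asks the client to continue.
    fn next_notifications(&mut self) -> Vec<Notification> {
        if self.remaining.is_empty() {
            return vec![Notification::OperationStorageDone];
        }
        let n = self.remaining.len().min(5);
        let batch: Vec<_> = self.remaining.drain(..n).collect();
        vec![
            Notification::OperationStorageItems { items: batch },
            Notification::WaitingForContinue,
        ]
    }
}

fn main() {
    // 1_000 entries stand in for a large map such as NominationPools.PoolMembers.
    let mut node = Node {
        remaining: (0..1_000)
            .map(|i| (format!("key{i}"), format!("value{i}")))
            .collect(),
    };

    let mut items = Vec::new();
    let mut continue_round_trips = 0;
    'operation: loop {
        for notification in node.next_notifications() {
            match notification {
                Notification::OperationStorageItems { items: batch } => items.extend(batch),
                // The client answers every waitingForContinue with
                // chainHead_v1_continue: one extra round trip per 5 items.
                Notification::WaitingForContinue => continue_round_trips += 1,
                Notification::OperationStorageDone => break 'operation,
            }
        }
    }

    println!(
        "received {} items over {} continue round trips",
        items.len(),
        continue_round_trips
    );
}
```

For 1,000 entries that is already 200 continue round trips, each paying full network latency; a map the size of NominationPools.PoolMembers multiplies that further.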

Expected Behavior

The node should efficiently return a larger number of items per operationStorageItems notification, especially when there is no sign of back-pressure (e.g., if the chainHead_v1_continue response is received promptly). Ideally, the node could send hundreds of items at once, and dynamically adjust the number of items sent based on the system's responsiveness.

Current Behavior

The node currently sends only 5 items per operationStorageItems notification, each one followed by a waitingForContinue notification, which significantly slows down the resolution of large storage queries.

Proposed Solution

  • Increase the number of items sent in each operationStorageItems notification, potentially sending several hundred at a time.
  • Adapt the number of items sent based on the system’s responsiveness (e.g., if the chainHead_v1_continue response is received quickly, send more items in the next notification); a rough sketch of this adaptive behaviour follows below.
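
Something along these lines could work for the adaptive part. This is only a rough sketch with invented names and thresholds (the initial/min/max batch sizes and the 50 ms latency cut-off are arbitrary), not the actual node implementation:

```rust
use std::time::Duration;

// Grows or shrinks the per-notification batch based on how quickly the
// client answered the previous waitingForContinue.
struct AdaptiveBatch {
    size: usize,
    min: usize,
    max: usize,
}

impl AdaptiveBatch {
    fn new() -> Self {
        Self { size: 32, min: 5, max: 1024 }
    }

    // Called with the time the client took to send chainHead_v1_continue.
    fn on_continue(&mut self, client_latency: Duration) {
        if client_latency < Duration::from_millis(50) {
            // Client keeps up: double the batch, up to the cap.
            self.size = (self.size * 2).min(self.max);
        } else {
            // Client is slow: back off.
            self.size = (self.size / 2).max(self.min);
        }
    }
}

fn main() {
    let mut batch = AdaptiveBatch::new();
    batch.on_continue(Duration::from_millis(10));
    batch.on_continue(Duration::from_millis(10));
    batch.on_continue(Duration::from_millis(200));
    println!("next batch size: {}", batch.size); // 32 -> 64 -> 128 -> 64
}
```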

Logs

slowOperationStorageItems.log

@github-actions github-actions bot added the I10-unconfirmed Issue might be valid, but it's not yet known. label Sep 4, 2024
jsdw (Contributor) commented Sep 4, 2024

Just to copy in my thoughts on the approach too:

Given that we have backpressure, we could also just never emit waitingForContinue; the node could internally put storage messages into a queue, draining it as fast as the client can accept messages from it, and if the queue fills, the node won't internally try to fetch more storage values until it has space again. This would hopefully allow it to be pretty quick!
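
As an illustration of that idea, here is a minimal sketch using a bounded std::sync::mpsc::sync_channel as a stand-in for the node-internal queue. The real chainHead code is async and would use an async bounded channel instead; everything here is hypothetical and only shows the backpressure mechanism.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

fn main() {
    // Bounded buffer: once 16 items are queued, the producer blocks instead
    // of fetching more storage values.
    let (tx, rx) = sync_channel::<(String, String)>(16);

    // Stand-in for the task that iterates over descendant storage entries.
    let producer = thread::spawn(move || {
        for i in 0..1_000 {
            // `send` blocks while the queue is full, pausing the iteration
            // until the consumer has drained some items.
            tx.send((format!("key{i}"), format!("value{i}"))).unwrap();
        }
        // Dropping `tx` closes the channel and ends the operation.
    });

    // Stand-in for the task that forwards operationStorageItems notifications
    // to the client as fast as the connection accepts them.
    let mut forwarded = 0;
    for _item in rx {
        forwarded += 1;
    }

    producer.join().unwrap();
    println!("forwarded {forwarded} items with no continue round trips");
}
```

Because `send` blocks once the buffer is full, storage iteration pauses automatically until the client drains items, giving backpressure without any explicit waitingForContinue / chainHead_v1_continue exchange.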

josepot (Author) commented Sep 4, 2024

> Just to copy in my thoughts on the approach too:
>
> Given that we have backpressure, we could also just never emit waitingForContinue; the node could internally put storage messages into a queue, draining it as fast as the client can accept messages from it, and if the queue fills, the node won't internally try to fetch more storage values until it has space again. This would hopefully allow it to be pretty quick!

works for me!!

@niklasad1 niklasad1 added I3-annoyance The node behaves within expectations, however this “expected behaviour” itself is at issue. T3-RPC_API This PR/Issue is related to RPC APIs. and removed I10-unconfirmed Issue might be valid, but it's not yet known. labels Sep 4, 2024
@niklasad1 niklasad1 self-assigned this Sep 4, 2024
@niklasad1 niklasad1 linked a pull request Sep 17, 2024 that will close this issue
@Polkadot-Forum

This issue has been mentioned on Polkadot Forum. There might be relevant details there:

https://forum.polkadot.network/t/polkadot-api-updates-thread/7685/15
