Increase IO read size default #97

Merged
1 commit merged into main on Aug 8, 2024

Conversation

@huntie huntie commented Aug 7, 2024

Summary

Increases (4x) the size limit used when making IO.read CDP requests. This can dramatically improve performance when requesting large files (e.g. bundle sourcemap) from Metro (particularly when loaded remotely) as fewer file chunks need to be sequentially requested — following facebook/react-native#45850.
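For intuition, the number of sequential round trips scales inversely with the read size. A quick sketch (purely illustrative; the 6 MiB file size below is an assumption, not a measured value):

```typescript
// Hypothetical illustration (not the actual devtools-frontend code):
// the number of sequential IO.read round trips needed to stream a file
// is the file size divided by the chunk size, rounded up.
function chunkCount(fileSizeBytes: number, chunkSizeBytes: number): number {
  return Math.ceil(fileSizeBytes / chunkSizeBytes);
}

// An assumed ~6 MiB source map, old 1 MiB limit vs new 4 MiB limit:
const fileSize = 6 * 1024 * 1024;
console.log(chunkCount(fileSize, 1_048_576)); // 6 round trips
console.log(chunkCount(fileSize, 4_194_304)); // 2 round trips
```

Because the requests are sequential (see discussion below), each saved round trip removes a full latency hop from the critical path.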

Test plan

(Using Android build based on following facebook/react-native#45850.)

  • Build rn-chrome-devtools-frontend using RN sync-and-build workflow
  • Sync changes onto remote devserver
  • Start Metro on devserver
  • Connect React Native DevTools
  • Navigate to Sources panel and observe requests in Protocol Monitor
  • (Close DevTools and rebuild the app between runs for a clean cold start.)
| Before | After |
| --- | --- |
| https://pxl.cl/5nnsq | https://pxl.cl/5nnwm |
| Makes 6 requests using `size: 1048576` | ✅ Makes 2 requests using `size: 4194304`, rendering the source file in the browser significantly faster |

Before/after side-by-side 🎬: https://pxl.cl/5nnHF

Upstreaming plan

  • This commit should be sent as a patch to the upstream devtools-frontend repo. I've reviewed the contribution guide.
  • This commit is React Native-specific and cannot be upstreamed.

Note

This is absolutely upstreamable, but is an opinionated increase targeting React Native (mobile app) engineers, and our single-window use case, where it's assumed we'll have more system memory available compared to the browser use case.

@robhogan robhogan left a comment

LGTM. Just curious how you settled on 4x as opposed to 2x or 10x?

(IMO, another justification for diverging from Chrome here is that we typically expect dev server <-> device to be local or at least over a fast network, whereas Chrome needs to think about fetching over slow internet.)

robhogan commented Aug 7, 2024

Not this PR, but just to note something closely related that we absolutely should upstream - CDT currently requires at least two IO.reads for every request, because it drops data when eof: true (so there must always be one empty eof response).

It shouldn't be difficult to allow data and eof: true in the same response - and indeed the docs already imply that's allowed.
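A hypothetical sketch of such a stream-reading loop (names and response shapes are assumed for illustration; this is not the actual devtools-frontend code). If a response may carry `data` together with `eof: true`, the trailing empty read round trip can be dropped:

```typescript
// Minimal shape of an IO.read-style response (assumed for this sketch).
interface ReadResponse {
  data: string;
  eof: boolean;
}

// Sequentially drains a stream, keeping any data that arrives on the
// same response as eof: true, so no final empty read is required.
async function readStream(
  read: (size: number) => Promise<ReadResponse>,
  chunkSize: number,
): Promise<string> {
  let result = '';
  for (;;) {
    const {data, eof} = await read(chunkSize);
    result += data; // keep the data even when eof is true
    if (eof) {
      break; // no trailing empty read round trip
    }
  }
  return result;
}
```

With this handling, a file that fits in N chunks costs N requests rather than N + 1.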

huntie commented Aug 7, 2024

@robhogan Good question! It's arbitrary. My thought process was that when we hit a real-world bottleneck, we typically double capacity to add headroom — in this case, doubling twice until I could see a sufficient end-to-end improvement. No idea what the safe maximum would be, but it felt like we could keep this relatively constrained.

motiz88 commented Aug 7, 2024

@huntie, @robhogan I'll defer to you both but I have some non-blocking (heh) thoughts:

  • Why does the number of chunks have such a dramatic impact on performance? This smells like a potential issue elsewhere in our system that we should probably address (or at least explain as part of the reasoning behind a change like this).
  • It might be worthwhile to imagine an upstreamable solution that still lets Chrome keep their default for the browser case. (e.g. some hint within the protocol?) Although the current super-simple solution is great too.
  • It might be worthwhile to trace the history of this magic number upstream and/or ask the Chrome team for their thoughts about a coordinated increase.

robhogan commented Aug 7, 2024

> Why does the number of chunks have such a dramatic impact on performance? This smells like a potential issue elsewhere in our system that we should probably address (or at least explain as part of the reasoning behind a change like this).

AFAICT, this is mostly a CDP design issue - CDT waits until it has received and decoded an IO.read response before requesting the next chunk, so it's not a continuous stream, and latency (especially the extra proxy hop*) makes it quite choppy. The device-side request/response turnaround is typically relatively fast.

(This is arguably another symptom of our high bandwidth but high latency network arrangement, plus our relatively large bundles + source maps vs what's typical on web)

*Worth measuring this - parsing and reserialising in the proxy might introduce a non-trivial overhead for these multi-MB messages.
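A back-of-envelope model of the above (all numbers assumed, purely illustrative): since each IO.read waits on the previous response, total time is roughly the chunk count times the sum of per-request latency and per-chunk transfer time.

```typescript
// Hypothetical latency model for sequential chunked reads. Total data
// transferred is the same either way; only the number of latency hops
// on the critical path changes.
function totalTimeMs(
  chunks: number,
  rttMs: number,
  transferMsPerChunk: number,
): number {
  return chunks * (rttMs + transferMsPerChunk);
}

// Assuming a 100 ms round trip through the proxy and a fixed total
// transfer budget (300 ms of payload either way):
console.log(totalTimeMs(6, 100, 50));  // 900 ms with 1 MiB chunks
console.log(totalTimeMs(2, 100, 150)); // 500 ms with 4 MiB chunks
```

This is why fewer, larger chunks help most on high-bandwidth but high-latency links, as described above.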

huntie commented Aug 8, 2024

Let's start with this then keep iterating. @motiz88 Do we have any internal doc/inbox for items to discuss with the Chrome team?

@huntie huntie merged commit a93a780 into main Aug 8, 2024
3 checks passed
@huntie huntie deleted the increase-io-read-size branch August 8, 2024 09:21
huntie added a commit to huntie/react-native that referenced this pull request Aug 21, 2024
Summary:
Temporarily disable the `nativeSourceCodeFetching` capability — which reverts this to the legacy handling in the Inspector Proxy.

This is because we've noticed performance issues when loading large bundle source maps, particularly on Android, with a nontrivial path to optimising this ([raising the frontend `IO.read` size](facebookexperimental/rn-chrome-devtools-frontend#97) further is leading to WebSocket disconnections on Android 😐).

Changelog: [Internal]

Differential Revision: D61543480
facebook-github-bot pushed a commit to facebook/react-native that referenced this pull request Aug 21, 2024
Summary:
Pull Request resolved: #46132

Temporarily disable the `nativeSourceCodeFetching` capability — which reverts this to the legacy handling in the Inspector Proxy.

This is because we've noticed performance issues when loading large bundle source maps, particularly on Android, with a nontrivial path to optimising this ([raising the frontend `IO.read` size](facebookexperimental/rn-chrome-devtools-frontend#97) further is leading to WebSocket disconnections on Android 😐).

Changelog: [Internal]

Reviewed By: robhogan

Differential Revision: D61543480

fbshipit-source-id: ee66b4cebd40f8cc6466270c5875df744d2b588a