Leaking `_Closure`s et al when using overlapping `Directory.watch` streams #52703
Comments
The many iterations isn't that far-fetched, btw; in #52447 it has likely occurred several hundred times (though in that instance it's a Mac, which doesn't suffer from this as significantly).
And for completeness' sake: after 250 such iterations we're at `_Closure`: 8,868,011 (541.3MB), and the heap is now 2GB+.
So, do we have a bug in the cancellation, or is the analyzer keeping subscriptions alive longer than necessary?
If we keep the analyzer out of it for the moment and just look at my example code --- you tell me.
If we simplify the example a bit and dump a heap snapshot:

```dart
import 'dart:async';
import 'dart:developer';
import 'dart:io';

void main() async {
  final dir = Directory.fromUri(Uri.parse('pkg/'));
  StreamSubscription? current;
  for (int i = 0; i < 15; i++) {
    final temp = dir.watch().listen((event) {});
    await current?.cancel();
    current = dir.watch().listen((event) {});
    await temp.cancel();
  }
  NativeRuntime.writeHeapSnapshotToFile('watch.heapsnapshot');
  current?.cancel();
}
```

then examine the snapshot. That indicates the leaked objects are retained via a static field:

```dart
class _InotifyFileSystemWatcher {
  static final Map<int, StreamController> _idMap = {};
}
```

Whenever we start watching a path for the first time we put an entry in there; whenever the last watch of a path gets cancelled we clear it out (this is done to share multiple watches of the same path). Looking closer at how the individual watcher is hooked up to that shared stream:

```dart
_pathWatched().pipe(_broadcastController);
```

This will unconditionally pipe all elements from the shared watch stream into the per-watcher broadcast controller. Internally this creates a subscription on the shared stream. When the watcher is cancelled, that subscription is never cancelled, so it (and the closures it refers to) stays alive. A change like this seems to fix it:
```diff
diff --git a/sdk/lib/_internal/vm/bin/file_patch.dart b/sdk/lib/_internal/vm/bin/file_patch.dart
index d60b8f95da1..353e30430b1 100644
--- a/sdk/lib/_internal/vm/bin/file_patch.dart
+++ b/sdk/lib/_internal/vm/bin/file_patch.dart
@@ -136,6 +136,7 @@ abstract class _FileSystemWatcher {
   final StreamController<FileSystemEvent> _broadcastController =
       new StreamController<FileSystemEvent>.broadcast();
+  late StreamSubscription _sourceSubscription;
 
   @patch
   static Stream<FileSystemEvent> _watch(
@@ -193,7 +194,13 @@ abstract class _FileSystemWatcher {
     }
     _watcherPath = _idMap[pathId];
     _watcherPath!.count++;
-    _pathWatched().pipe(_broadcastController);
+    _sourceSubscription = _pathWatched().listen((event) {
+      _broadcastController.add(event);
+    }, onError: (error, stack) {
+      _broadcastController.addError(error, stack);
+    }, onDone: () {
+      _broadcastController.close();
+    });
   }
 
   void _cancel() {
@@ -222,6 +229,7 @@ abstract class _FileSystemWatcher {
       _doneWatcher();
       _id = null;
     }
+    _sourceSubscription.cancel();
   }
 
   // Called when (and after) a new watcher instance is created and available.
```
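To see the `pipe` behaviour in isolation, here is a minimal standalone sketch (illustrative only, with made-up controller names; not the SDK code):

```dart
import 'dart:async';

// Piping a stream into a broadcast controller keeps the internal
// subscription on the source alive, even after every listener on the
// broadcast stream has cancelled.
void main() async {
  final source = StreamController<int>(
    onCancel: () => print('source subscription cancelled'),
  );
  final broadcast = StreamController<int>.broadcast();

  source.stream.pipe(broadcast); // subscribes to source internally

  final sub = broadcast.stream.listen(print);
  source.add(1);
  await Future.delayed(Duration.zero);
  await sub.cancel(); // only cancels the broadcast listener

  // 'source subscription cancelled' never prints: the pipe's
  // subscription (and whatever it captures) stays reachable until
  // the source stream is closed.
}
```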
Martin's proposed fix looks to be working for me!
Closes #52703

TEST=runtime/tests/vm/dart/regress_52703_test.dart

Cherry-pick: https://dart-review.googlesource.com/c/sdk/+/309862
Bug: #52703
Change-Id: Idd4b153a9c7b19995f6c6c8bda99b878a21425fb
Reviewed-on: https://dart-review.googlesource.com/c/sdk/+/311521
Reviewed-by: Lasse Nielsen <lrn@google.com>
Reviewed-by: Jens Johansen <jensj@google.com>
Reviewed-by: Martin Kustermann <kustermann@google.com>
Commit-Queue: Samuel Rawlins <srawlins@google.com>
TL;DR: I can leak as much RAM as I want. And the analyzer does.
Consider this example:
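A minimal sketch of the kind of program in question (a hypothetical reconstruction, not the original file: the `debugger()` stops and the overlapping-watch pattern are inferred from the walkthrough below and the simplified version in the comments above):

```dart
// Hypothetical reconstruction: repeatedly create overlapping
// Directory.watch() subscriptions on the same directory, pausing at
// debugger() between iterations so the allocation profile can be
// inspected in the Observatory.
import 'dart:async';
import 'dart:developer';
import 'dart:io';

void main() async {
  final dir = Directory.fromUri(Uri.parse('pkg/'));
  StreamSubscription? current;
  while (true) {
    debugger(); // pause; resume from the Observatory to leak some more
    final temp = dir.watch().listen((event) {});
    await current?.cancel();
    current = dir.watch().listen((event) {});
    await temp.cancel();
  }
}
```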
which could be run for instance (with the file saved as `leaking_closures`) with something like `dart --observe`. This will start the Observatory.
Opening it up I can go to the `allocation profile`, hit the `GC` button at the top, search for `_Closure` and see that there are 25 of them. I can click it, scroll down and click `all direct instances` to get a list of those 25. Clicking that list, I will execute an expression collecting their identity hash codes, and copy the data from the terminal.
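The expression itself isn't shown here; a hypothetical version (assuming the Observatory binds the instance list as `this`) would be:

```dart
// Hypothetical: snapshot the identity hash codes of all live closures.
this.map((c) => identityHashCode(c)).toSet()
```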
I'll then go to the debugger and continue, then back to the `allocation profile`, hitting `GC` again. I now have 70 `_Closure` objects. Again I'll click it, click `all direct instances`, go to the list, and this time execute a filtering expression (replacing the data in the hardcoded set with what you got before).
I get a list of 45 new ones. I'll pick one called `Closure (_cancel)`, click it, and execute an expression for its identity hash code.
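The expression is again not shown; presumably something as simple as (hypothetical):

```dart
// Hypothetical: get the identity hash code of the selected closure.
identityHashCode(this)
```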
In my case it gives `1365245867`. I'll also click `Retaining path` and I get:

Going to the debugger and continuing, back to the `allocation profile`, hitting `GC` again. I now have 79 `_Closure` objects. Again I'll click it, click `all direct instances`, go to the list, and execute a lookup expression (substituting the identity hash code from above).
The retaining path is
I do the same thing again, there's now 91 and the retaining path for the chosen one is
and again: 103 and
and for good measure once more: 115 and
So we're leaking 12 `_Closure` objects per iteration, and they're seemingly leaked via the `_previous` pointer on `_BroadcastSubscription`.

This might not sound that bad, but this --- via `package:watcher` --- is done in the analyzer for the opened directories; `package:watcher` --- on Linux (seemingly only on Linux though) --- then adds watchers for all subdirectories (as I understand it, it has to, because inotify doesn't work recursively, whereas the Windows and Mac equivalents do). When editing a `pubspec.yaml` file the analyzer basically does what the above script does and will leak similarly, but now, with lots more paths watched, it will leak a lot more.

When opening `pkg` and editing a `pubspec.yaml` file inside, waiting a bit, editing the yaml file again, waiting a bit (and so on and so forth) --- after doing it 25 times I have 937,835 `_Closure`s (directly using 57.2MB of RAM) --- and 937,057 `Context`s (28.7MB), 157,623 `_BroadcastSubscription`s (16.8MB), etc.

At somewhere between 70 and 80 iterations it's at:
- `_Closure`: 2,998,725 (183.0MB)
- `Context`: 2,997,881 (91.5MB)
- `_BroadcastSubscription`: 501,262 (53.5MB)
/cc @lrhn