
console.log + XTerm cause memory leak #4671

Closed
tomaskulich opened this issue Jan 13, 2016 · 8 comments
Labels
duplicate: Issues and PRs that are duplicates of other issues or PRs.
process: Issues and PRs related to the process subsystem.

Comments

@tomaskulich

When I run this simple script:

var i = 0
while (true) {
  console.log(i)
  i += 1
}

I get only around 3k–5k numbers printed to my console, and then node just stops in a very strange state:

  • no more numbers are printed out, but the process is still running
  • node starts allocating a lot of memory; after roughly 1 minute, memory consumption is as high as 1.5 GB (but still no numbers are printed out)
  • when I run with node --trace-gc, I notice a lot of GC activity after the numbers stop (see below)

I'm using the newest version of XTerm on Arch (xterm-320-2).

Things that do not affect the faulty behavior in any way:

  • I've tried this on two different Arch Linux machines
  • different versions of node (v0.12.9, v3.3.1, v5.0.0)
  • using process.stdout.write instead of console.log

What makes the problem disappear:

  • piping the output to a file instead of just displaying it (see the sketch after this list)
  • using a different terminal (urxvt on Arch Linux, gnome-terminal on Ubuntu)
  • running an analogous Python script on my machine (Arch, XTerm)
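
A minimal sketch of that first difference, assuming only Node's standard process.stdout.isTTY flag: it reports whether stdout is attached to a terminal or to a file/pipe, which is exactly what changes between the "problem" and "no problem" cases above.

// Illustration only; assumes the standard process.stdout.isTTY flag.
// isTTY is true when stdout is a terminal, undefined when stdout has been
// redirected to a file or a pipe (e.g. `node myscript.js > out.txt`).
if (process.stdout.isTTY) {
  console.log('stdout is a terminal (XTerm, urxvt, gnome-terminal, ...)')
} else {
  console.log('stdout is a file or a pipe')
}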

output from node --trace-gc myscript.js:
...
...
4447
4448
4449
4450
4451
4452
4453
4454
4455
4456[4091] 213 ms: Scavenge 9.8 (43.5) -> 6.9 (43.5) MB, 1.7 ms [allocation failure].
[4091] 225 ms: Scavenge 10.4 (43.5) -> 7.6 (44.5) MB, 1.9 ms [allocation failure].
[4091] 245 ms: Scavenge 11.1 (45.5) -> 8.2 (45.5) MB, 3.5 ms [allocation failure].
[4091] 269 ms: Scavenge 15.8 (45.5) -> 9.6 (45.5) MB, 2.9 ms [allocation failure].
[4091] 309 ms: Scavenge 16.5 (45.5) -> 10.8 (46.5) MB, 4.0 ms (+ 14.7 ms in 103 steps since last GC) [allocation failure].
[4091] 317 ms: Mark-sweep 11.5 (46.5) -> 10.0 (47.5) MB, 5.0 ms (+ 16.1 ms in 113 steps since start of marking, biggest step 0.3 ms) [GC interrupt] [GC in old space requested].
[4091] 355 ms: Scavenge 18.2 (47.5) -> 11.6 (47.5) MB, 2.5 ms [allocation failure].
[4091] 379 ms: Scavenge 18.5 (48.5) -> 12.8 (49.5) MB, 3.8 ms [allocation failure].
[4091] 407 ms: Scavenge 19.9 (49.5) -> 14.1 (50.5) MB, 7.1 ms [allocation failure].
[4091] 489 ms: Scavenge 29.2 (51.5) -> 16.8 (52.5) MB, 6.3 ms [allocation failure].
[4091] 549 ms: Scavenge 30.7 (52.5) -> 19.3 (55.5) MB, 7.7 ms [allocation failure].
[4091] 607 ms: Scavenge 33.4 (55.5) -> 21.9 (57.5) MB, 7.4 ms [allocation failure].
[4091] 662 ms: Scavenge 35.9 (57.5) -> 24.4 (60.5) MB, 7.1 ms [allocation failure].
[4091] 713 ms: Scavenge 38.5 (60.5) -> 26.9 (62.5) MB, 7.1 ms [allocation failure].
[4091] 765 ms: Scavenge 41.0 (63.5) -> 29.5 (65.5) MB, 7.0 ms [allocation failure].
[4091] 815 ms: Scavenge 43.5 (66.5) -> 32.0 (67.5) MB, 6.9 ms [allocation failure].
[4091] 866 ms: Scavenge 46.1 (68.5) -> 34.6 (70.5) MB, 7.1 ms [allocation failure].
[4091] 954 ms: Scavenge 48.6 (70.5) -> 37.1 (73.5) MB, 10.3 ms (+ 29.1 ms in 208 steps since last GC) [allocation failure].
[4091] 1034 ms: Scavenge 51.2 (73.5) -> 39.7 (75.5) MB, 7.6 ms (+ 25.1 ms in 208 steps since last GC) [allocation failure].
[4091] 1050 ms: Mark-sweep 40.5 (75.5) -> 34.8 (77.5) MB, 11.8 ms (+ 56.1 ms in 429 steps since start of marking, biggest step 0.3 ms) [GC interrupt] [GC in old space requested].
[4091] 1103 ms: Scavenge 50.9 (77.5) -> 37.7 (77.5) MB, 4.6 ms [allocation failure].
[4091] 1152 ms: Scavenge 51.4 (77.5) -> 40.1 (79.5) MB, 7.1 ms [allocation failure].
[4091] 1200 ms: Scavenge 54.2 (79.5) -> 42.7 (81.5) MB, 6.5 ms [allocation failure].
[4091] 1249 ms: Scavenge 56.7 (81.5) -> 45.2 (83.5) MB, 6.7 ms [allocation failure].
[4091] 1297 ms: Scavenge 59.3 (83.5) -> 47.8 (84.5) MB, 6.4 ms [allocation failure].
...
...

@omsmith
Contributor

omsmith commented Jan 13, 2016

(EDIT: Probably) Dupe of #1741

Also #2970, #3171

Fishrock123 added the process label on Jan 13, 2016
@Fishrock123
Contributor

@tomaskulich Mind trying the cases in #1741 and seeing whether any of the options you tried above makes a difference?

Edit: doh. Apparently this case didn't fail there, but it does in XTerm...

@ChALkeR
Member

ChALkeR commented Jan 13, 2016

Most probably this is a dupe of #1741, #2970, #3171, #4280.
The difference between terminal emulators can be caused by their speed: if xterm is considerably slower, it will print fewer lines before all the memory gets consumed by the queue (or it may require shorter or longer messages, depending on which operation is slow in your terminal emulator, generic character printing or a newline).
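
A minimal sketch of how that queue can be observed from a script, using only the documented stream API (whether and how often this triggers depends on the platform and on how fast the terminal is): process.stdout.write() returns false once the stream's internal buffer is over its high-water mark; the data is still accepted, but it sits in the process's memory until the destination catches up.

// Illustration only: counts how often write() reports backpressure.
const total = 100000
let pastHighWaterMark = 0
for (let i = 0; i < total; i += 1) {
  // write() returns false when the chunk had to be queued in memory because
  // the destination (here, the terminal) is not keeping up.
  if (!process.stdout.write(i + '\n')) {
    pastHighWaterMark += 1
  }
}
process.stderr.write(`writes queued past the high-water mark: ${pastHighWaterMark}/${total}\n`)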

ChALkeR added the duplicate label on Jan 13, 2016
@tomaskulich
Author

Oh, I see, that makes sense. It's just interesting that at some point XTerm (quite abruptly) stopped printing at all. I'd expect it to slow down, but this sudden stop is strange.

@ChALkeR
Member

ChALkeR commented Jan 13, 2016

@tomaskulich I will try to reproduce that in several hours just to be sure.

@sam-github
Contributor

This doesn't surprise me: you are blocking the event loop with an infinite JS loop. Wouldn't you expect node's event loop to slowly accumulate undone work if you never return to it?
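
For comparison, a sketch of the original loop rewritten to yield to the event loop, using only the standard backpressure signals (the boolean returned by process.stdout.write() and the 'drain' event): the script still prints forever, but the amount of queued output stays around the stream's high-water mark instead of growing without bound.

// Illustration only: same endless output, but bounded memory use.
let i = 0
function writeNumbers() {
  let ok = true
  while (ok) {
    ok = process.stdout.write(i + '\n')  // false once the stream starts buffering
    i += 1
  }
  // Stop and let the event loop run; resume once the buffered data has flushed.
  process.stdout.once('drain', writeNumbers)
}
writeNumbers()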

@tomaskulich
Author

@sam-github that's perfectly fine. What surprises me is that after ~4000 lines of output, I cease to see any new lines being printed. I'd expect the terminal to keep doing its work (at least at some speed). As you say, since it cannot keep pace with node, node will eventually run out of memory, which is understandable.

I'm not saying it is a big issue; I was just worried there was something much more serious behind this.

@cjihrig
Contributor

cjihrig commented Jan 15, 2016

I'm going to close this as it really sounds like a duplicate of all the other "console.log() in a tight loop" issues. If it turns out to be something else, we can reopen it.

cjihrig closed this as completed on Jan 15, 2016