
Kibana process shuts down over time #5764

Closed
Cyberuben opened this issue Dec 23, 2015 · 12 comments

@Cyberuben

I have noticed that if I start the Kibana service on my CentOS machine, it shuts itself down after a while. I do not have any log messages that indicate anything strange: the /var/log/kibana/kibana.stderr file is completely empty, and the /var/log/kibana/kibana.stdout file shows "normal" logging, like requests and health checks.

I installed Kibana using this guide: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7. I start Kibana using sudo service kibana start and the status is green, but after a few hours of inactivity I tried to access it again and the server reports that the service is no longer running.

@rashidkpc
Contributor

Check your /var/log/messages; it might be getting killed by the oom_killer if you're running on a low-memory instance. Try setting the NODE_OPTIONS flag to give node a lower memory limit:

NODE_OPTIONS="--max-old-space-size=512"
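
For reference, Kibana's bin/kibana wrapper should pass $NODE_OPTIONS through to node, so exporting the variable in whatever environment file your init script sources is usually enough. The file name below is an assumption taken from common CentOS layouts; check the init script from the guide to see what it actually reads:

# /etc/sysconfig/kibana -- assumed path; confirm your init script sources it
# Cap node's old-generation heap at ~512 MB so Kibana stays within the instance's memory
NODE_OPTIONS="--max-old-space-size=512"
export NODE_OPTIONS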

@Cyberuben
Author

There is not a single occurrence of oom_killer in my /var/log/messages.

cat /var/log/messages | grep kibana outputs the following:

Dec 22 00:11:56 ooruben /etc/init.d/kibana: Attempting 'start' on kibana
Dec 22 00:11:56 ooruben /etc/init.d/kibana: kibana started
Dec 22 19:33:24 ooruben /etc/init.d/kibana: Attempting 'start' on kibana
Dec 22 19:33:24 ooruben /etc/init.d/kibana: kibana started
Dec 22 19:33:24 ooruben kibana: kibana started
Dec 23 15:07:46 ooruben /etc/init.d/kibana: Attempting 'start' on kibana
Dec 23 15:07:46 ooruben /etc/init.d/kibana: kibana started

I don't stop it anywhere; it just dies. I've set the NODE_OPTIONS you suggested and will report back later.

@rashidkpc
Contributor

You may want to grep for node instead of kibana.
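
For example, something along these lines should surface node-related kernel messages, including any oom-killer activity (standard grep, nothing Kibana-specific):

# search the syslog for node and OOM entries; adjust the path if your distro logs elsewhere
sudo grep -iE 'node|oom' /var/log/messages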

@Cyberuben
Author

These are the only lines that stand out:

Dec 22 16:37:21 ooruben kernel: traps: node[12977] general protection ip:98a71a sp:7fff0eafa0d0 error:0 in node[400000+fcc000]
Dec 23 13:55:03 ooruben kernel: traps: node[643] general protection ip:987eb4 sp:7fffd3952260 error:0 in node[400000+fcc000]

@rashidkpc
Contributor

No idea at all what's going on there. What kind of box is this on? Bad RAM maybe? Does this happen on a different machine?
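
If bad RAM is a suspect, one way to check it from userspace (not something from the thread, just a common approach) is memtester:

# test 512 MB of RAM for one pass; requires the memtester package
sudo memtester 512M 1

An offline MemTest86 run is more thorough, since memtester can only exercise memory the kernel is willing to hand it.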

@jchannon

I think I have the same issue using the above-linked Docker image.

docker stats produces:

CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O                   BLOCK I/O
elk                 0.13%               272.2 MB / 1.023 GB   26.62%              113.3 kB / 1.287 MB       24.35 GB / 81.59 MB
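
If the container keeps dying, it's worth asking Docker whether the last exit was an OOM kill -- a quick check, assuming the container is named elk as in the stats output above:

# prints true if the kernel OOM killer terminated the container's main process
docker inspect -f '{{.State.OOMKilled}}' elk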

@ivankennethwang

Hi,

I'm experiencing the same issue. Running on an EC2 t2.small, Amazon Linux AMI (CentOS based), set up following the same tutorial.

ec2-user@ip-10-1-1-67:~$ sudo cat /var/log/messages | grep node
Feb 23 03:26:41 ip-10-1-1-67 kernel: [70501.741393]  [<ffffffff811640f9>] __alloc_pages_nodemask+0x8a9/0x8d0
Feb 23 03:26:41 ip-10-1-1-67 kernel: [70501.883140] [ 2169]   496  2169   579670   374092    1439    1315        0             0 node
Feb 23 03:26:41 ip-10-1-1-67 kernel: [70501.963612] Out of memory: Kill process 2169 (node) score 734 or sacrifice child
Feb 23 03:26:41 ip-10-1-1-67 kernel: [70501.967159] Killed process 2169 (node) total-vm:2318680kB, anon-rss:1496368kB, file-rss:0kB
Feb 24 14:24:00 ip-10-1-1-67 kernel: [196340.472294] node invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Feb 24 14:24:00 ip-10-1-1-67 kernel: [196340.476696] node cpuset=/ mems_allowed=0
Feb 24 14:24:00 ip-10-1-1-67 kernel: [196340.479323] CPU: 0 PID: 22279 Comm: node Tainted: G            E   4.1.17-22.30.amzn1.x86_64 #1
Feb 24 14:24:00 ip-10-1-1-67 kernel: [196340.533694]  [<ffffffff811640f9>] __alloc_pages_nodemask+0x8a9/0x8d0
Feb 24 14:24:00 ip-10-1-1-67 kernel: [196340.890332] [22279]   496 22279   414131   212559     804     689        0             0 node
Feb 24 14:24:00 ip-10-1-1-67 kernel: [196340.936883] [12436]   500 12436   314815    13682     349      46        0             0 node
Feb 24 14:24:00 ip-10-1-1-67 kernel: [196340.942163] [12437]   500 12437   226036     2088      45      14        0             0 node
Feb 24 14:24:00 ip-10-1-1-67 kernel: [196340.947011] Out of memory: Kill process 22279 (node) score 417 or sacrifice child
Feb 24 14:24:00 ip-10-1-1-67 kernel: [196340.951188] Killed process 22279 (node) total-vm:1656524kB, anon-rss:850236kB, file-rss:0kB
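
Those kernel lines are the oom-killer at work. On any box showing the same symptom, a quick confirmation (plain dmesg, nothing Kibana-specific) is:

# list processes the kernel has killed for memory pressure since boot
dmesg | grep -i 'killed process'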

@marcinkubica

Just to let you know, guys, I've been running ELK on CentOS 7 since the ES 1.7 release and I've never had such problems.

Unlikely to be a Kibana issue at all.

@ivankennethwang

I started including NODE_OPTIONS="--max-old-space-size=512", hoping the problem won't happen again.

Linux ip-X-X-X-X 4.1.17-22.30.amzn1.x86_64 #1 SMP Fri Feb 5 23:44:22 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
ES: 2.2.0
Kibana: 4.4.1
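
One way to verify the flag actually reached the process, assuming your init script exports NODE_OPTIONS where bin/kibana can pick it up, is to look at the running node command line:

# the Kibana node process should list --max-old-space-size=512 among its arguments
ps -fC node | grep -- --max-old-space-size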

@megastef

Problem solved?
You might want to have a look here:
#6153 (comment)
#6153 (comment) - We run with "--max-old-space-size=200" using Node 4.3.1 in production.
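
If you want to see what heap ceiling a given value actually produces, node can report it directly; the v8 module's getHeapStatistics() should be available in Node 4.x, though treat the exact field as an assumption to verify:

# prints the effective heap limit in MB; expect somewhat more than 200 because of non-old-space overhead
node --max-old-space-size=200 -e 'var v8 = require("v8"); console.log(v8.getHeapStatistics().heap_size_limit / 1048576);'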

@ivankennethwang

Yes, problem solved. Thanks!

@rashidkpc
Contributor

Haven't heard back from the original poster. The other reports here are OOM issues; closing.
