
Thruk 3.10 livestatus per_source_limit #1386

Open
Brickius opened this issue Aug 7, 2024 · 1 comment
Comments


Brickius commented Aug 7, 2024

Describe the bug
We have two Thruk servers, both running 3.10.
They both connect to the same set of Nagios servers, but recently we have been receiving timeouts when connecting to certain large Nagios backends.
I can see the issue is livestatus on the Nagios servers hitting its xinetd connection limits:

```
Aug 07 08:34:54 nagserver.domain.com xinetd[2040]: FAIL: livestatus per_source_limit from=thrukserver
Aug 07 08:34:54 nagserver.domain.com xinetd[2040]: FAIL: livestatus per_source_limit from=thrukserver
Aug 07 08:34:54 nagserver.domain.com xinetd[2040]: FAIL: livestatus per_source_limit from=thrukserver
Aug 07 08:34:54 nagserver.domain.com xinetd[2040]: FAIL: livestatus per_source_limit from=thrukserver
```

However, when we increase these limits, the number of connections simply grows until it hits them again.
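One way to see which clients are actually holding those connections (a sketch, assuming livestatus is exported via xinetd on the default TCP port 6557; adjust the port for your setup) is to count established connections per peer on one of the Nagios servers:

```shell
# Count established TCP connections to the livestatus port, grouped by client IP.
# Port 6557 is an assumption (the mk-livestatus default); change it to match
# your xinetd service definition.
ss -tn state established '( sport = :6557 )' \
  | awk 'NR > 1 { split($4, a, ":"); print a[1] }' \
  | sort | uniq -c | sort -rn
```

Running this on each backend while the FAIL messages appear would show whether the connections really come from the Thruk hosts and how close each source is to `per_source`.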

The livestatus socket is currently configured with:

```
# Define access restriction defaults
cps        = 2000 3
instances  = 500
per_source = 500
```
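For context, these directives normally sit inside the livestatus xinetd service stanza. A minimal sketch, assuming the stock mk-livestatus layout (the port, user, and socket path are assumptions; adjust to your installation):

```
# /etc/xinetd.d/livestatus -- sketch, not the reporter's actual file
service livestatus
{
    type        = UNLISTED
    port        = 6557
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = nagios
    server      = /usr/bin/unixcat
    server_args = /var/spool/nagios/cmd/live
    cps         = 2000 3
    instances   = 500
    per_source  = 500
    disable     = no
}
```

Note that `per_source` caps concurrent connections from a single client IP, while `instances` caps the total across all clients, so either limit can produce the FAIL lines above.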

Thruk Version
Thruk 3.10, running on RHEL 7.6 / 7.7.
Running as a VM with:
16 cores
31 GB of RAM

This started happening at the same time on both boxes, which are in two different datacentres connecting to multiple backends in different regions; no other changes have been made to the hardware, software, or network in this time.

Is there a timeout limit on livestatus somewhere that we are hitting?

Is there a maximum number of connections that Thruk can handle?

The servers affected are the largest on the platform.

Owner

sni commented Aug 26, 2024

When connecting to multiple large instances, I'd recommend using LMD anyway. It makes things faster and drastically reduces the number of connections to the remote backends.
Have you tried using LMD?
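For reference, enabling LMD in Thruk comes down to a couple of settings in `thruk_local.conf`; a sketch, assuming LMD is already installed (the binary path is an assumption for your system):

```
# thruk_local.conf -- sketch; lmd_core_bin must point at your lmd binary
use_lmd_core = 1
lmd_core_bin = /usr/bin/lmd
```

LMD then maintains a single cached connection pool to each backend on Thruk's behalf, which is why it would keep the per-source connection count on the Nagios side low.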
