Metricbeat PostgreSQL Dashboard creates significant load and long running queries #22085

Closed
ckauf opened this issue Oct 22, 2020 · 6 comments · Fixed by #24607
Labels: Team:Integrations, Team:Services (Deprecated)

Comments


ckauf commented Oct 22, 2020

  • Version: 7.9.1 on ECE 2.6.2
  • Operating System: CentOS 7
  • Steps to Reproduce: Open Metricbeat PostgreSQL dashboard.

Opening the Metricbeat PostgreSQL dashboard for longer time spans (e.g. 7 days) creates significant CPU usage. Specifying an additional filter (event.module: postgresql) on the dashboard level reduces the loading time from minutes to seconds. This effect is also visible after waiting some time and manually clearing the caches.
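For reference, a dashboard-level filter like event.module: postgresql ends up as a term filter in the Elasticsearch queries behind each visualization. A minimal sketch of what such a filtered query body might look like for a 7-day window (the exact request Kibana generates will differ):

```json
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        { "term": { "event.module": "postgresql" } },
        { "range": { "@timestamp": { "gte": "now-7d", "lte": "now" } } }
      ]
    }
  }
}
```

With the filter in place, each shard can discard non-PostgreSQL documents before they reach the dashboard's aggregations.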

First try: dashboard opened at ~13:13 without the filter; second try at ~13:16 with the filter:

Hot node monitoring: [screenshot: hot_node]
Warm node monitoring: [screenshot: warm_node]

elasticmachine (Collaborator) commented:

Pinging @elastic/integrations-services (Team:Services)

sorantis (Contributor) commented:

@sayden is this related to or a regression after #20321?

jsoriano (Member) commented:

> Specifying an additional filter (event.module: postgresql) on the dashboard level reduces the loading time from minutes to seconds.

If this is helpful in this case, I think this kind of filter could be helpful in the dashboards of many other modules. We might consider adding filters like these to all dashboards.

With Agent/Fleet and data streams we could even query logs-postgresql.* directly in the PostgreSQL dashboards. @ruflin do you know if this is something we are already doing or considering?
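As a rough sketch of that idea, and assuming the standard data stream naming scheme, a dashboard query would only touch the matching backing indices, e.g. GET logs-postgresql.*/_search (or metrics-postgresql.* for Metricbeat data) with a body like:

```json
{
  "size": 0,
  "query": {
    "range": { "@timestamp": { "gte": "now-7d", "lte": "now" } }
  }
}
```

No event.module filter would be needed in that case, since only the PostgreSQL data streams are searched at all.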

> @sayden is this related to or a regression after #20321?

I don't think it is related; #20321 is about the contents of a Filebeat dashboard, not its performance.

ruflin (Member) commented Mar 15, 2021

TBH I'm surprised we don't have this kind of filter already in the dashboards. I think in most cases we should filter by event.module and event.dataset to reduce the amount of data.

With the data stream naming scheme we currently filter on data_stream.dataset, which is almost what you propose above. As it is a constant_keyword, it will not open any index without relevant data. Assuming there is more than just the Postgres data in the above case, this will speed things up even further.
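For context on why the constant_keyword point matters: in the data stream index templates data_stream.dataset is mapped as constant_keyword, so a term filter on it can skip whole backing indices whose constant value does not match, rather than merely filtering documents. A minimal sketch of the mapping (not the exact template shipped by the integrations):

```json
{
  "mappings": {
    "properties": {
      "data_stream": {
        "properties": {
          "dataset": { "type": "constant_keyword" }
        }
      }
    }
  }
}
```

A dashboard filter along these lines (postgresql.activity is only an example dataset name here) would then never touch indices holding other modules' data:

```json
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "data_stream.dataset": "postgresql.activity" } }
      ]
    }
  }
}
```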

jsoriano (Member) commented:

> TBH I'm surprised we don't have this kind of filter already in the dashboards. I think in most cases we should filter by event.module and event.dataset to reduce the amount of data.

I have checked, and this is actually done in several dashboards, but it is missing from many others.

I think it is worth adding it to the ones missing it, even if it is solved in Agent with the use of data streams.

jsoriano (Member) commented:

Adding filters in PostgreSQL dashboards in #24607.
