Cherry-pick #23505 to 7.x: [fix][metricbeat] Fix metricbeat/perfmon measurement grouping #23612

Merged · marc-gr merged 1 commit into elastic:7.x on Jan 21, 2021

Conversation

marc-gr (Contributor) commented on Jan 21, 2021

Cherry-pick of PR #23505 to 7.x branch. Original message:

What does this PR do?

Fixes measurement grouping on metricbeat windows/perfmon.

Why is it important?

Counters from different metric objects were being mixed when measurement grouping was enabled.
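
To illustrate the idea (a sketch only, with made-up types, not the actual metricbeat code): with perfmon.group_measurements_by_instance enabled, grouping collected counter values by instance name alone collides as soon as two queries share an instance, as both queries in the config below do with _Total. Keying each group by object and instance keeps counters from different perfmon objects in separate events:

// Illustrative sketch only, with made-up types; not the actual metricbeat code.
package main

import "fmt"

// counterValue is a simplified stand-in for one collected perfmon measurement.
type counterValue struct {
	object   string // e.g. "PhysicalDisk"
	instance string // e.g. "_Total"
	name     string // e.g. "disk_write_bytes_per_sec"
	value    float64
}

// groupKey combines object and instance so that measurements from different
// perfmon objects never land in the same event (the fix). Keying by instance
// alone (the bug) merges them whenever two objects share an instance name.
type groupKey struct {
	object   string
	instance string
}

// groupMeasurements buckets counter values into one metrics map per
// (object, instance) pair; each bucket becomes its own event.
func groupMeasurements(values []counterValue) map[groupKey]map[string]float64 {
	groups := make(map[groupKey]map[string]float64)
	for _, v := range values {
		k := groupKey{object: v.object, instance: v.instance}
		if groups[k] == nil {
			groups[k] = make(map[string]float64)
		}
		groups[k][v.name] = v.value
	}
	return groups
}

func main() {
	values := []counterValue{
		{"Processor Information", "_Total", "%_processor_time", 0.11},
		{"PhysicalDisk", "_Total", "disk_write_bytes_per_sec", 15860.13},
		{"PhysicalDisk", "_Total", "current_disk_queue_length", 0},
	}
	// Prints two groups: one for "Processor Information", one for "PhysicalDisk".
	for k, metrics := range groupMeasurements(values) {
		fmt.Printf("event object=%q instance=%q metrics=%v\n", k.object, k.instance, metrics)
	}
}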

Checklist

  - [x] My code follows the style guidelines of this project
  - [x] I have commented my code, particularly in hard-to-understand areas
  - [ ] I have made corresponding changes to the documentation
  - [ ] I have made corresponding changes to the default configuration files
  - [x] I have added tests that prove my fix is effective or that my feature works
  - [x] I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.

Logs

With a config like:

metricbeat.modules:
  - module: windows
    metricsets: [perfmon]
    period: 10s
    perfmon.ignore_non_existent_counters: false
    perfmon.group_measurements_by_instance: true
    perfmon.queries:

    - object: "Processor Information"
      instance: "_Total"
      counters:
      - name: "% Processor Time"

    - object: "PhysicalDisk"
      instance: "_Total"
      counters:
      - name: "Current Disk Queue Length"
      - name: "Disk Read Bytes/sec"
      - name: "Disk Write Bytes/sec"

output.console:
  pretty: true

We were getting output like this:

{
  "@timestamp": "2021-01-13T15:27:09.161Z",
  "@metadata": {
    "beat": "metricbeat",
    "type": "_doc",
    "version": "8.0.0"
  },
  "event": {
    "dataset": "windows.perfmon",
    "module": "windows",
    "duration": 8027600
  },
  "metricset": {
    "name": "perfmon",
    "period": 10000
  },
  "service": {
    "type": "windows"
  },
  "windows": {
    "perfmon": {
      "metrics": {
        "disk_write_bytes_per_sec": 8381.215716061084,
        "%_processor_time": 0.10681524439283274,
        "current_disk_queue_length": 0,
        "disk_read_bytes_per_sec": 0
      },
      "object": "PhysicalDisk",
      "instance": "_Total"
    }
  },
  "host": {
    "name": "vagrant"
  },
  "agent": {
    "type": "metricbeat",
    "version": "8.0.0",
    "ephemeral_id": "e3c6902f-c539-4517-a0be-74d700709309",
    "id": "5159c10f-36f4-4489-9a9b-3a82cbd924d7",
    "name": "vagrant"
  },
  "ecs": {
    "version": "1.7.0"
  }
}

Note that %_processor_time from the Processor Information object is merged into the PhysicalDisk event: both queries use the _Total instance, so grouping by instance alone mixed their counters.

After the fix we get the following output instead:

{
  "@timestamp": "2021-01-14T10:56:06.684Z",
  "@metadata": {
    "beat": "metricbeat",
    "type": "_doc",
    "version": "8.0.0"
  },
  "event": {
    "duration": 5860100,
    "dataset": "windows.perfmon",
    "module": "windows"
  },
  "metricset": {
    "name": "perfmon",
    "period": 10000
  },
  "service": {
    "type": "windows"
  },
  "windows": {
    "perfmon": {
      "instance": "_Total",
      "metrics": {
        "current_disk_queue_length": 0,
        "disk_read_bytes_per_sec": 0,
        "disk_write_bytes_per_sec": 15860.12885215545
      },
      "object": "PhysicalDisk"
    }
  },
  "ecs": {
    "version": "1.7.0"
  },
  "host": {
    "name": "vagrant"
  },
  "agent": {
    "version": "8.0.0",
    "ephemeral_id": "4a737200-e1be-4d13-9eb5-cb81249f4a7f",
    "id": "5159c10f-36f4-4489-9a9b-3a82cbd924d7",
    "name": "vagrant",
    "type": "metricbeat"
  }
}
{
  "@timestamp": "2021-01-14T10:56:06.684Z",
  "@metadata": {
    "beat": "metricbeat",
    "type": "_doc",
    "version": "8.0.0"
  },
  "metricset": {
    "period": 10000,
    "name": "perfmon"
  },
  "ecs": {
    "version": "1.7.0"
  },
  "host": {
    "name": "vagrant"
  },
  "agent": {
    "version": "8.0.0",
    "ephemeral_id": "4a737200-e1be-4d13-9eb5-cb81249f4a7f",
    "id": "5159c10f-36f4-4489-9a9b-3a82cbd924d7",
    "name": "vagrant",
    "type": "metricbeat"
  },
  "service": {
    "type": "windows"
  },
  "windows": {
    "perfmon": {
      "instance": "_Total",
      "metrics": {
        "%_processor_time": 0.1124404001576429
      },
      "object": "Processor Information"
    }
  },
  "event": {
    "dataset": "windows.perfmon",
    "module": "windows",
    "duration": 5860100
  }
}

Now we get a separate event for each object.

Fixes #23489
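
As a quick sanity check (an assumed workflow, not something shipped with the PR), the console output above can be piped into a small Go program that decodes the back-to-back pretty-printed JSON events and tallies how many counters each perfmon object reports; after the fix each object carries only its own counters:

// Assumed verification helper, not part of the PR: decode the concatenated
// pretty-printed JSON events from metricbeat's console output and count the
// counters reported per perfmon object.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

type event struct {
	Windows struct {
		Perfmon struct {
			Object   string                 `json:"object"`
			Instance string                 `json:"instance"`
			Metrics  map[string]interface{} `json:"metrics"`
		} `json:"perfmon"`
	} `json:"windows"`
}

func main() {
	dec := json.NewDecoder(os.Stdin) // handles back-to-back JSON documents
	counters := make(map[string]int)
	for {
		var ev event
		if err := dec.Decode(&ev); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		counters[ev.Windows.Perfmon.Object] += len(ev.Windows.Perfmon.Metrics)
	}
	// Expected after the fix: "PhysicalDisk 3" and "Processor Information 1".
	for obj, n := range counters {
		fmt.Println(obj, n)
	}
}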

botelastic bot added the needs_team label (indicates that the issue/PR needs a Team:* label) on Jan 21, 2021
elasticmachine (Collaborator) commented: Pinging @elastic/integrations (Team:Integrations)

elasticmachine (Collaborator) commented: Pinging @elastic/security-external-integrations (Team:Security-External Integrations)

botelastic bot removed the needs_team label on Jan 21, 2021
botelastic bot commented on Jan 21, 2021:

This pull request doesn't have a Team:<team> label.

elasticmachine (Collaborator) commented:

💚 Build Succeeded


Build stats

  • Build Cause: Pull request #23612 opened
  • Start Time: 2021-01-21T11:45:15.912+0000
  • Duration: 27 min 8 sec
  • Commit: f6599a4

Test stats 🧪

  • Failed: 0
  • Passed: 1992
  • Skipped: 470
  • Total: 2462

💚 Flaky test report

Tests succeeded.


marc-gr merged commit 5623b1e into elastic:7.x on Jan 21, 2021
marc-gr deleted the backport_23505_7.x branch on January 21, 2021, 13:55