
Metric Filter using include/exclude syntax not filtering properly using metric labels/attributes #34072

Open
maximillianus opened this issue Jul 15, 2024 · 2 comments
Labels
bug (Something isn't working), needs triage (New item requiring triage), processor/filter (Filter processor), Stale


maximillianus commented Jul 15, 2024

Component(s)

No response

What happened?

Description

The metric filter using the include/exclude syntax does not filter properly on metric labels/attributes.

Steps to Reproduce

My config.yml

receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
        metrics:
          system.cpu.time:
            enabled: true

processors:
  filter/metric_attr:
    error_mode: ignore
    metrics:
      include:
        match_type: expr
        expressions:
          - Label("cpu") == "^cpu0$"

exporters:
  prometheus:
    endpoint: localhost:8889
    namespace: otel-host-metrics

service:
  telemetry:
    logs:
      level: debug
  pipelines:
    metrics:
      receivers: [hostmetrics]
      processors: [filter/metric_attr]
      exporters: [prometheus]
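
A note on the include expression above: in the expr language that match_type: expr uses, == is plain string equality, so Label("cpu") == "^cpu0$" compares the label value against the literal text ^cpu0$ and can never be true; regex matching needs the matches operator (the full configuration further below already uses it). A minimal corrected sketch, same config with only the expression changed:

processors:
  filter/metric_attr:
    error_mode: ignore
    metrics:
      include:
        match_type: expr
        expressions:
          # == would require the label value to literally be "^cpu0$";
          # "matches" treats the right-hand side as a regular expression.
          - Label("cpu") matches "^cpu0$"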

Expected Result

# HELP otel_host_metrics_system_cpu_time_seconds_total Total seconds each logical CPU spent on each mode.
# TYPE otel_host_metrics_system_cpu_time_seconds_total counter
otel_host_metrics_system_cpu_time_seconds_total{cpu="cpu0",state="idle"} 135036.14
otel_host_metrics_system_cpu_time_seconds_total{cpu="cpu0",state="interrupt"} 0
...

Actual Result

# HELP otel_host_metrics_system_cpu_time_seconds_total Total seconds each logical CPU spent on each mode.
# TYPE otel_host_metrics_system_cpu_time_seconds_total counter
otel_host_metrics_system_cpu_time_seconds_total{cpu="cpu0",state="idle"} 135036.14
otel_host_metrics_system_cpu_time_seconds_total{cpu="cpu0",state="interrupt"} 0
...
otel_host_metrics_system_cpu_time_seconds_total{cpu="cpu1",state="idle"} 135040.18
otel_host_metrics_system_cpu_time_seconds_total{cpu="cpu1",state="interrupt"} 0
...

Collector version

0.104.0

Environment information

Environment

OS: Amazon Linux 2023
Compiler (if manually compiled): (e.g., "go 14.2")

OpenTelemetry Collector configuration

receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
        metrics:
          system.cpu.time:
            enabled: true

processors:
  filter/metric_attr:
    error_mode: ignore
    metrics:
      include:
        match_type: expr
        expressions:
          - Label("cpu") matches "^cpu0$"

exporters:
  prometheus:
    endpoint: localhost:8889
    namespace: otel-host-metrics

service:
  telemetry:
    logs:
      level: debug
  pipelines:
    metrics:
      receivers: [hostmetrics]
      processors: [filter/metric_attr]
      exporters: [prometheus]
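
To rule out the Prometheus exporter while debugging, one option is to swap in the collector's built-in debug exporter, which logs every data point that survives the filter. A sketch, assuming the debug exporter shipped with v0.104.0:

exporters:
  debug:
    # "detailed" prints each metric with all data points and attributes
    verbosity: detailed

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      processors: [filter/metric_attr]
      exporters: [debug]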

Log output

debug        processor@v0.104.0/processor.go:306        Alpha component. May change in the future.        {"kind": "processor", "name": "filter/metric_attr", "pipeline": "metrics"}

info        filterprocessor@v0.104.0/metrics.go:98        Metric filter configured        {"kind": "processor", "name": "filter/metric_attr", "pipeline": "metrics", "include match_type": "expr", "include expressions": ["Label(\"cpu\") == \"cpu0\""], "include metric names": [], "include metrics with resource attributes": null, "exclude match_type": "", "exclude expressions": [], "exclude metric names": [], "exclude metrics with resource attributes": null}
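
Two observations on this log line. First, the collector reports the include expression as Label("cpu") == "cpu0", which matches neither configuration above, so the running config may have diverged from the one posted. Second, if I read the filter processor README correctly, expr expressions match at the metric level: a metric is included when any of its data points matches, so once the cpu0 data point matches, every series of system.cpu.time passes through. Per-data-point filtering is what the OTTL datapoint conditions are for; a sketch (conditions that evaluate to true are dropped, hence the inverted comparison):

processors:
  filter/metric_attr:
    error_mode: ignore
    metrics:
      # drop every data point whose cpu attribute is not cpu0
      datapoint:
        - 'attributes["cpu"] != "cpu0"'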

Additional context

No response

@maximillianus added the bug (Something isn't working) and needs triage (New item requiring triage) labels on Jul 15, 2024
@crobert-1 added the processor/filter (Filter processor) label on Jul 15, 2024

Pinging code owners for processor/filter: @TylerHelmuth @boostchicken. See Adding Labels via Comments if you do not have permissions to add labels yourself.


This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.
