diff --git a/docs/sources/tempo/operations/traceql-metrics.md b/docs/sources/tempo/operations/traceql-metrics.md
index 2f4f07e696d..ed6ec05342c 100644
--- a/docs/sources/tempo/operations/traceql-metrics.md
+++ b/docs/sources/tempo/operations/traceql-metrics.md
@@ -1,8 +1,8 @@
 ---
 aliases: []
-title: TraceQL metrics
-menuTitle: TraceQL metrics
-description: Learn about using TraceQL metrics.
+title: Configure TraceQL metrics
+menuTitle: Configure TraceQL metrics
+description: Learn about configuring TraceQL metrics.
 weight: 550
 keywords:
   - Prometheus
@@ -10,39 +10,17 @@ keywords:
   - TraceQL metrics
 ---
 
-# TraceQL metrics
+# Configure TraceQL metrics
 
 {{< docs/experimental product="TraceQL metrics" >}}
 
-Tempo 2.4 introduces the addition of metrics queries to the TraceQL language as an experimental feature.
+The TraceQL language provides metrics queries as an experimental feature.
 Metric queries extend trace queries by applying a function to trace query results.
 This powerful feature creates metrics from traces, much in the same way that LogQL metric queries create metrics from logs.
 
-Initially, only `count_over_time` and `rate` are supported.
-For example:
-```
-{ resource.service.name = "foo" && status = error } | rate()
-```
-
-In this case, we are calculating the rate of the erroring spans coming from the service `foo`. Rate is a `spans/sec` quantity.
-Combined with the `by()` operator, this can be even more powerful!
-
-```
-{ resource.service.name = "foo" && status = error } | rate() by (span.http.route)
-```
+For more information about available queries, refer to [TraceQL metrics queries]({{< relref "../traceql/metrics-queries" >}}).
 
-Now, we are still rating the erroring spans in the service `foo` but the metrics have been broken
-down by HTTP endpoint. This might let you determine that `/api/sad` had a higher rate of erroring
-spans than `/api/happy`, for example.
-
-## Enable and use TraceQL metrics
-
-You can use the TraceQL metrics in Grafana with any existing or new Tempo data source.
-This capability is available in Grafana Cloud and Grafana (10.4 and newer).
-
-![Metrics visualization in Grafana](/media/docs/tempo/metrics-explore-sample-2.4.png)
-
-### Before you begin
+## Before you begin
 
 To use the metrics generated from traces, you need to:
 
@@ -50,13 +28,14 @@ To use the metrics generated from traces, you need to:
 * Configure a Tempo data source configured in Grafana or Grafana Cloud
 * Access Grafana Cloud or Grafana 10.4
 
-### Configure the `local-blocks` processor
+## Configure the `local-blocks` processor
 
 Once the `local-blocks` processor is enabled in your `metrics-generator`
 configuration, you can configure it using the following block to make sure
 it records all spans for TraceQL metrics.
 
 Here is an example configuration:
+
 ```yaml
 metrics_generator:
   processor:
@@ -70,7 +49,7 @@ Here is an example configuration:
 
 Refer to the [metrics-generator configuration]({{< relref "../configuration#metrics-generator" >}}) documentation for more information.
 
-### Evaluate query timeouts
+## Evaluate query timeouts
 
 Because of their expensive nature, these queries can take a long time to run in different systems.
 As such, consider increasing the timeouts in various places of
diff --git a/docs/sources/tempo/traceql/metrics-queries.md b/docs/sources/tempo/traceql/metrics-queries.md
new file mode 100644
index 00000000000..eb657bc21fe
--- /dev/null
+++ b/docs/sources/tempo/traceql/metrics-queries.md
@@ -0,0 +1,95 @@
+---
+title: TraceQL metrics queries
+menuTitle: TraceQL metrics queries
+description: Learn about TraceQL metrics queries.
+weight: 600
+keywords:
+  - metrics query
+  - TraceQL metrics
+---
+
+# TraceQL metrics queries
+
+{{< docs/experimental product="TraceQL metrics" >}}
+
+TraceQL metrics is an experimental feature in Grafana Tempo that creates metrics from traces.
+
+Metric queries extend trace queries by applying a function to trace query results.
+This powerful feature allows for ad hoc aggregation of any existing TraceQL query by any dimension available in your traces, much in the same way that LogQL metric queries create metrics from logs.
+
+Traces are a unique observability signal that contains causal relationships between the components in your system.
+Do you want to know how many database calls across all systems are downstream of your application?
+What services beneath a given endpoint are currently failing?
+What services beneath an endpoint are currently slow? TraceQL metrics can answer all of these questions by parsing your traces in aggregate.
+
+![Metrics visualization in Grafana](/media/docs/tempo/metrics-explore-sample-2.4.png)
+
+## Enable and use TraceQL metrics
+
+You can use TraceQL metrics in Grafana with any existing or new Tempo data source.
+This capability is available in Grafana Cloud and Grafana (10.4 and newer).
+
+To enable TraceQL metrics, refer to [Configure TraceQL metrics](https://grafana.com/docs/tempo/latest/operations/traceql-metrics/).
+
+## Functions
+
+TraceQL supports the `rate`, `count_over_time`, `quantile_over_time`, and `histogram_over_time` functions.
+These functions can be added as an operator at the end of any TraceQL query.
+
+`rate`
+: calculates the number of matching spans per second
+
+`count_over_time`
+: counts the number of matching spans per time interval (see the `step` API parameter and the example after this list)
+
+`quantile_over_time`
+: calculates the quantile of the values in the specified interval
+
+`histogram_over_time`
+: evaluates the frequency distribution over time. Example: `histogram_over_time(duration) by (span.foo)`
+
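+For example, the following sketch counts matching spans per time interval (controlled by the `step` API parameter), broken down by HTTP status code. The service name `foo` and the `span.http.status_code` attribute are illustrative; substitute attributes that exist in your own traces.
+
+```
+{ resource.service.name = "foo" } | count_over_time() by (span.http.status_code)
+```
+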
+## The `rate` function
+
+The following query shows the rate of errors by service and span name.
+
+```
+{ status = error } | rate() by (resource.service.name, name)
+```
+
+The next example calculates the rate of erroring spans coming from the service `foo`. Rate is a `spans/sec` quantity.
+
+```
+{ resource.service.name = "foo" && status = error } | rate()
+```
+
+Combined with the `by()` operator, this can be even more powerful.
+
+```
+{ resource.service.name = "foo" && status = error } | rate() by (span.http.route)
+```
+
+This example still measures the rate of erroring spans in the service `foo`, but the metrics have been broken
+down by HTTP route. This might let you determine that `/api/sad` had a higher rate of erroring
+spans than `/api/happy`, for example.
+
+## The `quantile_over_time` and `histogram_over_time` functions
+
+The `quantile_over_time()` and `histogram_over_time()` functions let you aggregate numerical values, such as the all-important span duration. Notice that you can specify multiple quantiles in the same query.
+
+```
+{ name = "GET /:endpoint" } | quantile_over_time(duration, .99, .9, .5)
+```
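+
+The companion `histogram_over_time()` function follows the same pattern. As a sketch reusing the span name from the examples above, this query evaluates the distribution of span durations over time:
+
+```
+{ name = "GET /:endpoint" } | histogram_over_time(duration)
+```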
+
+You can group by any span or resource attribute.
+
+```
+{ name = "GET /:endpoint" } | quantile_over_time(duration, .99) by (span.http.target)
+```
+
+Quantiles aren't limited to span duration.
+Any numerical attribute on the span is fair game.
+To demonstrate this flexibility, consider this nonsensical quantile on `span.http.status_code`:
+
+```
+{ name = "GET /:endpoint" } | quantile_over_time(span.http.status_code, .99, .9, .5)
+```
\ No newline at end of file