[release-v2.5] [DOC] Clarify local-block config (#3821)
(cherry picked from commit b3f06d4)
knylander-grafana committed Jun 28, 2024
1 parent ec72f70 commit 0fa19b7
Showing 3 changed files with 35 additions and 12 deletions.
2 changes: 1 addition & 1 deletion docs/sources/tempo/api_docs/metrics-summary.md
@@ -21,7 +21,7 @@ This API returns RED metrics (span count, erroring span count, and latency information)
## Configuration

To enable the experimental metrics summary API, you must turn on the local blocks processor in the metrics generator.
Be aware that the generator will use considerably more resources, including disk space, if it is enabled:
Be aware that the generator uses considerably more resources, including disk space, if it's enabled:

```yaml
overrides:
  metrics_generator_processors: ['local-blocks']
```
18 changes: 14 additions & 4 deletions docs/sources/tempo/configuration/_index.md
Original file line number Diff line number Diff line change
@@ -433,6 +433,11 @@ query_frontend:
# (default: 2)
[max_retries: <int>]

# The number of goroutines dedicated to consuming, unmarshalling, and recombining responses per request. This
# same parameter is used for all endpoints.
# (default: 10)
[response_consumers: <int>]

# Maximum number of outstanding requests per tenant per frontend; requests beyond this error with HTTP 429.
# (default: 2000)
[max_outstanding_per_tenant: <int>]
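
The frontend limits above can be sketched together as a minimal fragment, using the documented defaults:

```yaml
query_frontend:
  # Defaults listed in the option descriptions above.
  max_retries: 2
  response_consumers: 10
  max_outstanding_per_tenant: 2000
```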
@@ -709,10 +714,10 @@ For more information on configuration options, refer to [this file](https://gith
### Local storage recommendations
While you can use local storage, object storage is recommended for production workloads.
A local backend will not correctly retrieve traces with a distributed deployment unless all components have access to the same disk.
A local backend won't correctly retrieve traces with a distributed deployment unless all components have access to the same disk.
Tempo is designed for object storage more than local storage.
At Grafana Labs, we have run Tempo with SSDs when using local storage.
At Grafana Labs, we've run Tempo with SSDs when using local storage.
Hard drives haven't been tested.
You can estimate how much storage space you need by considering the ingested bytes and retention.
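
As a rough back-of-the-envelope check, multiply daily ingest by retention. The helper below is a hypothetical sketch (the function name and the headroom multiplier are assumptions, not part of Tempo):

```python
def estimate_disk_bytes(ingested_bytes_per_day: float,
                        retention_days: float,
                        headroom: float = 1.0) -> float:
    """Rough local-disk estimate: daily ingest x retention,
    with an optional safety headroom multiplier (an assumption,
    not a figure from the Tempo documentation)."""
    return ingested_bytes_per_day * retention_days * headroom

# Example: 100 GB/day ingested with 14-day retention
print(estimate_disk_bytes(100e9, 14))  # 1.4e12 bytes, i.e. 1.4 TB
```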
@@ -876,6 +881,11 @@ storage:
# Enable to use path-style requests.
[forcepathstyle: <bool>]

# Optional.
# Enable to use dualstack endpoint for DNS resolution.
# Refer to the [S3 documentation on dual-stack endpoints](https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html).
[enable_dual_stack: <bool>]

# Optional. Default is 0
# Example: "bucket_lookup_type: 0"
# options: 0: BucketLookupAuto, 1: BucketLookupDNS, 2: BucketLookupPath
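
Putting these S3 options together, a fragment might look like the sketch below. The bucket name and endpoint are placeholders, not values from this document:

```yaml
storage:
  trace:
    backend: s3
    s3:
      bucket: tempo-traces                              # placeholder bucket name
      endpoint: s3.dualstack.us-east-1.amazonaws.com    # example dual-stack endpoint
      forcepathstyle: false
      enable_dual_stack: true
      bucket_lookup_type: 0                             # BucketLookupAuto
```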
@@ -1246,7 +1256,7 @@ See below for how to override these limits globally or per tenant.

#### Standard overrides

You can create an `overrides` section to configure new ingestion limits that applies to all tenants of the cluster.
You can create an `overrides` section to configure ingestion limits that apply to all tenants of the cluster.
A snippet of a `config.yaml` file showing the overrides section is available [here](https://github.com/grafana/tempo/blob/a000a0d461221f439f585e7ed55575e7f51a0acd/integration/bench/config.yaml#L39-L40).

```yaml
overrides:
```

These tenant-specific overrides are stored in an object store and can be modified using API requests.
User-configurable overrides have priority over runtime overrides.
See [user-configurable overrides]({{< relref "../operations/user-configurable-overrides" >}}) for more details.
Refer to [user-configurable overrides]({{< relref "../operations/user-configurable-overrides" >}}) for more details.

#### Override strategies

Expand Down
27 changes: 20 additions & 7 deletions docs/sources/tempo/operations/traceql-metrics.md
@@ -25,15 +25,26 @@ For more information about available queries, refer to [TraceQL metrics queries]
To use the metrics generated from traces, you need to:

* Set the `local-blocks` processor to active in your `metrics-generator` configuration
* Configure a Tempo data source configured in Grafana or Grafana Cloud
* Access Grafana Cloud or Grafana 10.4
* Configure a Tempo data source in Grafana or Grafana Cloud
* Access Grafana Cloud or Grafana version 10.4 or newer

## Configure the `local-blocks` processor
## Activate and configure the `local-blocks` processor

Once the `local-blocks` processor is enabled in your `metrics-generator`
configuration, you can configure it using the following block to make sure
it records all spans for TraceQL metrics.
To activate the `local-blocks` processor for all users, add it to the list of processors in the `overrides` block of your Tempo configuration.

```yaml
# Global overrides configuration.
overrides:
  metrics_generator_processors: ['local-blocks']
```

To configure the processor per tenant, use the `metrics_generator.processor` override.

For more information about overrides, refer to [Standard overrides]({{< relref "../configuration#standard-overrides" >}}).
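
A per-tenant setup might look like the sketch below, using Tempo's runtime overrides file. The file path and the tenant ID `tenant-a` are placeholders:

```yaml
# tempo.yaml: point Tempo at a runtime overrides file
overrides:
  per_tenant_override_config: /conf/overrides.yaml
```

```yaml
# /conf/overrides.yaml: enable the processor for a single tenant
overrides:
  "tenant-a":
    metrics_generator_processors: ['local-blocks']
```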

### Configure the processor

Next, configure the `local-blocks` processor to record all spans for TraceQL metrics.
Here is an example configuration:

```yaml
metrics_generator:
  processor:
    local_blocks:
      filter_server_spans: false
  traces_storage:
    path: /var/tempo/generator/traces
```

If you configured Tempo using the `tempo-distributed` Helm chart, you can also set `traces_storage` using your `values.yaml` file. Refer to the [Helm chart for an example](https://github.com/grafana/helm-charts/blob/559ecf4a9c9eefac4521454e7a8066778e4eeff7/charts/tempo-distributed/values.yaml#L362).
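
A minimal `values.yaml` fragment might look like the sketch below. The key layout is an assumption based on the chart linked above, and the path is an example:

```yaml
metricsGenerator:
  config:
    traces_storage:
      path: /var/tempo/generator/traces
```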

Refer to the [metrics-generator configuration]({{< relref "../configuration#metrics-generator" >}}) documentation for more information.

## Evaluate query timeouts
@@ -76,7 +90,6 @@ This is different from the default TraceQL maximum time range of 168 hours (7 days).

{{< /admonition >}}


For example, in a cloud environment, you may want smaller jobs with more
concurrency because of how the backend scales.
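
Smaller jobs with more concurrency could be sketched as below. The option names are assumptions based on Tempo's query-frontend search settings, and the values are illustrative only:

```yaml
query_frontend:
  search:
    # More, smaller jobs: raise concurrency, lower per-job size.
    concurrent_jobs: 2000
    target_bytes_per_job: 104857600  # 100 MiB
```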

