
Flow: drop unqueued logs after 5 seconds when stopping tailer #2721

Merged
merged 1 commit into grafana:main from fix-loki-deadlock on Jan 11, 2023

Conversation

@rfratto (Member) commented on Jan 11, 2023

Fix an issue where being unable to send logs to `loki.write`, because the client is permanently backlogged, would deadlock the Flow controller.

The loki.write client may be permanently backlogged when:

  • Limits are reached when sending logs to Loki, leading to endless request retries.
  • Loki has an extended outage.

When an EntryHandler is stopped, it will wait for 5 seconds before forcibly stopping the goroutine which queues log entries. If this timeout is reached, any unqueued log entries are permanently lost, as the positions file will likely be updated past the point where the entry was read.

While losing logs is not ideal, it's unacceptable for any Flow component to be able to block the controller. This is a short-term solution to allow the Flow controller to continue working properly; a long-term solution would be to use a Write-Ahead Log (WAL) for log entries. See grafana/loki#7993.
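
Below is a minimal, self-contained Go sketch of the stop-with-timeout pattern described above. The `entryHandler` type, its channels, and the helper names are illustrative stand-ins, not the actual `component/common/loki` implementation:

```go
package main

import (
	"fmt"
	"time"
)

// entry is a stand-in for a log entry; the real type also carries labels.
type entry struct{ line string }

// entryHandler is a hypothetical stand-in for the handler described above:
// run() reads entries from in and queues them toward the client channel.
type entryHandler struct {
	in     chan entry
	client chan<- entry // consumed by the loki.write client; may back up
	exit   chan struct{}
	done   chan struct{}
}

func newEntryHandler(client chan<- entry) *entryHandler {
	h := &entryHandler{
		in:     make(chan entry),
		client: client,
		exit:   make(chan struct{}),
		done:   make(chan struct{}),
	}
	go h.run()
	return h
}

func (h *entryHandler) run() {
	defer close(h.done)
	for e := range h.in {
		select {
		case h.client <- e:
			// Entry queued toward the client.
		case <-h.exit:
			// Stop gave up waiting; this entry and any later ones are dropped.
			return
		}
	}
}

// Stop closes the input and waits up to 5 seconds for the queueing goroutine
// to drain. If the timeout fires, the goroutine is told to exit and any
// unqueued entries are lost, so shutdown can no longer deadlock.
func (h *entryHandler) Stop() {
	close(h.in)
	select {
	case <-h.done:
		// Drained cleanly.
	case <-time.After(5 * time.Second):
		close(h.exit)
		<-h.done
		fmt.Println("timed out; dropped unqueued log entries")
	}
}

func main() {
	backlogged := make(chan entry) // nothing reads this: simulates a permanently backlogged client
	h := newEntryHandler(backlogged)
	h.in <- entry{line: "read from file but never sent"}
	h.Stop() // returns after ~5 seconds instead of blocking forever
}
```

Because a goroutine cannot be killed from the outside, "forcibly stopping" it amounts to signalling it to give up on the blocked send, which is exactly the point where the unqueued entries are dropped.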

Fixes #2716.
Related to grafana/loki#2361.

@rfratto changed the title from "component/common/loki: drop unqueued logs after 5 seconds on shutdown" to "component/common/loki: drop unqueued logs after 5 seconds on tailer shutdown" on Jan 11, 2023
@rfratto changed the title from "component/common/loki: drop unqueued logs after 5 seconds on tailer shutdown" to "component/common/loki: drop unqueued logs after 5 seconds when stopping tailer" on Jan 11, 2023
@rfratto changed the title from "component/common/loki: drop unqueued logs after 5 seconds when stopping tailer" to "Flow: drop unqueued logs after 5 seconds when stopping tailer" on Jan 11, 2023
@mattdurham (Collaborator) left a comment:

Fantastic and much cleaner than I thought it would be. LGTM.

@rfratto merged commit e7e88a6 into grafana:main on Jan 11, 2023
rfratto added a commit to rfratto/agent that referenced this pull request Jan 11, 2023
…grafana#2721)

@rfratto deleted the fix-loki-deadlock branch on January 11, 2023 at 16:06
rfratto added a commit that referenced this pull request Jan 11, 2023
* prometheus.relabel: clone labels before relabeling (#2701)

This commit clones the label set before applying relabels. Without cloning,
two problems arise:

1. It forces the computed ID of the incoming series to change (as its
   labels changed)

2. It can cause obscure bugs in how relabel rules are applied, such as a
   `keep` action that misbehaves because the original slice has already been
   modified (illustrated in the sketch after this list).
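
A minimal, plain-Go sketch of this aliasing problem; the `Label` type and the `relabelReplaceInstance` rule below are illustrative stand-ins, not the actual `prometheus.relabel` code:

```go
package main

import "fmt"

// Label is a stand-in for one name/value pair in a series' label set.
type Label struct{ Name, Value string }

// relabelReplaceInstance mimics an in-place relabel rule: it rewrites the
// "instance" value directly in the slice it was given, aliasing the
// caller's labels.
func relabelReplaceInstance(lbls []Label) []Label {
	for i, l := range lbls {
		if l.Name == "instance" {
			lbls[i].Value = "redacted"
		}
	}
	return lbls
}

func main() {
	incoming := []Label{{"__name__", "up"}, {"instance", "host:9090"}}

	// Without a clone the incoming series' labels are mutated, so the ID
	// computed from them changes, and later rules (for example a keep that
	// matches the original value) no longer see what they expect.
	_ = relabelReplaceInstance(incoming)
	fmt.Println("no clone:  ", incoming) // instance is now "redacted"

	// Cloning first leaves the incoming series untouched.
	incoming = []Label{{"__name__", "up"}, {"instance", "host:9090"}}
	clone := append([]Label(nil), incoming...)
	_ = relabelReplaceInstance(clone)
	fmt.Println("with clone:", incoming) // unchanged
}
```

Cloning costs an extra allocation per incoming label set, but it keeps the original series' labels (and therefore its computed ID) stable and gives the relabel rules an untouched copy to work from.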

* component/common/loki: drop unqueued logs after 5 seconds on shutdown (#2721)

* prepare for v0.30.2 release

* address review feedback

* operator: Use enableHttp2 field as boolean in libsonnet templates (#2724)

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>
Co-authored-by: Paschalis Tsilias <tpaschalis@users.noreply.github.com>
@github-actions bot added the frozen-due-to-age label (Locked due to a period of inactivity. Please open new issues or PRs if more discussion is needed.) on Mar 13, 2024
@github-actions bot locked the conversation as resolved and limited it to collaborators on Mar 13, 2024
Labels: frozen-due-to-age
Projects: None yet
Development: Successfully merging this pull request may close this issue:
[bug] Flow fails to display loki.source.file when port-forwarding on kubernetes
4 participants