if loki is not reachable and loki-docker-driver is activated, container apps stop and cannot be stopped/killed #2361

Open
badsmoke opened this issue Jul 16, 2020 · 76 comments · Fixed by #2898
Labels
keepalive An issue or PR that will be kept alive and never marked as stale.

@badsmoke

Describe the bug
We have installed the loki-docker-driver on all our devices; the Loki server runs on a separate host. If the Loki server is updated/restarted or simply not reachable, then after a short time all containers get stuck (docker logs no longer updates).
While the Loki server is not reachable, the containers can neither be stopped/killed nor restarted.

To Reproduce
Steps to reproduce the behavior:

  1. start loki server (server)
  2. install loki-docker-driver on another system (can also be tested on one and the same system) (client)
    2.1. /etc/docker/daemon.json (a formatted copy is shown after this list): { "live-restore": true, "log-driver": "loki", "log-opts": { "loki-url": "http://loki:3100/api/prom/push", "mode": "non-blocking", "loki-batch-size": "400", "max-size": "1g" } }
  3. docker run --rm --name der-container -d debian /bin/sh -c "while true; do date >> /tmp/ts ; seq 0 1000000; sleep 1 ; done" (client)
  4. docker exec -it der-container tail -f /tmp/ts
    shows the time every second (client)
  5. docker logs -f der-container shows numbers from 0-1000000 (client)
  6. stop loki server (server)
  7. you will see that the output on the client stops with the loki-driver and that you cannot stop the container (client)
  8. docker stop der-container (client)
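
For readability, here is the daemon.json from step 2.1 again as a formatted block (same content as above):

```json
{
  "live-restore": true,
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "http://loki:3100/api/prom/push",
    "mode": "non-blocking",
    "loki-batch-size": "400",
    "max-size": "1g"
  }
}
```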

Expected behavior
I would like all containers to continue to run as desired even if Loki is not accessible.
Containers should also be startable/stoppable even if Loki is not reachable.

Environment:

  • Infrastructure: [bare-metal, laptop, VMs]
  • Deployment tool: [docker-compose]

Screenshots, Promtail config, or terminal output
loki-docker-driver version: loki-docker-driver:master-616771a (from that version on, the driver option "non-blocking" is supported)
loki server: 1.5.0

I am very grateful for any help; this problem has caused our whole system to collapse.

@stale

stale bot commented Aug 15, 2020

This issue has been automatically marked as stale because it has not had any activity in the past 30 days. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label (A stale issue or PR that will automatically be closed.) Aug 15, 2020
@stale stale bot closed this as completed Aug 22, 2020
@badsmoke
Author

:-(

@rkno82

rkno82 commented Sep 8, 2020

This issue is being closed without any comment/feedback?

For me/us this is a major issue/blocker.

@owen-d Can you please comment? Thank you!

@slim-bean slim-bean reopened this Sep 8, 2020
@stale stale bot removed the stale label (A stale issue or PR that will automatically be closed.) Sep 8, 2020
@slim-bean slim-bean added the keepalive label (An issue or PR that will be kept alive and never marked as stale.) Sep 8, 2020
@ondrejmo

#2017 fixed the same problem for me

@rndmh3ro

#2017 fixed the same problem for me

Do you mean setting the non-blocking mode?
The OP stated that they set the mode to non-blocking but it still does not work. I'll have to try it tomorrow.

@rndmh3ro

rndmh3ro commented Sep 22, 2020

I could reproduce the problem:

root@loki # docker run -d --log-driver=loki --log-opt loki-url="http://172.29.95.195:3101/loki/api/v1/push" --log-opt loki-retries=5 --log-opt loki-batch-size=400 --log-opt mode=non-blocking --name der-container debian /bin/sh -c "while true; do date >> /tmp/ts ; seq 0 1000000; sleep 1 ; done"

When running Loki and the above client container and then stopping Loki, the client container fails with:

error from daemon in stream: Error grabbing logs: error decoding log message: net/http: request canceled (Client.Timeout exceeded while reading body)

@ondrejmo

#2017 fixed the same problem for me

Do you mean setting the non-blocking mode?
The OP stated that they set the mode to non-blocking but it still does not work. I'll have to try it tomorrow.

Yeah, I meant the non-blocking mode; I hadn't noticed it in the original issue, sorry.

@rkno82

rkno82 commented Oct 14, 2020

No response? 😢

@kavirajk kavirajk self-assigned this Oct 14, 2020
@Pandry

Pandry commented Oct 27, 2020

Hi,
We are testing Loki for our architecture, and I encountered this issue too

I found out that stopping a container (any container) incurs a "penalty" of between 5 and 15 minutes when loki is the logging driver and the destination server (either Loki or Promtail) is unreachable.
In our testing architecture, the Docker log driver pushes the logs to the Promtail container, and Promtail pushes the logs to the Loki server (I thought that, since Promtail caches, it could be a good idea):

+-----------------------+   +--------------------+
|    Virtual Machine 01 |   | Virtual Machine 02 |
|                       |   |                    |
|   +------+--------+   |   |                    |
|   |Loki  | Docker |   |   |                    |
|   |DRIVER|        |   |   |                    |
|   +-+---++        |   |   |                    |
|   | ^   |         |   |   | +--------+         |
|   | | +-v------+  |   |   | | Loki   |         |
|   | | |Promtail+----------->+ Server |         |
|   | | +--------+  |   |   | |        |         |
|   | |             |   |   | +--------+         |
|   | +-------+     |   |   |                    |
|   | | NGINX |     |   |   |                    |
|   | +-------+     |   |   |                    |
|   +---------------+   |   |                    |
|                       |   |                    |
+-----------------------+   +--------------------+

At the moment we are trying the mode: non-blocking option; other than slowing down the stop of the Promtail container itself, it seems to be OK with the other containers, but it doesn't really solve the problem.

Is there any viable fix available at the moment?

@kavirajk
Contributor

I'm investigating!

You can even reproduce it by directly starting any container with the Loki logger and an unreachable loki-url:

  1. with the local log driver
docker run --log-driver local --log-opt max-size=10m alpine ping 127.0.0.1
  2. with the loki log driver
docker run --log-driver loki --log-opt loki-url="http://172.17.0.1:3100/loki/api/v1/push" alpine ping 127.0.0.1

In case 1, you can stop/kill the container.
In case 2, you can stop/kill the container only after 5 minutes or so.

The docker daemon log is not that useful either:

level=warn ts=2020-10-28T11:55:05.178484441Z caller=client.go:288 container_id=eb8c67b975f20837210c638d5f83fa1fa011c183c725af337c1fad9ffb2d3a01 component=client host=172.17.0.1:3100 msg="error sending batch, will retry" status=-1 error="Post \"http://172.17.0.1:3100/loki/api/v1/push\": dial tcp 172.17.0.1:3100: connect: connection refused"
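
A quick way to see the difference is to time the stop of the second container (a rough sketch of the same repro; the container name loki-test is arbitrary):

```sh
# container whose loki-url points at an unreachable endpoint
docker run -d --name loki-test \
  --log-driver loki \
  --log-opt loki-url="http://172.17.0.1:3100/loki/api/v1/push" \
  alpine ping 127.0.0.1

# with the default driver options this returns only after roughly 5 minutes;
# the same command against a container using the local driver returns within seconds
time docker stop loki-test
```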

@Pandry

Pandry commented Oct 28, 2020

I probably figured out the reason why it takes so much time, and I can say my suspicion was correct; I think this is probably intended behavior:
As we can read from the source code, the message is emitted inside the backoff logic loop.

If we start a container with the backoff options reduced to (almost) the minimum, we can see the container stops (almost) immediately:
docker run --log-driver loki --log-opt loki-url="http://0.0.0.0:3100/loki/api/v1/push" --log-opt loki-timeout=1s --log-opt loki-max-backoff=800ms --log-opt loki-retries=2 alpine ping 127.0.0.1
(If you want to keep the log file after the container has stopped, add the --log-opt keep-file=true parameter)

As far as my understanding goes though, if the driver is unable to send the logs within the backoff window, the logs will be lost (so I would consider keep-file seriously...)

In my opinion the best thing to do would be to cache the logs locally if the client is unable to send them within the backoff window, and send them later on.
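
If you want this behaviour for every container instead of passing the options on each docker run, the same log-opts can be set daemon-wide in /etc/docker/daemon.json; a sketch using the values from the command above (all log-opt values have to be strings here):

```json
{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "http://0.0.0.0:3100/loki/api/v1/push",
    "loki-timeout": "1s",
    "loki-max-backoff": "800ms",
    "loki-retries": "2",
    "keep-file": "true"
  }
}
```

Note that dockerd has to be restarted for daemon.json changes to apply, and containers that already exist keep the logging options they were created with.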

@kavirajk
Contributor

Agreed on the backoff logic.

Tested with the fluentd log driver; it looks like the same happens there as well, except that fluentd may have a lower default backoff time (so the container stops more quickly). And I see this in the daemon log:

dockerd[1476]: time="2020-10-28T17:50:12.580014937+01:00" level=warning msg="Logger didn't exit in time: logs may be truncated"

Also, another small improvement could be to add a check to see if the loki-url is reachable during container start and fail immediately.

@kavirajk
Contributor

Also, the 5-minute time limit comes from the default max-backoff we use: https://github.com/grafana/loki/blob/master/pkg/promtail/client/config.go#L19

@Pandry

Pandry commented Oct 28, 2020

also another small improvement could be to add a check to see if the loki-url is reachable during start of the container and fail immediately.

I disagree, as starting a service may be more important than having its logs (and debugging may not be that easy).
I would rather use a feature flag and keep it disabled by default.

As I said, in my opinion the best option would be to cache the logs and send them as soon as a Loki endpoint becomes available; in the meantime, find a way to warn the user about the unreachable endpoint and keep caching the logs.

@lux4rd0

lux4rd0 commented Nov 20, 2020

Agreed that a better way of maintaining control over a Docker container when the endpoint is unavailable is critical. I've been experimenting with different architecture deployments of Loki and found that even a kill of the Docker container doesn't work. Not being able to shut down or restart a container just because the Loki driver can't send logs out shouldn't impact my container. I will look at changing my containers' default properties to work around this.

@rkno82

rkno82 commented Nov 20, 2020

Maybe we should accept the behaviour of the docker driver plugin and send the logfiles to a local "kind of daemonset" promtail, which supports the loki push api?

https://grafana.com/docs/loki/latest/clients/promtail/#loki-push-api
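
If you go that route, the driver side only needs its loki-url pointed at the local Promtail's push endpoint instead of Loki itself; a sketch, assuming the Promtail push API is reachable from the Docker plugin at localhost:3500 (the address and port are assumptions; on some setups you may need the host's IP instead):

```json
{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "http://localhost:3500/loki/api/v1/push"
  }
}
```

As Pandry's setup above shows, though, this alone doesn't remove the deadlock risk: the driver still blocks if it cannot reach its push target, so combining it with the reduced-backoff options is still worth considering.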

@slim-bean slim-bean self-assigned this Nov 24, 2020
rfratto added a commit to rfratto/agent that referenced this issue Jan 11, 2023
Fix an issue where being unable to send logs to `loki.write` due to the
client being permanently backlogged would deadlock the Flow controller.

The `loki.write` client may be permanently backlogged when:

* Limits are reached when sending logs to Loki, leading to endless
  request retries.
* Loki has an extended outage.

When an EntryHandler is stopped, it will wait for 5 seconds before
forcibly stopping the goroutine which queues log entries. If this
timeout is reached, any unqueued log entries are permanently lost, as
the positions file will likely be updated past the point where the entry
was read.

While losing logs is not ideal, it's unacceptable for any Flow component
to be able to block the controller. This is a short-term solution to
allow the Flow controller to continue working properly. A long term
solution would be to use a Write-Ahead Log (WAL) for log entries. See
grafana/loki#7993.

Fixes grafana#2716.
Related to grafana/loki#2361.
rfratto added a commit to grafana/agent that referenced this issue Jan 11, 2023
…#2721)

rfratto added a commit to rfratto/agent that referenced this issue Jan 11, 2023
…grafana#2721)

rfratto added a commit to grafana/agent that referenced this issue Jan 11, 2023
* prometheus.relabel: clone labels before relabeling (#2701)

This commit clones the label set before applying relabels. Not
cloning does two things:

1. It forces the computed ID of the incoming series to change (as its
   labels changed)

2. It can cause obscure bugs with relabel rules being applied, such as
   a `keep` action which doesn't work after modifying the original
   slice.

* component/common/loki: drop unqueued logs after 5 seconds on shutdown (#2721)


* prepare for v0.30.2 release

* address review feedback

* operator: Use enableHttp2 field as boolean in libsonnet templates (#2724)

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>

Signed-off-by: Paschalis Tsilias <paschalis.tsilias@grafana.com>
Co-authored-by: Paschalis Tsilias <tpaschalis@users.noreply.github.com>
@margorczynski

margorczynski commented Mar 1, 2023

Hey guys, any progress on this one? I still see this happening.

@daramir

daramir commented Mar 6, 2023

The Loki Docker logging driver will not be deprecated. However, once the Docker service discovery stabilizes my personal recommendation is to use that one.

Hi @jeschkies. Do you know if it's possible to use Promtail with the Docker target + service discovery easily on Docker Desktop (macOS), which creates a VM and doesn't store log files? I'm looking for a solution that works both locally and on the server. I couldn't get Promtail to discover my container logs, and the Docker driver is obviously broken as per this issue (#2361). TIA.

@jeschkies
Contributor

@margorczynski @thisisjaid @Edbtvplays and @feld please see my comment from August 2021. The issue is not that we don't want to fix it. The issue is that we have to decide between retrying sends (and thus locking the daemon) or losing data. This is also documented as a known issue. If you have an idea on how to fix it, I'm all ears.

@daramir unfortunately I don't have a Mac at hand. However, as long as you can expose the Docker daemon API to Promtail it should work. But if you kill the VM, and thus erase the logs before they've been shipped, there's little Promtail can do.
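
For the Docker Desktop case, one sketch that should work is running Promtail itself as a container with only the Docker socket mounted, so the Docker service discovery target can pull logs over the API rather than from files (the config file name and its docker_sd_configs job are assumptions, not something from this thread):

```sh
docker run -d --name promtail \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v "$(pwd)/promtail-config.yaml:/etc/promtail/config.yml:ro" \
  grafana/promtail:latest \
  -config.file=/etc/promtail/config.yml
```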

@MaxZubrytskyi

Hi everyone, does somebody have a working fork with changes that allow losing data when this occurs?
Also, @jeschkies, how about adding a log-opt to drop data if Loki is unavailable?

@horvie

horvie commented May 19, 2023

Hi, you don't need a fork.
For containers where we can afford to lose logs we have added configuration as described in #2361 (comment) and containers are stopped without a problem.

@danthegoodman1

danthegoodman1 commented Jul 3, 2023

Pretty sad that this will block a docker rm --force too, for the default loki-max-backoff of 5 minutes. My guess is you can just drop that value down, but I already switched over to running Vector and mounting the Docker logs directory into it because I don't trust this anymore. Vector won't block the Docker daemon.

https://grafana.com/docs/loki/latest/clients/docker-driver/configuration/

@jeschkies
Contributor

jeschkies commented Jul 10, 2023

@danthegoodman1

mounting the docker logs directory to it because I don't trust this anymore. Vector wont block the docker daemon.

That's what the file-based discovery already does. The logging driver is really for local use cases, and the Docker service discovery is for when you don't have the permissions to mount the logging directory.
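
For completeness, the file-based variant usually means mounting the Docker log directory into Promtail read-only, roughly like this (a sketch assuming a default dockerd installation and an existing Promtail config that scrapes /var/lib/docker/containers):

```sh
docker run -d --name promtail \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  -v "$(pwd)/promtail-config.yaml:/etc/promtail/config.yml:ro" \
  grafana/promtail:latest \
  -config.file=/etc/promtail/config.yml
```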

MasslessParticle pushed a commit that referenced this issue Jul 10, 2023
**What this PR does / why we need it**:
This pulls @Pandry's
[workaround](#2361 (comment))
for the seemingly deadlocked Docker daemon into the documentation.

**Special notes for your reviewer**:

**Checklist**
- [ ] Reviewed the
[`CONTRIBUTING.md`](https://github.com/grafana/loki/blob/main/CONTRIBUTING.md)
guide (**required**)
- [x] Documentation added
- [ ] Tests updated
- [ ] `CHANGELOG.md` updated
- [ ] If the change is worth mentioning in the release notes, add
`add-to-release-notes` label
- [ ] Changes that require user attention or interaction to upgrade are
documented in `docs/sources/upgrading/_index.md`
- [ ] For Helm chart changes bump the Helm chart version in
`production/helm/loki/Chart.yaml` and update
`production/helm/loki/CHANGELOG.md` and
`production/helm/loki/README.md`. [Example
PR](d10549e)
@andoks

andoks commented Jul 10, 2023

@jeschkies

mounting the docker logs directory to it because I don't trust this anymore. Vector wont block the docker daemon.

That's what the file-based discovery already does. The logging driver is really for local use cases, and the Docker service discovery is for when you don't have the permissions to mount the logging directory.

What do you mean by "that's what the file-based discovery already does"? Is there a better way of sending the logs to Loki than using the docker-driver, one that does not risk blocking the way the docker-driver does?

@jeschkies
Contributor

@andoks yes, there's the service discovery, or you could use file discovery, or use journald.

@pharapeti

@jeschkies @btaani

From reading through the docs and this issue, I can see there are three main solutions:

  1. Use Docker loki plugin with workaround to reduce max backoff/retries/timeout
  2. Use Promtail Docker target (not sure which Docker logging driver should be used in this case)
  3. Configure Docker daemon to use json-file or journald logging driver + Docker service discovery

Which is the officially recommended solution to use for new projects?
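
For what it's worth, option 3 usually just means switching the daemon's default log driver back to json-file (or journald) and letting Promtail pick the logs up from disk; a sketch of /etc/docker/daemon.json for that case (the rotation values are arbitrary examples):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```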

angristan added a commit to angristan/jaeger that referenced this issue Dec 1, 2023
* Update Grafana/Loki/Prom pinned versions (especially to get Grafana UI improvements)
* Pin Jaeger/hotrod tags to prevent future issues
* Fix traces endpoint config for hotrod (traces export: Post "http://localhost:4318/v1/traces":
  dial tcp 127.0.0.1:4318: connect: connection refused)
* Fix hotrod metrics scraping (endpoint has moved to the frontend service)
* Fix Grafana dashboard (metric names, labels, migrate to new time series panel)
* Add default Grafana credentials to README
* Fix the loki container being stuck on shutdown by setting shorter
  timeouts (bug with the driver: grafana/loki#2361 (comment))
angristan added a commit to angristan/jaeger that referenced this issue Dec 1, 2023
yurishkuro pushed a commit to jaegertracing/jaeger that referenced this issue Dec 1, 2023
## Which problem is this PR solving?

Currently, the `grafana-integration` example doesn't work properly: if
you run `docker-compose up` in that folder, services will start but only
logging will work, the metrics and tracing won't.

## Description of the changes

* Fix traces endpoint config for hotrod (`traces export: Post
"http://localhost:4318/v1/traces": dial tcp 127.0.0.1:4318: connect:
connection refused`)
* Fix hotrod metrics scraping (endpoint has moved to the frontend
service)
* Pin Jaeger/hotrod tags to prevent future issues
* Fix Grafana dashboard (metric names, labels, migrate to new time
series panel)
* Add default Grafana credentials to README
* Fix the loki container being stuck on shutdown by setting shorter
timeouts (bug with the driver:
grafana/loki#2361 (comment))
* Update Grafana/Loki/Prom pinned versions (especially to get Grafana UI
improvements)

## How was this change tested?

`docker-compose up` 🙂 

<img width="2304" alt="SCR-20231201-cohy"
src="https://github.com/jaegertracing/jaeger/assets/11699655/22016bd9-0f99-40c7-be18-eb733561572a">
<img width="2304" alt="SCR-20231201-cxos"
src="https://github.com/jaegertracing/jaeger/assets/11699655/db761bc3-53ac-41fa-914d-803c73233ad7">
<img width="2304" alt="SCR-20231201-coke"
src="https://github.com/jaegertracing/jaeger/assets/11699655/004c99f0-0d1f-46f1-a9da-f50f0148d377">

## Checklist
- [x] I have read
https://github.com/jaegertracing/jaeger/blob/master/CONTRIBUTING_GUIDELINES.md
- [x] I have signed all commits
- [ ] I have added unit tests for the new functionality
- [ ] I have run lint and test steps successfully
  - for `jaeger`: `make lint test`
  - for `jaeger-ui`: `yarn lint` and `yarn test`

Signed-off-by: Stanislas Lange <git@slange.me>
@keesfluitman

@jeschkies @btaani

From reading through the docs and this issue, I can see there are three main solutions:

1. Use Docker loki plugin with workaround to reduce max backoff/retries/timeout

2. Use Promtail Docker target (_not sure which Docker logging driver should be used in this case_)

3. Configure Docker daemon to use `json-file` or `journald` logging driver + Docker service discovery

Which is the officially recommended solution to use for new projects?

Thanks. I haven't been able to find any working solution yet. As soon as the Loki container goes offline, I'm unable to restart it or otherwise do anything useful with Docker, and only a shutdown or power-down command properly brings Docker down and back up.
I will have to give up on this way of collecting the Docker logs. I get regular outages at night, when the Loki container is somehow downed.

@dtap001

dtap001 commented Jul 31, 2024

This is quite straightforwardly mentioned in the deadlock section:
https://grafana.com/docs/loki/latest/send-data/docker-driver/#known-issue-deadlocked-docker-daemon

@danthegoodman1

danthegoodman1 commented Jul 31, 2024

This is quite straightforwardly mentioned in the deadlock section: https://grafana.com/docs/loki/latest/send-data/docker-driver/#known-issue-deadlocked-docker-daemon

When I raised the issue? Or now?

@Impact123

Impact123 commented Aug 1, 2024

@keesfluitman

This is quite straightforwardly mentioned in the deadlock section: https://grafana.com/docs/loki/latest/send-data/docker-driver/#known-issue-deadlocked-docker-daemon

I believe I tried that once, but it's been a long time.

@jeschkies
Contributor

I wonder if we should finally close this issue.

@longGr

longGr commented Aug 28, 2024

I have the same problem. So I guess it's still a problem. :/

@jeschkies
Contributor

I have the same problem. So I guess it's still a problem. :/

@longGr did you try one of the documented workarounds?
