Specify logging driver and options for docker driver #688

Closed

pajel opened this issue Jan 18, 2016 · 78 comments
@pajel

pajel commented Jan 18, 2016

Please correct me if I am wrong, but I couldn't find in the documentation how to pass the log-driver and log-opt arguments to containers when running them as Nomad tasks, e.g.:
--log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=myLogGroup --log-opt awslogs-stream=myLogStream

I know I can configure the Docker daemon with these arguments, but then I can't specify a different log stream for each container.
If this is currently not possible, I would like to request it as a feature.
Thank you

@diptanu
Contributor

diptanu commented Jan 18, 2016

@pajel Nomad is going to come with logging support in the next release; users will be able to specify a syslog endpoint that Nomad will push logs to.

The easiest way to push logs to CloudWatch would be to push them through logging middleware such as Logstash, which has a syslog input and can forward logs to a number of destinations. Would that work?

You will also be able to stream logs for running tasks directly via the Nomad CLI.

@pajel
Author

pajel commented Jan 18, 2016

@diptanu thank you for your reply. Running Logstash, however, seems like unnecessary overhead to me. Since 1.9, Docker has the awslogs driver built in, so all Nomad would need to do is pass the four arguments mentioned above to docker run, and everything else is taken care of by the Docker daemon.
I believe this would be appreciated by more people, especially small to medium businesses, as configuring and supporting another piece of middleware adds cost and complexity. Thanks for considering it, I will await your decision.

@pires

pires commented Jan 18, 2016

+1 on configurable Docker log rotation, i.e. to prevent server disks from filling up.

Now, regarding log forwarding, I agree with @pajel if you're only targeting Docker logging - which I think you're not, right?

@pajel
Author

pajel commented Jan 18, 2016

@pires yes, I am targeting only the Docker logs; I am not interested in Nomad's own logs here. To explain a little bit:
We have our applications inside containers configured to log to stdout and stderr. Then, by simply adding the four arguments mentioned above to the docker run command, we get all application logs in AWS CloudWatch and can easily parse them for exceptions, create alarms, or trigger scaling actions based on the logs. We don't need to care where the containers are running to search for logs, or install any other middleware. It's simple, fast, and convenient.

Does it make sense? :)

@diptanu
Contributor

diptanu commented Jan 18, 2016

@pajel Totally understand that your setup is convenient and has fewer moving parts. It's a pain that CloudWatch, unlike other cloud logging-as-a-service providers, doesn't have a syslog endpoint. If it did, this would have worked out of the box.

The reason we are doing log rotation and streaming logs in Nomad is that we want users to be able to use the Nomad CLI, run nomad logs --alloc 12345 --task redis, and stream logs directly if they want to debug their applications in real time, while at the same time pushing logs to a centralised logging service. Nomad also supports heterogeneous workloads, which means you could run a binary or any other well-packaged application in a cgroup without needing Docker at all. So we want a solution which works for most of the use cases.

Having said that, let me think about how we can solve this problem for you without requiring you to run a syslog forwarder.

@pajel
Author

pajel commented Jan 18, 2016

Thanks for explaining. Yeah, my use case covers only a small part of Nomad - the docker task driver - and only on AWS, so I understand that it might not be a priority. However, it's a built-in Docker feature, so why not leverage it.
Just as a side-note, an example configuration might look like this:
For awslogs:

task "webservice" {
    driver = "docker"
    config = {
        image = "redis"
        logging {
            driver = "awslogs"
            option = "awslogs-region=us-east-1"
            option = "awslogs-group=myLogGroup"
            option = "awslogs-stream=myLogStream"
        }
    }
}

For json-file:

        logging {
            driver = "json-file"
            option = "max-size=[0-9+][k|m|g]"
            option = "max-file=[0-9+]"
            option = "labels=label1,label2"
            option = "env=env1,env2"
        }

Possible values and their options would match the Docker built-in logging drivers explained here:
https://docs.docker.com/engine/reference/logging/overview/

@pires - I believe it would solve your Docker log rotation problem as well.

Thanks for your time. Much appreciated.

@pires

pires commented Jan 19, 2016

@pajel yes, I'm aware of the drivers but if Nomad implements it, I don't need to configure Docker on every Docker host.

@ilijaljubicic

We also use Docker log drivers to push logs from container stdout to fluentd (which, in our case, runs in a Docker container). The point of using the Docker log driver is that containers can point to different log collectors, which makes things flexible: rather than having syslog as one central log collector, logs can be distributed to different collectors just by pointing at the desired one when running the container.

This makes a lot of sense to us because we are still experimenting with different ways to handle logs and container output.

The above scenario can be used not only for logging, but also for data processing pipelines where, for example, an application passes a data stream to the container's stdout, the Docker log driver redirects it to fluentd, and fluentd sends it to Kafka (so the first container does not need to be a registered Kafka producer, for example).
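As an illustration only, here is what that could look like using pajel's proposed (not yet implemented) syntax from above with Docker's built-in fluentd driver; the image name, address and tag are placeholders:

task "webapp" {
    driver = "docker"
    config = {
        image = "mycompany/webapp"    # placeholder image
        logging {
            driver = "fluentd"
            option = "fluentd-address=localhost:24224"
            option = "tag=webapp.{{.Name}}"
        }
    }
}

fluentd-address and tag are the standard --log-opt keys Docker accepts for its fluentd driver.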

@diptanu
Contributor

diptanu commented Jan 23, 2016

@engine07 So your use case would work with our current design. Fluentd has a syslog input plugin which you can use; in the Nomad job configuration you will just have to mention the syslog input port, and Nomad will push all the logs from tasks to Fluentd, where you can do whatever you want with them. Does that make sense?

@ilijaljubicic

@diptanu Actually it does. I was not aware that it would be possible to specify a syslog input at the job level in the Nomad configuration.

@diptanu
Contributor

diptanu commented Jan 25, 2016

@engine07 Yeah the PR hasn't landed yet. Working on it right now!

@c4milo
Contributor

c4milo commented Feb 27, 2016

Any update on this?

@marceldegraaf

@diptanu any updates on this?

@diptanu
Contributor

diptanu commented Mar 15, 2016

@c4milo @marceldegraaf So for 0.3 we haven't done any work on forwarding log messages to a remote syslog endpoint. But we write all the logs into /alloc/logs, so you can run any log collector, such as the AWS CloudWatch agent, as one of the tasks in your task group and have it scrape the files in that directory and push them to CloudWatch or any other service.

Would that work as a stop-gap workaround until we have the remote syslog endpoint? We just need some more bandwidth to do that work.
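A rough sketch of that workaround, with a shipper task in the same group; the images and the LOG_DIR variable are placeholders, the collector's own configuration is omitted, and it assumes the shared allocation directory is exposed to tasks via NOMAD_ALLOC_DIR:

group "web" {
    task "app" {
        driver = "docker"
        config = {
            image = "mycompany/webapp"    # placeholder; stdout/stderr end up under alloc/logs
        }
    }
    task "log-shipper" {
        driver = "docker"
        config = {
            image = "mycompany/log-shipper"    # placeholder collector image, e.g. a CloudWatch agent
        }
        env {
            # hypothetical variable the collector reads to find the files to scrape
            LOG_DIR = "${NOMAD_ALLOC_DIR}/logs"
        }
    }
}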

@marceldegraaf

Thanks for your response @diptanu.

I wasn't aware of the /alloc/logs directory. Is that a stable location for stdout/stderr of job logs? Are all job logs written here, or only Docker logs? Are these files rotated automatically by Nomad, or will they grow until we run out of space?

Currently I grab all Docker container logs with logspout but if the /alloc/logs location is stable I might use that in conjunction with Filebeat to forward to Logstash. That lets me grab all logs (not just Docker) and I won't have to mount the Docker socket on the Logspout container.

@diptanu
Contributor

diptanu commented Mar 16, 2016

@marceldegraaf All logs of tasks in an allocation are written into '/alloc/logs'

@marceldegraaf

Thanks, and are those rotated by Nomad? Is that location stable or is it expected to change in future Nomad versions?

@diptanu
Contributor

diptanu commented Mar 16, 2016

Yes, they are rotated by Nomad! Please see the documentation for how you can configure the rotation behaviour. And we don't think the location is going to change in the future.
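For reference, the per-task rotation knobs live in the logs block; a minimal sketch with illustrative values (see the task documentation for the exact fields and units):

task "webservice" {
    driver = "docker"
    config = {
        image = "redis"
    }
    logs {
        # keep at most 10 rotated files of 10 MB each per stream
        max_files     = 10
        max_file_size = 10
    }
}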

@vladimir-kozyrev

Hi @diptanu,
Please clarify the following:

in the nomad job configuration you will have to just mention the syslog input port and nomad will push all the logs from tasks to fluentd

How can I specify that in job configuration?
I don't see that option in the documentation.

@dadgar
Contributor

dadgar commented Apr 11, 2016

@fieryvova for the docker driver, we automatically forward the logs to our syslog collector so we can rotate the logs.

@vladimir-kozyrev

@dadgar, OK.
It seems that there is no easy way to forward STDOUT from a container using Nomad.
--log-driver and --log-opt are native Docker features and I can't encapsulate them in a Docker image.
Support for those features would be highly appreciated.

@ketzacoatl
Contributor

I have a question for you @dadgar... from what I can see, the intention is to have nomad logs foo give you more direct access to the logs from a task, irrespective of where you are on the cluster, and which node the task is actually running on.

The question: would we be able to use nomad logs foo, while also having logs from a task in a container sent to a central store such as ELK, some other syslog collector, or a centralized logging service like loggly? In other words, would one interfere with the other?

@dadgar
Contributor

dadgar commented Apr 11, 2016

@ketzacoatl That is a goal. Log forwarding did not make the 0.3.X cut but will come in the future. For now you can have a log shipper running in the same task group that ships logs where needed. This is what we do in production.

@dadgar
Contributor

dadgar commented Apr 11, 2016

@fieryvova I am not sure I understand; STDOUT and STDERR are forwarded to Nomad.

@marceldegraaf

@fieryvova I'm simply running filebeat to monitor Nomad's job logs (in my case in /opt/nomad/data/alloc/*/alloc/logs/*) and forward them to ElasticSearch.

Works like a charm, and filebeat's memory footprint is considerably smaller than that of Logstash.

@vladimir-kozyrev

@dadgar, yes, sure, but a path that looks like /var/nomad/alloc/0f48503b-c0a8-8275-a423-3cd5310c83ad/alloc/logs/app.stdout.0 is not something that can be easily configured with a log forwarder in my case.

@marceldegraaf, thanks for the suggestion.

My use case is the following:
There are multiple containers that run in a cluster and produce very similar log output, but I have to make a distinction between them. With --log-driver and --log-opt I can tag containers before running them, but if I forward /opt/nomad/data/alloc/*/alloc/logs/*, I will have to write complex tagging rules to separate one log from the others.

Does it make sense?

@marceldegraaf

@fieryvova I see. AFAIK there's no way to do that automatically now with Nomad and Docker. You could use Logstash on the container runners to add the job's UUID to the collected log events, but that may not be very useful.

@pshima
Contributor

pshima commented Apr 12, 2016

@fieryvova we do something similar in production that may work for you.

In our task group we have 2 tasks

  1. The application running in a Docker container (stderr and stdout go to the alloc/logs dir)
  2. A docker container that runs a log shipping agent that ships the logs from the relative alloc/logs dir

Because we know what app is running in 1, we can "tag" the logs with whatever data we want to help us identify them. In our case we "tag" them with the task group name, and the allocation id so they are easily recognizable in our centralized logging solution. We do this by passing environment variables in the nomad job file which then a wrapper script uses to write our centralized logging config. So then we use the same container for all logging jobs and just pass in environment variables for any dynamic config.

I am not sure that solves your use case as I do not use logstash but I hope it gives you a bit more detail on one possible solution. This does have the downside of needing to run a log collector coprocess for each task but we're overall happy with it as our agent is fairly lightweight.
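A trimmed-down sketch of the tagging piece of that setup; the image and variable names are hypothetical, and it assumes Nomad's runtime variables (such as NOMAD_ALLOC_ID) are available for interpolation:

task "log-shipper" {
    driver = "docker"
    config = {
        image = "mycompany/log-shipper"    # placeholder; the same image is reused for every job
    }
    env {
        # a wrapper script inside the image renders the logging config from these values
        LOG_SOURCE_DIR = "${NOMAD_ALLOC_DIR}/logs"
        LOG_TAG_GROUP  = "api"
        LOG_TAG_ALLOC  = "${NOMAD_ALLOC_ID}"
    }
}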

@Fluxxo

Fluxxo commented Sep 21, 2016

Dear all,

I faced the same problem as well because I needed to integrate my Docker containers into our existing Splunk infrastructure with all its alerting, escalation management, you name it.
Another reason why we have to log to a central logging server is that my teams of developers are not allowed to have access to production systems, and hence are not allowed to watch log files generating error messages in production.

I extended the docker driver configuration with a logging block:

logging {
    type = "syslog" {
        syslog-address = "tcp://127.0.0.1:1415"
        tag = "your/logging/tag/{{.ImageName}}/{{.Name}}"
        format = "rfc3164"
    }
}

All Docker options may be passed in as desired, and here at my company it works without any trouble.
https://docs.docker.com/engine/admin/logging/overview/

I also added support for mounting in folders from the host. This is a side feature that I need in order to mount in credentials/certificates I don't want to see baked into my Docker files.

I also don't support Alex's argument about the abstraction. From my point of view, the abstraction is the ability to plug in drivers, and the drivers should leverage the feature set of the underlying technology, in this case Docker. Abstracting away features is not the way to go, I think. It simply doesn't take into account how people actually use the software and is based on assumptions and prettiness.
This is by no means meant as an offense - I really love what HashiCorp is doing, but I simply disagree in this case.

For all those wanting to use nomad 0.4.1 with the logging settings provided, I created a fork and released a new version (0.4.2) here: https://github.com/Fluxxo/nomad/releases

As I don't think the project's maintainers will merge the pull request, I won't even bother submitting one :)

@ketzacoatl
Contributor

@diptanu, is it undesirable to consider accepting this type of PR and then, in the future, removing/updating/refactoring it into the more ideal design you desire?

@Fluxxo

Fluxxo commented Sep 21, 2016

Someone just gave me a hint that there's a bug with the default docker values.

I will fix it ASAP and release a new version on my fork.

Sorry for this, but these are my first lines of Go code :)

@dadgar
Contributor

dadgar commented Sep 21, 2016

@Fluxxo Good job! All for fixing real problems you have! I hope down the line you will agree with me when one logging config applies to all drivers :)

@ketzacoatl As for merging this, no we will not be. Nomad 0.6 will bring plugin support and one of the plugin types will be for logging.

@avinson

avinson commented Sep 21, 2016

@dadgar while I appreciate the work towards nomad plugin support and the overall elegance of the design of nomad, I still feel strongly that this is a case where misguided design philosophy or business strategy hurts people that actually use nomad day-to-day.

Docker currently has 9 logging drivers, many of which are highly specialized, like gcplogs or awslogs. Presumably, when nomad plugins land in 3-6 months, none of these will be supported initially and they will have to be reimplemented (or may only be available in some enterprise version). Given that the docker ecosystem will always be much larger than the nomad ecosystem, it's pretty much guaranteed that nomad logging plugins will always be less varied and less functional than their docker equivalents.

Being able to use one logging config across all drivers is pretty neat... a nice, elegant design... but it doesn't help me at all. However, being able to use docker's existing functionality would help me greatly and immediately. There's simply no good reason not to allow operators to opt out and pass whatever options they want to the docker API. Yes, maybe that would break things in certain cases, but if I opt out then I'm explicitly agreeing to take that risk.

Btw, I really love all the work you guys are doing at hashicorp. I attended hashiconf and have been using consul and other hashicorp tools in production for almost two years. I am really hoping you will reconsider this business/design strategy of wrapping docker and not allowing for any kind of opt-out. It seems obvious that users want this and that it will help speed nomad adoption and grow the ecosystem.

@erkki

erkki commented Sep 21, 2016

As a datapoint, using multiple nomad drivers together, I can appreciate the core team's decision to keep semantics and promises clean and tight (and uniform across drivers). I would rather configure logging uniformly at the nomad level than deal with configuring drivers separately.

Allowing the passing of arbitrary options to drivers might be a necessary evil though, perhaps guarded by an admin setting similar to the raw_exec driver?
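For comparison, this is roughly how the raw_exec driver is gated in the Nomad client configuration today; an analogous operator-level switch for passing raw Docker logging options would be hypothetical:

client {
    # the driver stays disabled unless an operator explicitly turns it on
    options = {
        "driver.raw_exec.enable" = "1"
    }
}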

@ketzacoatl
Contributor

I'm 100% for clean and elegant design; however, I cannot fathom why there is zero interest in making this available immediately so users have something meaningful here and now (and removing it in the future when a more perfect design has been implemented).

It's really hard to sell nomad when support for volumes, logging, etc is blocked like this. Kubernetes moves fast, is terribly designed, and a complete mess (if you ask me), but their support for basics means organizations choose it over better solutions like nomad.

@c4milo
Contributor

c4milo commented Sep 22, 2016

I personally prefer slow and well done over fast and unreliable. Take your time to do it well. There are different ways to work around this in the meantime.

@ketzacoatl
Contributor

There is absolutely no reason why a solution for the immediate here and now has to be unreliable. It's totally possible to improve the immediate situation in core, while playing well with future plans.

@c4milo
Contributor

c4milo commented Sep 22, 2016

You can easily patch the Docker driver to do what you want to do until official support comes out.

@ketzacoatl
Contributor

Maybe for some, but certainly not for all. Maintaining forks can carry significant costs (such as needing to also maintain the release distribution rather than relying on what upstream publishes).

@hatt

hatt commented Sep 22, 2016

I'm with @ketzacoatl here. While it's possible for me to maintain a Nomad fork with a patch on it until the desired functionality is in core, that doesn't mean the rest of my team can do it. There's a lot of overhead in packaging forked code into enterprise or other production environments, and many organisations will use that as a reason not to adopt software. My company currently uses Serf, Terraform, and Consul, but not having control of Docker logging in Nomad is a deal breaker and we're looking to go back to ECS or at least Kubernetes. Once a solution like that is implemented, it's almost impossible to move to something else even if it gets the desired functionality in the future.

Not having such a basic thing available now effectively blocks Nomad as a solution for many organisations until they re-evaluate their whole stack, regardless of how good Nomad becomes.

@Fluxxo

Fluxxo commented Sep 22, 2016

Hi there,

as announced, I released a new version, v0.4.3, here: https://github.com/Fluxxo/nomad/releases/tag/v0.4.3
It fixes a bug that prevented Docker from starting containers when a logging driver other than syslog was selected. This time I tested against syslog, journald, json-file and splunk. It works like a charm :)

I hope you like the release and I welcome any input.

Following up on the discussion here, I'd like to state something again - and I'll try to stay as neutral as I can.

As already mentioned, I really love what HashiCorp is doing. In my opinion, HashiCorp stands for software that just works, with little overhead, solving real-world problems.
Furthermore, I think a product can only be successful if the community and users love it. In my understanding, it's the best asset you guys at HashiCorp can have: users loving your product and sticking with you because you listen and react.

Let me tell you a short story:

We were using etcd (yes, my fault :D) in conjunction with confd (issue: kelseyhightower/confd#102), which is actually pretty similar to consul-template, written and maintained by @kelseyhightower. Though I do respect Kelsey's opinion on that point, I think it disrespects the community using this piece of software - there is a need, and the argumentation is about cleanliness and the confd model, not about the problems to solve.
This is a point I often observe when talking to software architects: the focus is more on structure & cleanliness than on the problem. A lot of projects go down the river that way, and I don't consider this an agile mindset.

This decision led us to:

  • Kicking out confd
  • Kicking out etcd
  • Using consul & consul-template

Let's port this issue to this discussion:

@dadgar tells us the Docker logging driver option does not fit into the abstraction model Nomad tries to keep up. In another post, it is mentioned that this might be a pro feature you have to pay for. Another time later, we're talking about plugins. I do love plugins as they promise a lot of flexibility, but this pushes the trouble onto the community.
I totally agree with @avinson that the decision disrespects the needs of the users. Kubernetes is really a complete pain, complex to set up and even more complex to understand. Same goes for Mesos. Nomad is clean, lightweight, well documented and integrates nicely into HashiCorp's ecosystem. And now you're going to tell me that fully leveraging the Docker API will break Nomad's design?
This doesn't make sense to me. Looking at the 0.4.x branch, Nomad already has plugins, and they're called drivers.

Coming from a devops perspective, I am totally and absolutely fine with logging options being held in manifests and differing from team to team. Different logging infrastructure needs different logging settings. Period.

I know this sounds very emotional, but I hope you get the point: I want real-world problems to be solved, not a piece of software that wins a design contest. If both work together, that is clearly the silver bullet. In this case I don't see anything breaking.

One more point towards @c4milo: maintaining a fork and patching the driver each time an update comes out is just not the right solution. As soon as you work in a company with structured IT governance processes, it becomes difficult to keep this working over a longer period of time.
And it does not correspond to my mindset of delivering things at high quality and with short cycle times. Not everyone is able to write Go code, set the environment up, spread the knowledge across the team, and so on. This is more than just patching the driver and going for it.

Ok, so I hope you don't get me wrong. I just want the best for us all, users and maintainers as well :)

@erkki

erkki commented Sep 22, 2016

Why the resistance to using a sidecar solution to bridge the time gap until plugins? Sure, it's a bit more work, but so is adding short-lived features and needing to deprecate them later. I'm sure you can already appreciate the extra work required even for "trivial" features yourself now.

@cleiter

cleiter commented Sep 22, 2016

Is there an example of how such a sidecar solution would work? Since I couldn't configure Nomad to use the Docker awslogs driver, I wrote my own awslogs appender for Logback... but I would rather use something standard.

@erkki

erkki commented Sep 22, 2016

I'm not sure there's a canonical example anywhere, and logging setups tend to vary hugely. I'm planning on using fluentd as the sidecar, forwarding to central logging.

@Fluxxo

Fluxxo commented Sep 22, 2016

@erkki That's why I'm releasing on my own now. Deprecating is a topic, I am with you on that.
So is the pain of setting up and later removing the extra pieces you might need to implement.

@ketzacoatl
Contributor

Regarding sidecars, they may work for some, but they also add performance overhead that isn't always welcome or feasible (with a sidecar you run one per job, so if you have multiple jobs per host, you can end up spending more CPU/mem on logging than on doing actual work).

@dadgar
Contributor

dadgar commented Sep 22, 2016

I've given this more thought and have decided to do the following: add an operator option that allows the docker driver to change logging behavior, and then add a config block to the docker driver to set the logging driver.

@Fluxxo Could you please open a PR with what you have?

I have also filed an issue against Docker so that we don't have to sacrifice nomad logs capability in the case that a logging driver is set: moby/moby#26829
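Once that lands, the job-level configuration for pajel's original awslogs case could look roughly like this; this is purely a sketch, and the exact block names and nesting will be whatever the eventual PR settles on:

task "webservice" {
    driver = "docker"
    config = {
        image = "redis"
        logging {
            type = "awslogs"
            config {
                # the same options Docker itself accepts for the awslogs driver
                awslogs-region = "us-east-1"
                awslogs-group  = "myLogGroup"
                awslogs-stream = "myLogStream"
            }
        }
    }
}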

@dadgar dadgar reopened this Sep 22, 2016
@dadgar dadgar added this to the v0.5.0 milestone Sep 22, 2016
@Fluxxo

Fluxxo commented Sep 23, 2016

I'll submit the PR ASAP, currently busy getting into my weekend :)

@Fluxxo

Fluxxo commented Sep 23, 2016

I just created the pull request.
Please get in touch with me if there's something unclear.

@dadgar
Contributor

dadgar commented Oct 10, 2016

Fixed by #1797

@github-actions

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Dec 19, 2022