[docs] Add early draft of Elastic Log Driver docs #15799

Merged · 5 commits · Feb 10, 2020
358 changes: 358 additions & 0 deletions x-pack/dockerlogbeat/docs/configuration.asciidoc
@@ -0,0 +1,358 @@
[[log-driver-configuration]]
== {log-driver} configuration options

++++
<titleabbrev>Configuration options</titleabbrev>
++++

experimental[]

Use the following options to configure the {log-driver-long}. You can
pass these options with the `--log-opt` flag when you start a container, or
you can set them in the `daemon.json` file for all containers. For detailed
examples, see <<log-driver-usage-examples>>.
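
// The examples below are draft sketches: the plugin alias and tag are assumed
// to match the installation topic, and all values are placeholders.

For example, to pass options when you start a single container:

["source","sh",subs="attributes"]
----
# Placeholder values; adjust the plugin name, tag, and credentials to your setup.
docker run --log-driver=elastic/{log-driver-alias}:{version} \
           --log-opt output.elasticsearch.hosts="https://myhost:9200" \
           --log-opt output.elasticsearch.username="myusername" \
           --log-opt output.elasticsearch.password="mypassword" \
           -it debian:jessie /bin/bash
----

Or set the same options for all containers in the `daemon.json` file (typically
`/etc/docker/daemon.json` on Linux) and restart the Docker daemon:

["source","json",subs="attributes"]
----
{
  "log-driver": "elastic/{log-driver-alias}:{version}",
  "log-opts": {
    "output.elasticsearch.hosts": "https://myhost:9200",
    "output.elasticsearch.username": "myusername",
    "output.elasticsearch.password": "mypassword"
  }
}
----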

// I've included most of the options that are documented for outputs +
// logging. I've shortened and edited the descriptions because they were a bit
// unwieldy. Whichever options we decide to keep should get reviewed.

// Not sure yet if we want to provide usage examples inline in
// this topic or in a separate topic. Depends on how many options we decide to
// document. The installation section already shows the basic syntax.

[float]
=== {ecloud} options

// Do the cloud options work? It seems that we should be able to specify these
// options without having to also specify hosts, but that doesn't work.

[options="header"]
|=====
|Option | Description | Default

|`cloud.id`
|The Cloud ID found in the Elastic Cloud web console. This ID is
used to resolve the {stack} URLs when connecting to {ess} on {ecloud}.
|

|`cloud.auth`
|The username and password combination for connecting to {ess} on {ecloud}. The
format is `"username:password"`.
|
|=====
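
// Draft sketch: the cloud.id and cloud.auth values below are placeholders, and
// (per the question above) it's not yet confirmed that these options work
// without also specifying hosts.

For example, the intended syntax looks like this:

["source","sh",subs="attributes"]
----
# Placeholder values; use the Cloud ID and credentials from your deployment.
docker run --log-driver=elastic/{log-driver-alias}:{version} \
           --log-opt cloud.id="MyDeployment:c29tZS1sb25nLXN0cmluZw==" \
           --log-opt cloud.auth="elastic:mypassword" \
           -it debian:jessie /bin/bash
----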

[float]
=== {es} output options

// QUESTION: Does the Elastic Log Driver support these options?
// `output.logstash.indices
// `output.elasticsearch.pipelines`
// `output.elasticsearch.max_retries`

// QUESTION: Which SSL options do we want to document?

// TODO: Add SSL options.


[options="header"]
|=====
|Option | Description | Default

|`output.elasticsearch.hosts`
|The list of {es} nodes to connect to. Specify each node as a `URL` or
`IP:PORT`. For example: `http://192.0.2.0`, `https://myhost:9230` or
`192.0.2.0:9300`. If no port is specified, the default is `9200`.
|`"localhost:9200"`

|`output.elasticsearch.protocol`
|The protocol (`http` or `https`) that {es} is reachable on. If you specify a
URL for `hosts`, the value of `protocol` is overridden by whatever scheme you
specify in the URL.
|`http`

|`output.elasticsearch.username`
|The basic authentication username for connecting to {es}.
|

|`output.elasticsearch.password`
|The basic authentication password for connecting to {es}.
|

|`output.elasticsearch.index`
|A {beats-ref}/config-file-format-type.html#_format_string_sprintf[format string]
value that specifies the index to write events to when you're using daily
indices. For example: +"dockerlogs-%{+yyyy.MM.dd}"+.
|?????

3+|*Advanced:*

|`output.elasticsearch.backoff.init`
|The number of seconds to wait before trying to reconnect to {es} after
a network error. After waiting `backoff.init` seconds, the {log-driver}
tries to reconnect. If the attempt fails, the backoff timer is increased
exponentially up to `backoff.max`. After a successful connection, the backoff
timer is reset.
|1s

|`output.elasticsearch.backoff.max`
|The maximum number of seconds to wait before attempting to connect to
{es} after a network error.
|60s

|`output.elasticsearch.bulk_max_size`
|The maximum number of events to bulk in a single {es} bulk API index request.
Specify 0 to allow the queue to determine the batch size.
|50

|`output.elasticsearch.compression_level`
|The gzip compression level. Valid compression levels range from 1 (best speed)
to 9 (best compression). Specify 0 to disable compression. Higher compression
levels reduce network usage, but increase CPU usage.
|0

|`output.elasticsearch.escape_html`
|Whether to escape HTML in strings.
|`false`

|`output.elasticsearch.headers`
|Custom HTTP headers to add to each request created by the {es} output. Specify
multiple header values for the same header name by separating them with a comma.
|

|`output.elasticsearch.loadbalance`
|Whether to load balance when sending events to multiple hosts. The load
balancer also supports multiple workers per host (see
`output.elasticsearch.worker`).
|`false`

|`output.elasticsearch.parameters`
|A dictionary of HTTP parameters to pass in the URL with index operations.
|

|`output.elasticsearch.path`
|An HTTP path prefix that is prepended to the HTTP API calls. This is useful for
cases where {es} listens behind an HTTP reverse proxy that exports the API under
a custom prefix.
|

|`output.elasticsearch.pipeline`
|A {beats-ref}/config-file-format-type.html#_format_string_sprintf[format string]
value that specifies the {ref}/ingest.html[ingest node pipeline] to write events
to.
|

|`output.elasticsearch.proxy_url`
|The URL of the proxy to use when connecting to the {es} servers. Specify a
`URL` or `IP:PORT`.
|

|`output.elasticsearch.timeout`
|The HTTP request timeout in seconds for the {es} request.
|90

|`output.elasticsearch.worker`
|The number of workers per configured host publishing events to {es}. Use with
load balancing mode (`output.elasticsearch.loadbalance`) set to `true`. Example:
If you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
host).
|1

|=====
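
// Draft sketch: the values below are illustrative only, not recommendations.

For example, a few of these options set globally in `daemon.json`:

["source","json",subs="attributes"]
----
{
  "log-driver": "elastic/{log-driver-alias}:{version}",
  "log-opts": {
    "output.elasticsearch.hosts": "https://myhost:9200",
    "output.elasticsearch.timeout": "60",
    "output.elasticsearch.bulk_max_size": "200",
    "output.elasticsearch.compression_level": "3"
  }
}
----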


[float]
=== {ls} output options

[options="header"]
|=====
|Option | Description | Default

|`output.logstash.hosts`
|The list of known {ls} servers to connect to. If load balancing is
disabled, but multiple hosts are configured, one host is selected randomly
(there is no precedence). If one host becomes unreachable, another one is
selected randomly. If no port is specified, the default is `5044`.
|`"localhost:5044"`

|`output.logstash.index`
|The index root name to write events to. For example, +"dockerlogs"+ generates
+"dockerlogs-{version}-YYYY.MM.DD"+ indices (such as
+"dockerlogs-{version}-{docyear}.04.26"+).
|?????

3+|*Advanced:*

|`output.logstash.backoff.init`
|The number of seconds to wait before trying to reconnect to {ls} after
a network error. After waiting `backoff.init` seconds, the {log-driver}
tries to reconnect. If the attempt fails, the backoff timer is increased
exponentially up to `backoff.max`. After a successful connection, the backoff
timer is reset.
|1s

|`output.logstash.backoff.max`
|The maximum number of seconds to wait before attempting to connect to
{ls} after a network error.
|60s

|`output.logstash.bulk_max_size`
|The maximum number of events to bulk in a single {ls} request. Specify 0 to
allow the queue to determine the batch size.
|2048

|`output.logstash.compression_level`
|The gzip compression level. Valid compression levels range from 1 (best speed)
to 9 (best compression). Specify 0 to disable compression. Higher compression
levels reduce network usage, but increase CPU usage.
|0

|`output.logstash.escape_html`
|Whether to escape HTML in strings.
|`false`

|`output.logstash.loadbalance`
|Whether to load balance when sending events to multiple {ls} hosts. If set to
`false`, the driver sends all events to only one host (determined at random) and
switches to another host if the selected one becomes unresponsive.
|`false`

|`output.logstash.pipelining`
|The number of batches to send asynchronously to {ls} while waiting for an ACK
from {ls}. Specify 0 to disable pipelining.
|2

|`output.logstash.proxy_url`
|The URL of the SOCKS5 proxy to use when connecting to the {ls} servers. The
value must be a URL with a scheme of `socks5://`. You can embed a
username and password in the URL (for example,
`socks5://user:password@socks5-proxy:2233`).
|

|`output.logstash.proxy_use_local_resolver`
|Whether to resolve {ls} hostnames locally when using a proxy. If `false`,
name resolution occurs on the proxy server.
|`false`

|`output.logstash.slow_start`
|When enabled, only a subset of events in a batch are transferred per
transaction. If there are no errors, the number of events per transaction
is increased up to the bulk max size (see `output.logstash.bulk_max_size`).
On error, the number of events per transaction is reduced again.
|`false`

|`output.logstash.timeout`
|The number of seconds to wait for responses from the {ls} server before
timing out.
|30

|`output.logstash.ttl`
|Time to live for a connection to {ls} after which the connection will be
re-established. Useful when {ls} hosts represent load balancers. Because
connections to {ls} hosts are sticky, operating behind load balancers can lead
to uneven load distribution across instances. Specify a TTL on the connection
to distribute connections across instances. Specify 0 to disable this feature.
This option is not supported if `output.logstash.pipelining` is set.
|0

|`output.logstash.worker`
|The number of workers per configured host publishing events to {ls}. Use with
load balancing mode (`output.logstash.loadbalance`) set to `true`. Example:
If you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
host).
|1

|=====
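
// Draft sketch: this assumes that setting output.logstash.* options is what
// selects the {ls} output; values are placeholders.

For example, to send a container's logs to {ls} instead of {es}:

["source","sh",subs="attributes"]
----
# Placeholder host and tuning values; adjust to your environment.
docker run --log-driver=elastic/{log-driver-alias}:{version} \
           --log-opt output.logstash.hosts="myhost:5044" \
           --log-opt output.logstash.timeout=45 \
           -it debian:jessie /bin/bash
----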


[float]
=== Logging options

//Did not include logging.files.redirect_stderr because it's experimental.

[options="header"]
|=====
|Option | Description | Default

|`logging.level`
|Minimum log level. One of `debug`, `info`, `warning`, or `error`.
|`info`

|`logging.to_stderr`
|When `true`, writes all logging output to standard error output.
|`false`

|`logging.to_syslog`
|When `true`, writes all logging output to the syslog. Not supported on Windows.
|`false`

|`logging.to_eventlog`
|When `true`, writes all logging output to the Windows Event Log.
|`false`

|`logging.to_files`
|When `true`, writes all logging output to files. The log files are automatically
rotated when the log file size limit is reached.
|`true`

|`logging.selectors`
|A list of debugging-only selector tags. Use `*` to enable debug output for all
components. For example, add `publish` to display all the debug messages related
to event publishing.
|

|`logging.files.path`
|The directory that log files are written to.
|????

|`logging.files.name`
|The name of the file that logs are written to.
|`docker.*`

|`logging.json`
|When `true`, logs messages in JSON format.
|`false`

3+|*Advanced:*

|`logging.metrics.enabled`
|If `true`, the {log-driver} periodically logs internal metrics that have
changed in the last period. For each metric that changed, the delta from the
value at the beginning of the period is logged. Also, the total values for all
non-zero internal metrics are logged on shutdown.
|`true`

|`logging.files.interval`
|Enable log file rotation on time intervals in addition to size-based rotation.
Intervals must be at least 1s. Values of 1m, 1h, 24h, 7*24h, 30*24h, and 365*24h
are boundary-aligned with minutes, hours, days, weeks, months, and years as
reported by the local system clock. All other intervals are calculated from the
Unix epoch.
|0

|`logging.files.keepfiles`
|The number of most recent rotated log files to keep on disk. Older files are
deleted during log rotation. Valid range is 2 to 1024 files.
|7

|`logging.files.permissions`
|The permissions mask to apply when rotating log files. The `permissions` option
must be a valid Unix-style file permissions mask expressed in octal notation. In
Go, numbers in octal notation must start with `0`.
|0600

|`logging.metrics.period`
|The period after which to log the internal metrics.
|30s

|`logging.files.rotateeverybytes`
|The maximum size of a log file. If the limit is reached, a new log file is
generated.
|10485760 (10 MB)

|`logging.files.rotateonstartup`
|When `true`, rotates existing logs on startup rather than appending to the
existing file.
|`true`

|=====
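
// Draft sketch: this assumes the logging.* options control the plugin's own
// log output, as they do for other Beats; values are placeholders.

For example, to turn on debug logging for the plugin while troubleshooting a
single container:

["source","sh",subs="attributes"]
----
# Placeholder values; adjust the host and plugin tag to your setup.
docker run --log-driver=elastic/{log-driver-alias}:{version} \
           --log-opt output.elasticsearch.hosts="https://myhost:9200" \
           --log-opt logging.level=debug \
           -it debian:jessie /bin/bash
----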
21 changes: 21 additions & 0 deletions x-pack/dockerlogbeat/docs/index.asciidoc
@@ -0,0 +1,21 @@
= Elastic Log Driver for Docker

:libbeat-dir: {docdir}/../../../libbeat/docs
:log-driver: Elastic Log Driver
:log-driver-long: Elastic Log Driver for Docker
:log-driver-alias: elastic-logging-plugin
:docker-version: ???

include::{libbeat-dir}/version.asciidoc[]

include::{asciidoc-dir}/../../shared/versions/stack/{source_branch}.asciidoc[]

include::{asciidoc-dir}/../../shared/attributes.asciidoc[]

include::overview.asciidoc[]

include::install.asciidoc[]

include::configuration.asciidoc[]

include::usage.asciidoc[]