Run EA container + managed by fleet + standalone + autodiscovery #2366

Merged 18 commits on Nov 21, 2022. Changes shown from 16 commits.

Commits:
ed91154 Run EA container + managed by fleet + standalone + autodiscovery (constanca-m, Nov 15, 2022)
fda02ee Update docs/en/ingest-management/elastic-agent/configuration/autodisc… (constanca-m, Nov 16, 2022)
3c1a6c0 Update docs/en/ingest-management/elastic-agent/configuration/autodisc… (constanca-m, Nov 16, 2022)
8943aae Update docs/en/ingest-management/elastic-agent/configuration/autodisc… (constanca-m, Nov 16, 2022)
857accd Update docs/en/ingest-management/elastic-agent/configuration/autodisc… (constanca-m, Nov 16, 2022)
849203d Update docs/en/ingest-management/elastic-agent/configuration/autodisc… (constanca-m, Nov 16, 2022)
2a83c6e Update docs/en/ingest-management/elastic-agent/run-container-common/d… (constanca-m, Nov 16, 2022)
761cb1b Apply suggestions. (constanca-m, Nov 17, 2022)
19cd7c5 Corrected Pod to pod on yaml (constanca-m, Nov 17, 2022)
4f97593 Changed provider link on autodiscover. (constanca-m, Nov 18, 2022)
65c4376 Update docs/en/ingest-management/elastic-agent/running-on-kubernetes-… (constanca-m, Nov 18, 2022)
c9303ff Update docs/en/ingest-management/elastic-agent/running-on-kubernetes-… (constanca-m, Nov 18, 2022)
5fc2303 Update docs/en/ingest-management/elastic-agent/running-on-kubernetes-… (constanca-m, Nov 18, 2022)
ee84ed9 Update docs/en/ingest-management/elastic-agent/running-on-kubernetes-… (constanca-m, Nov 18, 2022)
055f367 Update docs/en/ingest-management/elastic-agent/configuration/autodisc… (constanca-m, Nov 21, 2022)
54f9253 Some corrections. (constanca-m, Nov 21, 2022)
0127865 Removed unnecessary annotations. (constanca-m, Nov 21, 2022)
bb85343 Merge branch 'main' into update-docs (constanca-m, Nov 21, 2022)
@@ -0,0 +1,12 @@
[[elastic-agent-kubernetes-autodiscovery]]
= Kubernetes autodiscovery with {agent}

When you run applications on containers, they become moving targets to the monitoring system. Autodiscover allows you to track them and adapt settings as changes happen. By defining configuration templates, the autodiscover subsystem can monitor services as they start running.

To use autodiscovery, you will need to modify the manifest file of the {agent}. Refer to <<running-on-kubernetes-standalone>> to learn how to retrieve and configure it.
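If you have not retrieved the manifest yet, one way to download it is with `curl` (a sketch; the URL assumes the current layout of the https://github.com/elastic/elastic-agent[elastic-agent] repository):

["source", "sh", subs="attributes"]
------------------------------------------------
curl -L -O https://raw.githubusercontent.com/elastic/elastic-agent/main/deploy/kubernetes/elastic-agent-standalone-kubernetes.yaml
------------------------------------------------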

There are two different ways to use autodiscovery:

* <<conditions-based-autodiscover>>

* <<hints-annotations-autodiscovery>>
@@ -0,0 +1,294 @@
[[conditions-based-autodiscover]]
= Conditions based autodiscover

You can define autodiscover conditions in each input to allow {agent} to automatically identify Pods and start monitoring them using predefined integrations. Refer to <<elastic-agent-input-configuration>> to get an idea of how inputs are configured.

== Example: Target Pods by label

To automatically identify a Redis Pod and monitor it with the Redis integration, uncomment the following input configuration inside the https://github.com/elastic/elastic-agent/blob/main/deploy/kubernetes/elastic-agent-standalone-kubernetes.yaml[{agent} Standalone manifest]:


[source,yaml]
------------------------------------------------
- name: redis
  type: redis/metrics
  use_output: default
  meta:
    package:
      name: redis
      version: 0.3.6
  data_stream:
    namespace: default
  streams:
    - data_stream:
        dataset: redis.info
        type: metrics
      metricsets:
        - info
      hosts:
        - '${kubernetes.pod.ip}:6379'
      idle_timeout: 20s
      maxconn: 10
      network: tcp
      period: 10s
      condition: ${kubernetes.labels.app} == 'redis'
------------------------------------------------

The condition `${kubernetes.labels.app} == 'redis'` makes {agent} look for Pods with the label `app: redis` within the scope defined in its manifest.

For a list of provider fields that you can use in conditions, refer to <<kubernetes-provider>>.
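Conditions can also combine several provider fields. A minimal sketch, assuming the condition syntax supports the boolean `and` operator and that `kubernetes.namespace` is exposed by the provider:

[source,yaml]
----
# Hypothetical variant: match Redis Pods only in the "default" namespace.
condition: ${kubernetes.labels.app} == 'redis' and ${kubernetes.namespace} == 'default'
----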


The `redis` input defined in the {agent} manifest only specifies the `info` metricset. To learn about other available metricsets and their configuration settings, refer to the {metricbeat-ref}/metricbeat-module-redis.html[Redis module page].

To deploy Redis, you can apply the following example manifest:

[source,yaml]
------------------------------------------------
apiVersion: v1
kind: Pod
metadata:
  name: redis
  annotations:
    co.elastic.hints/package: redis
    co.elastic.hints/host: '${kubernetes.pod.ip}:6379'
  labels:
    k8s-app: redis
    app: redis
spec:
  containers:
    - image: redis
      imagePullPolicy: IfNotPresent
      name: redis
      ports:
        - name: redis
          containerPort: 6379
          protocol: TCP
------------------------------------------------
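For example, if you saved the manifest above as `redis.yaml` (a hypothetical file name), you can deploy it with:

["source", "sh", subs="attributes"]
------------------------------------------------
kubectl apply -f redis.yaml
------------------------------------------------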

You should now be able to see Redis data flowing into the index `metrics-redis.info-default`. Make sure the port in your Redis manifest file matches the port used in the Redis input.
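As a quick check (a sketch, assuming the `default` namespace used in the input above), you can query the data stream from the {kib} Dev Tools console:

[source,console]
------------------------------------------------
GET metrics-redis.info-default/_search?size=1
------------------------------------------------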

NOTE: Assets related to the Redis integration, such as dashboards and ingest pipelines, are not installed automatically. You need to explicitly <<install-uninstall-integration-assets,install them through {kib}>>.

To set the target host dynamically for a targeted Pod based on its labels, use a variable in the {agent} policy to return host information from the provider:

[source,yaml]
----
- data_stream:
    dataset: kubernetes.scheduler
    type: metrics
  metricsets:
    - scheduler
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  hosts:
    - 'https://${kubernetes.pod.ip}:10259'
  period: 10s
  ssl.verification_mode: none
  condition: ${kubernetes.labels.component} == 'kube-scheduler'
----

WARNING: In some "As a Service" Kubernetes implementations, like GKE, the control plane nodes, and even the Pods running on them, are not visible. In these cases, it won't be possible to use the scheduler metricset, which this example relies on. Refer to https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-kubernetes.html#_scheduler_and_controllermanager[scheduler and controller manager] for more information.
[Review discussion]

@tetianakravchenko (Contributor), Nov 18, 2022:

> In these cases, it won't be possible to use scheduler metricsets, necessary for this example.

Is the scheduler really necessary for this example, in the context of conditions-based autodiscover? It is quite confusing to have a scheduler sample in between the Redis examples.

@constanca-m (Contributor, Author):

I think it is important because readers get an idea of how to set the host dynamically, and if we don't include it they may not get there on their own. I can only speak for myself, but as someone with little experience with Kubernetes, it would have taken me a while to get there.

@tetianakravchenko (Contributor):

But we have a similar configuration for Redis:

    hosts:
      - '${kubernetes.pod.ip}:6379'

and the same can be achieved with the Redis example, no? (if multiple Redis Pods are deployed)

What I meant: it is confusing to get a scheduler example; it doesn't seem relevant to what was said before (Redis) and after (Redis). Maybe it should be in a separate block, not in == Example: Target Pods by label.

@constanca-m (Contributor, Author):

At first it was indeed a separate block, but after discussion it changed.

@tetianakravchenko (Contributor):

I still don't understand why the example with the scheduler is needed here, and what value it brings if an example with ${kubernetes.pod.ip} already exists.

As you already have enough reviews, you can merge this PR.

Following the Redis example, if you deploy another Redis Pod with a different port, it should also be detected. To check this, look, for example, at the field `service.address` under `metrics-redis.info-default`: it should display two different services.
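A minimal sketch of such a second Pod (the name `redis-2` is hypothetical; what matters is keeping the `app: redis` label so the condition above still matches):

[source,yaml]
------------------------------------------------
apiVersion: v1
kind: Pod
metadata:
  name: redis-2        # hypothetical name; any unique Pod name works
  labels:
    app: redis         # must match the input condition
spec:
  containers:
    - image: redis
      imagePullPolicy: IfNotPresent
      name: redis
      ports:
        - name: redis
          containerPort: 6379
          protocol: TCP
------------------------------------------------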

To obtain the policy generated by this configuration, connect to the {agent} container:

["source", "sh", subs="attributes"]
------------------------------------------------
kubectl exec -n kube-system --stdin --tty elastic-agent-standalone-id -- /bin/bash
------------------------------------------------

Do not forget to change `elastic-agent-standalone-id` to your {agent} Pod's name. Also, make sure that your Pod is inside the `kube-system` namespace; if not, change `-n kube-system` to the correct namespace.
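If you are unsure of the Pod's name, you can list the Pods in the namespace first:

["source", "sh", subs="attributes"]
------------------------------------------------
kubectl get pods -n kube-system
------------------------------------------------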

Inside the container, <<elastic-agent-cmd-options, inspect the output>> of the configuration file you used for the {agent}:

["source", "sh", subs="attributes"]
------------------------------------------------
elastic-agent inspect output -o default -c /etc/elastic-agent/agent.yml
------------------------------------------------

[%collapsible]
.You should now be able to see the generated policy. If you look for the `scheduler`, it will look similar to this.
====
[source,yaml]
----
- bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  hosts:
    - https://172.19.0.2:10259
  index: metrics-kubernetes.scheduler-default
  meta:
    package:
      name: kubernetes
      version: 1.9.0
  metricsets:
    - scheduler
  module: kubernetes
  name: kubernetes-node-metrics
  period: 10s
  processors:
    - add_fields:
        fields:
          labels:
            component: kube-scheduler
            tier: control-plane
          namespace: kube-system
          namespace_labels:
            kubernetes_io/metadata_name: kube-system
          namespace_uid: 03d6fd2f-7279-4db4-9a98-51e50bbe5c62
          node:
            hostname: kind-control-plane
            labels:
              beta_kubernetes_io/arch: amd64
              beta_kubernetes_io/os: linux
              kubernetes_io/arch: amd64
              kubernetes_io/hostname: kind-control-plane
              kubernetes_io/os: linux
              node-role_kubernetes_io/control-plane: ""
              node_kubernetes_io/exclude-from-external-load-balancers: ""
            name: kind-control-plane
            uid: b8d65d6b-61ed-49ef-9770-3b4f40a15a8a
          pod:
            ip: 172.19.0.2
            name: kube-scheduler-kind-control-plane
            uid: f028ad77-c82a-4f29-ba7e-2504d9b0beef
        target: kubernetes
    - add_fields:
        fields:
          cluster:
            name: kind
            url: kind-control-plane:6443
        target: orchestrator
    - add_fields:
        fields:
          dataset: kubernetes.scheduler
          namespace: default
          type: metrics
        target: data_stream
    - add_fields:
        fields:
          dataset: kubernetes.scheduler
        target: event
    - add_fields:
        fields:
          id: ""
          snapshot: false
          version: 8.3.0
        target: elastic_agent
    - add_fields:
        fields:
          id: ""
        target: agent
  ssl.verification_mode: none
----
====

== Example: Dynamic logs path

To set the log path of Pods dynamically in the configuration, use a variable in the
{agent} policy to return path information from the provider:

[source,yaml]
----
- name: container-log
  id: container-log-${kubernetes.pod.name}-${kubernetes.container.id}
  type: filestream
  use_output: default
  meta:
    package:
      name: kubernetes
      version: 1.9.0
  data_stream:
    namespace: default
  streams:
    - data_stream:
        dataset: kubernetes.container_logs
        type: logs
      prospector.scanner.symlinks: true
      parsers:
        - container: ~
      paths:
        - /var/log/containers/*${kubernetes.container.id}.log
----

[%collapsible]
.The policy generated by this configuration will look similar to this for every Pod inside the scope defined in the manifest.
====
[source,yaml]
----
- id: container-log-etcd-kind-control-plane-af311067a62fa5e4d6e5cb4d31e64c1c35d82fe399eb9429cd948d5495496819
  index: logs-kubernetes.container_logs-default
  meta:
    package:
      name: kubernetes
      version: 1.9.0
  name: container-log
  parsers:
    - container: null
  paths:
    - /var/log/containers/*af311067a62fa5e4d6e5cb4d31e64c1c35d82fe399eb9429cd948d5495496819.log
  processors:
    - add_fields:
        fields:
          id: af311067a62fa5e4d6e5cb4d31e64c1c35d82fe399eb9429cd948d5495496819
          image:
            name: registry.k8s.io/etcd:3.5.4-0
          runtime: containerd
        target: container
    - add_fields:
        fields:
          container:
            name: etcd
          labels:
            component: etcd
            tier: control-plane
          namespace: kube-system
          namespace_labels:
            kubernetes_io/metadata_name: kube-system
          namespace_uid: 03d6fd2f-7279-4db4-9a98-51e50bbe5c62
          node:
            hostname: kind-control-plane
            labels:
              beta_kubernetes_io/arch: amd64
              beta_kubernetes_io/os: linux
              kubernetes_io/arch: amd64
              kubernetes_io/hostname: kind-control-plane
              kubernetes_io/os: linux
              node-role_kubernetes_io/control-plane: ""
              node_kubernetes_io/exclude-from-external-load-balancers: ""
            name: kind-control-plane
            uid: b8d65d6b-61ed-49ef-9770-3b4f40a15a8a
          pod:
            ip: 172.19.0.2
            name: etcd-kind-control-plane
            uid: 08970fcf-bb93-487e-b856-02399d81fb29
        target: kubernetes
    - add_fields:
        fields:
          cluster:
            name: kind
            url: kind-control-plane:6443
        target: orchestrator
    - add_fields:
        fields:
          dataset: kubernetes.container_logs
          namespace: default
          type: logs
        target: data_stream
    - add_fields:
        fields:
          dataset: kubernetes.container_logs
        target: event
    - add_fields:
        fields:
          id: ""
          snapshot: false
          version: 8.3.0
        target: elastic_agent
    - add_fields:
        fields:
          id: ""
        target: agent
  prospector.scanner.symlinks: true
  type: filestream
----
====
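As a final sanity check (a sketch, assuming the standalone manifest mounts the host's `/var/log` into the {agent} container), you can confirm that log files matching the `paths` pattern exist. Replace `elastic-agent-standalone-id` with your Pod's name, as before:

["source", "sh", subs="attributes"]
------------------------------------------------
kubectl exec -n kube-system elastic-agent-standalone-id -- ls /var/log/containers/
------------------------------------------------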