Fix mini toc in level 3 topics (#2344)
* Fix mini toc

* Add more changes

* Update docs/en/ingest-management/data-streams.asciidoc

Co-authored-by: Brandon Morelli <bmorelli25@gmail.com>

dedemorton and bmorelli25 authored Nov 15, 2022
1 parent 7d99423 commit c6ec7c0
Showing 13 changed files with 80 additions and 75 deletions.
38 changes: 19 additions & 19 deletions docs/en/ingest-management/data-streams.asciidoc
@@ -21,7 +21,7 @@ makes sense to your use case or company.

[discrete]
[[data-streams-naming-scheme]]
-== Data stream naming scheme
+= Data stream naming scheme

{agent} uses the Elastic data stream naming scheme to name data streams.
The naming scheme splits data into different streams based on the following components:
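For example, assuming the scheme's standard components (data type, dataset, and namespace), a concrete stream name breaks down as follows (the dataset and namespace values here are illustrative):

[source,text]
----
<type>-<dataset>-<namespace>

logs-nginx.access-production
----

Here the type is `logs`, the dataset is `nginx.access`, and the namespace is `production`.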
@@ -78,14 +78,14 @@ All of the juicy details are available in {ref}/data-streams.html[{es} Data stre

[discrete]
[[data-streams-data-view]]
-== {data-sources-cap}
+= {data-sources-cap}

When searching your data in {kib}, you can use a {kibana-ref}/data-views.html[{data-source}]
to search across all or some of your data streams.

[discrete]
[[data-streams-index-templates]]
-== Index templates
+= Index templates

An index template is a way to tell {es} how to configure an index when it is created.
For data streams, the index template configures the stream's backing indices as they are created.
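As a sketch (the template name and index pattern here are illustrative, not part of this commit), an index template that targets a data stream might look like:

[source,console]
----
PUT _index_template/logs-example
{
  "index_patterns": ["logs-example-*"],
  "data_stream": { },
  "priority": 200,
  "template": {
    "settings": { "number_of_shards": 1 }
  }
}
----

The `data_stream` object tells {es} to create a data stream, rather than a regular index, for names matching the pattern.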
@@ -96,7 +96,7 @@ These templates are loaded when the integration is installed, and are used to co

[discrete]
[[data-streams-ilm]]
-== {ilm} ({ilm-init})
+= Index lifecycle management ({ilm-init})

Use the {ref}/index-lifecycle-management.html[index lifecycle
management] ({ilm-init}) feature in {es} to manage your {agent} data stream indices as they age.
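For illustration, a minimal {ilm-init} policy that rolls indices over while hot and deletes them later might look like the following sketch (the policy name and thresholds are illustrative):

[source,console]
----
PUT _ilm/policy/my-90-day-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb", "max_age": "30d" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": { } }
      }
    }
  }
}
----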
@@ -112,7 +112,7 @@ Want to customize your index lifecycle management? See <<data-streams-ilm-tutori

[discrete]
[[data-streams-pipelines]]
-== Ingest pipelines
+= Ingest pipelines

{agent} integration data streams ship with a default {ref}/ingest.html[ingest pipeline]
that preprocesses and enriches data before indexing.
@@ -141,7 +141,7 @@ Specifically, apply the built-in `90-days-default` {ilm-init} policy so that dat

[discrete]
[[data-streams-ilm-one]]
-=== Step 1: View data streams
+== Step 1: View data streams

The **Data Streams** view in {kib} shows you the data streams,
index templates, and {ilm-init} policies associated with a given integration.
@@ -156,7 +156,7 @@ image::images/data-stream-info.png[Data streams info]

[discrete]
[[data-streams-ilm-two]]
-=== Step 2: Create a component template
+== Step 2: Create a component template

For your changes to continue to be applied in future versions,
you must put all custom index settings into a component template.
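The tutorial walks through this in the {kib} UI; as a rough API equivalent (a sketch only, reusing the `@custom` naming convention and the `90-days-default` policy from this tutorial), the component template could be created like this:

[source,console]
----
PUT _component_template/metrics-system.cpu@custom
{
  "template": {
    "settings": {
      "index.lifecycle.name": "90-days-default"
    }
  }
}
----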
@@ -197,7 +197,7 @@ image::images/create-component-template.png[Create component template]

[discrete]
[[data-streams-ilm-three]]
-=== Step 3: Clone and modify the existing index template
+== Step 3: Clone and modify the existing index template

Now that you've created a component template,
you need to create an index template to apply the changes to the correct data stream.
@@ -223,7 +223,7 @@ image::images/create-index-template.png[Create index template]

[discrete]
[[data-streams-ilm-four]]
-=== Step 4: Roll over the data stream (optional)
+== Step 4: Roll over the data stream (optional)

To confirm that the data stream is now using the new index template and {ilm-init} policy,
you can either repeat <<data-streams-ilm-one,step one>>, or navigate to **{dev-tools-app}** and run the following:
@@ -274,7 +274,7 @@ like adding fields, obfuscating sensitive information, and more.

[discrete]
[[data-streams-pipeline-one]]
-=== Step 1: Create a custom ingest pipeline
+== Step 1: Create a custom ingest pipeline

Create a custom ingest pipeline that will be called by the default integration pipeline.
In this tutorial, we'll create a pipeline that adds a new field to our documents.
@@ -297,13 +297,13 @@ The {ref}/set-processor.html[Set processor] sets a document field and associates
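A sketch of such a pipeline created through the API (the pipeline name follows the `<type>-<dataset>@custom` pattern described later on this page; the field name and value are made up for illustration):

[source,console]
----
PUT _ingest/pipeline/metrics-system.cpu@custom
{
  "processors": [
    {
      "set": {
        "field": "my_custom_field",
        "value": "my_value"
      }
    }
  ]
}
----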

[discrete]
[[data-streams-pipeline-two]]
-=== Step 2: Apply your ingest pipeline
+== Step 2: Apply your ingest pipeline

Add a custom pipeline to an integration by calling it from the default ingest pipeline.
The custom pipeline will run after the default pipeline but before the final pipeline.

[discrete]
-==== Edit integration
+=== Edit integration

Add a custom pipeline to an integration from the **Edit integration** workflow.
The integration must already be configured and installed before a custom pipeline can be added.
@@ -315,7 +315,7 @@ To enter this workflow, do the following:
. Select **Actions** -> **Edit integration**

[discrete]
-==== Select a data stream
+=== Select a data stream

Most integrations write to multiple data streams.
You'll need to add the custom pipeline to each data stream individually.
@@ -328,7 +328,7 @@ For this tutorial, find the data stream configuration titled, **Collect metrics
This will take you to the **Create pipeline** workflow in **Stack management**.

[discrete]
-==== Add the pipeline
+=== Add the pipeline

Add the pipeline you created in step one.

@@ -341,7 +341,7 @@ Add the pipeline you created in step one.
. Click **Create pipeline** to return to the **Edit integration** page.

[discrete]
-==== Roll over the data stream (optional)
+=== Roll over the data stream (optional)

For pipeline changes to take effect immediately, you must roll over the data stream.
If you do not, the changes will not take effect until the next scheduled roll over.
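In **{dev-tools-app}**, the roll over itself is a single request; for example (the data stream name here is illustrative):

[source,console]
----
POST metrics-system.cpu-default/_rollover
----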
@@ -356,13 +356,13 @@ The name follows the pattern `<type>-<dataset>@custom`:
* Custom ingest pipeline designation: `@custom`

[discrete]
-==== Repeat
+=== Repeat

Add the custom ingest pipeline to any other data streams you wish to update.

[discrete]
[[data-streams-pipeline-three]]
-=== Step 3: Test the ingest pipeline (optional)
+== Step 3: Test the ingest pipeline (optional)

Allow time for new data to be ingested before testing your pipeline.
In a new window, open {kib} and navigate to **{kib} Dev tools**.
@@ -390,7 +390,7 @@ If your custom pipeline is working correctly, this query will return at least on
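A sketch of such a query, assuming the custom pipeline sets a field named `my_custom_field` (both the index pattern and the field name are illustrative):

[source,console]
----
GET metrics-*/_search
{
  "query": {
    "exists": { "field": "my_custom_field" }
  }
}
----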

[discrete]
[[data-streams-pipeline-four]]
-=== Step 4: Add custom mappings
+== Step 4: Add custom mappings

Now that a new field is being set in your {es} documents, you'll want to assign a new mapping for that field.
Use the `@custom` component template to apply custom mappings to an integration data stream.
@@ -412,7 +412,7 @@ In the **Edit integration** workflow, do the following:

[discrete]
[[data-streams-pipeline-five]]
-=== Step 5: Test the custom mappings (optional)
+== Step 5: Test the custom mappings (optional)

Allow time for new data to be ingested before testing your mappings.
In a new window, open {kib} and navigate to **{kib} Dev tools**.
(next changed file)
@@ -15,7 +15,7 @@ Variables on this page are grouped by action type:

[discrete]
[[env-common-vars]]
-== Common variables
+= Common variables

// forces a unique ID so that settings can be included multiple times on the same page
:type: common
@@ -52,7 +52,7 @@ include::shared-env.asciidoc[tag=kibana-ca]

[discrete]
[[env-prepare-kibana-for-fleet]]
-== Prepare {kib} for {fleet}
+= Prepare {kib} for {fleet}

// forces a unique ID so that settings can be included multiple times on the same page
:type: fleet-kib
@@ -77,7 +77,7 @@ include::shared-env.asciidoc[tag=kibana-fleet-ca]

[discrete]
[[env-bootstrap-fleet-server]]
-== Bootstrap {fleet-server}
+= Bootstrap {fleet-server}

// forces a unique ID so that settings can be included multiple times on the same page
:type: bootstrap-fleet
@@ -115,7 +115,7 @@ include::shared-env.asciidoc[tag=fleet-server-insecure-http]

[discrete]
[[env-enroll-agent]]
-== Enroll {agent}
+= Enroll {agent}

// forces a unique ID so that settings can be included multiple times on the same page
:type: enroll
(next changed file)
@@ -5,7 +5,7 @@ Provides inventory information from Kubernetes.


[discrete]
-== Provider configuration
+= Provider configuration

[source,yaml]
----
@@ -87,7 +87,7 @@ Example:


[discrete]
-== Provider for Pod resources
+= Provider for Pod resources

The available keys are:

@@ -192,7 +192,7 @@ These are the fields available within config templating. The `kubernetes.*` fiel
Note that not all of these fields are available by default and special configuration options
are needed in order to include them.

-Fox example, if the Kubernetes provider provides the following inventory:
+For example, if the Kubernetes provider provides the following inventory:

[source,json]
----
@@ -223,7 +223,7 @@ Fox example, if the Kubernetes provider provides the following inventory:
In addition, Kubernetes metadata is added to each event by default.

[discrete]
-== Provider for Node resources
+= Provider for Node resources

[source,yaml]
----
@@ -268,7 +268,7 @@ The available keys are:
|===

[discrete]
-== Provider for Service resources
+= Provider for Service resources

[source,yaml]
----
(next changed file)
@@ -43,7 +43,7 @@ The available key is:
|===

[discrete]
-== Enabling configurations only when on leadership
+= Enabling configurations only when on leadership

Use conditions based on the `kubernetes_leaderelection.leader` key to leverage the leaderelection provider and enable specific inputs only when the {agent} holds the leadership lock.
The example below enables the `state_container`
(next changed file)
@@ -1,6 +1,6 @@
[discrete]
[[conditions]]
-== Conditions
+= Conditions

A condition is a boolean expression that you can specify in your agent policy
to control whether a configuration is applied to the running {agent}. You can
@@ -49,7 +49,7 @@ inputs:

[discrete]
[[condition-syntax]]
-=== Condition syntax
+== Condition syntax

The conditions supported by {agent} are based on {ref}/eql-syntax.html[EQL]'s
boolean syntax, but add support for variables from providers and functions to
@@ -81,7 +81,7 @@ manipulate the values.
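As a sketch, a standalone {agent} policy input gated on a provider value might look like this (the input definition and the `kubernetes.labels` value are illustrative, not from this commit):

[source,yaml]
----
inputs:
  - type: filestream
    id: nginx-logs
    paths:
      - /var/log/nginx/*.log
    condition: ${kubernetes.labels.app} == 'nginx'
----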

[discrete]
[[condition-examples]]
-=== Condition examples
+== Condition examples

Run only when a specific label is included.

(next changed file)
@@ -15,7 +15,7 @@ This command allows environment variables to configure all properties, and runs

[discrete]
[[agent-in-container-pull]]
-== Pull the image
+= Pull the image

There are two images for {agent}: *elastic-agent* and *elastic-agent-complete*. The *elastic-agent* image contains all the binaries for running {beats}, while the *elastic-agent-complete* image contains these binaries plus additional dependencies to run browser monitors through Elastic Synthetics. Refer to {observability-guide}/uptime-set-up.html[Synthetic monitoring via {agent} and {fleet}] for more information.

@@ -35,7 +35,7 @@ docker pull docker.elastic.co/beats/elastic-agent-complete:{version}

[discrete]
[[agent-in-container-command]]
-== {agent} container command
+= {agent} container command

The {agent} container command offers a wide variety of options.
To see the full list, run:
@@ -47,7 +47,7 @@ docker run --rm docker.elastic.co/beats/elastic-agent:{version} elastic-agent co

[discrete]
[[agent-in-container-cloud]]
-== {ecloud} example
+= {ecloud} example

The easiest way to get started is by using an Elastic cluster running on {ecloud}.

@@ -74,7 +74,7 @@ Refer to <<agent-environment-variables>> for all available options.

[discrete]
[[agent-in-container-self]]
-== Self-managed example
+= Self-managed example

If you're running a self-managed cluster and want to run your own {fleet-server}, run the following command, which will spin up {agent} and {fleet-server} in a container:

@@ -102,7 +102,7 @@ NOTE: Replace `docker.elastic.co/beats/elastic-agent` with `docker.elastic.co/be

[discrete]
[[agent-in-container-docker]]
-== Docker compose example
+= Docker compose example

{agent} can be run in docker-compose.
The example below shows how to enroll an {agent}:
@@ -138,7 +138,7 @@ Refer to <<agent-environment-variables>> for all available options.
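A minimal compose sketch for enrolling {agent} into an existing {fleet-server} (the service name, URL, and token placeholder are illustrative; the environment variables are documented in <<agent-environment-variables>>):

[source,yaml]
----
version: "2.4"
services:
  elastic-agent:
    image: docker.elastic.co/beats/elastic-agent:{version}
    environment:
      - FLEET_ENROLL=1
      - FLEET_URL=https://fleet-server:8220
      - FLEET_ENROLLMENT_TOKEN=<enrollment-token>
----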

[discrete]
[[agent-in-container-docker-logs]]
-== Logs
+= Logs

Because a container supports only a single version of {agent},
logs and state are stored a bit differently than when {agent} runs outside of a container.
@@ -158,7 +158,7 @@ Check the fleet-server subprocess logs for more information.

[discrete]
[[agent-in-container-debug]]
-== Debugging
+= Debugging

A monitoring endpoint can be enabled to expose resource usage and event processing data.
The endpoint is compatible with {agent}s running in both {fleet} mode and Standalone mode.
(next changed file)
@@ -1,6 +1,6 @@
[discrete]
[[variable-substitution]]
-== Variable substitution
+= Variable substitution

The syntax for variable substitution is `${var}`, where `var` is the name of a
variable defined by a provider. A _provider_ defines key/value pairs that are
@@ -68,7 +68,7 @@ inputs:
----
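For example, using the host provider's `host.name` key, a variable can parameterize an input (the input shown is a sketch, not from this commit):

[source,yaml]
----
inputs:
  - type: filestream
    id: logs-${host.name}
    paths:
      - /var/log/${host.name}/app.log
----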

[discrete]
-=== Alternative variables and constants
+= Alternative variables and constants

Variable substitution can also define alternative variables or a constant.
