Commit

Merge branch 'master' into elastic#62463

elasticmachine authored Apr 14, 2020
2 parents ece1c3a + 1f732ad commit 17e6fda
Showing 438 changed files with 6,912 additions and 4,966 deletions.
1 change: 1 addition & 0 deletions .i18nrc.json
@@ -24,6 +24,7 @@
"src/legacy/core_plugins/management",
"src/plugins/management"
],
"maps_legacy": "src/plugins/maps_legacy",
"indexPatternManagement": "src/plugins/index_pattern_management",
"advancedSettings": "src/plugins/advanced_settings",
"kibana_legacy": "src/plugins/kibana_legacy",
Binary file added docs/images/tutorial-ilm-custom-policy.png
Binary file added docs/images/tutorial-ilm-delete-rollover.png
@@ -1,23 +1,179 @@
[role="xpack"]

[[example-using-index-lifecycle-policy]]
=== Example of using an index lifecycle policy
=== Tutorial: Use {ilm-init} to manage {filebeat} time-based indices

With {ilm} ({ilm-init}), you can create policies that perform actions automatically
on indices as they age and grow. {ilm-init} policies help you to manage
performance, resilience, and retention of your data during its lifecycle. This tutorial shows
you how to use {kib}’s *Index Lifecycle Policies* to modify and create {ilm-init}
policies. You can learn more about all of the actions, benefits, and lifecycle
phases in the {ref}/overview-index-lifecycle-management.html[{ilm-init} overview].


[discrete]
[[example-using-index-lifecycle-policy-scenario]]
==== Scenario

You’re tasked with sending syslog files to an {es} cluster. This
log data has the following data retention guidelines:

* Keep logs on hot data nodes for 30 days
* Roll over to a new index if the size reaches 50GB
* After 30 days:
** Move the logs to warm data nodes
** Set {ref}/glossary.html#glossary-replica-shard[replica shards] to 1
** {ref}/indices-forcemerge.html[Force merge] multiple index segments to free up the space used by deleted documents
* Delete logs after 90 days
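
For reference, these guidelines map onto a single {ilm-init} policy. The following is a
rough sketch of the equivalent policy body (the tutorial itself creates and edits policies
through the {kib} UI; the policy could be created with a request such as
`PUT _ilm/policy/my-logs-policy`, where the policy name is a placeholder and the `data: warm`
allocation requirement assumes the node attributes configured in the prerequisites below):

[source,json]
--------------------------------------------------------------------------------
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50GB", "max_age": "30d" }
        }
      },
      "warm": {
        "min_age": "30d",
        "actions": {
          "allocate": {
            "require": { "data": "warm" },
            "number_of_replicas": 1
          },
          "forcemerge": { "max_num_segments": 1 }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}
--------------------------------------------------------------------------------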


[discrete]
[[example-using-index-lifecycle-policy-prerequisites]]
==== Prerequisites

To complete this tutorial, you'll need:

* An {es} cluster with hot and warm nodes configured for shard allocation
awareness. If you’re using {cloud}/ec-getting-started-templates-hot-warm.html[{ess}],
choose the hot-warm architecture deployment template.

+
For a self-managed cluster, add node attributes as described for {ref}/shard-allocation-filtering.html[shard allocation filtering]
to label data nodes as hot or warm. This step is required to migrate shards between
nodes configured with specific hardware for the hot or warm phases.
+
For example, you can set this in your `elasticsearch.yml` for each data node:
+
[source,yaml]
--------------------------------------------------------------------------------
# Use "hot" on hot data nodes and "warm" on warm data nodes.
node.attr.data: "warm"
--------------------------------------------------------------------------------

* A server with {filebeat} installed and configured to send logs to the `elasticsearch`
output as described in {filebeat-ref}/filebeat-getting-started.html[Getting Started with {filebeat}].
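
A minimal `elasticsearch` output section in `filebeat.yml` might look like the following
sketch (the host URL is a placeholder; authentication and TLS settings are omitted):

[source,yaml]
--------------------------------------------------------------------------------
output.elasticsearch:
  # Placeholder endpoint; point this at your own cluster.
  hosts: ["https://my-es-cluster.example.com:9200"]
--------------------------------------------------------------------------------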

[discrete]
[[example-using-index-lifecycle-policy-view-fb-ilm-policy]]
==== View the {filebeat} {ilm-init} policy

{filebeat} includes a default {ilm-init} policy that enables rollover. {ilm-init}
is enabled automatically if you’re using the default `filebeat.yml` and index template.
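
The relevant `filebeat.yml` settings look roughly like the sketch below. These are the
defaults implied above, shown explicitly; the exact policy and alias names can vary
between {filebeat} versions:

[source,yaml]
--------------------------------------------------------------------------------
# ILM is used automatically when the output is Elasticsearch and the cluster supports it.
setup.ilm.enabled: auto
# Policy and rollover alias that Filebeat sets up (values shown are assumptions).
setup.ilm.policy_name: "filebeat"
setup.ilm.rollover_alias: "filebeat-%{[agent.version]}"
--------------------------------------------------------------------------------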

To view the default policy in {kib}, go to *Management > Index Lifecycle Policies*,
search for _filebeat_, and choose the _filebeat-version_ policy.

This policy initiates the rollover action when the index size reaches 50GB or
becomes 30 days old.

[role="screenshot"]
image::images/tutorial-ilm-hotphaserollover-default.png["Default policy"]


[float]
==== Modify the policy

The default policy is enough to prevent the creation of many tiny daily indices.
You can modify the policy to meet more complex requirements.

. Activate the warm phase.

+
. Set either of the following options to control when the index moves to the warm phase:

** Provide a value for *Timing for warm phase*. Setting this to *15* keeps the
indices on hot nodes for a range of 15-45 days, depending on when the initial
rollover occurred.

** Enable *Move to warm phase on rollover*. The index might move to the warm phase
more quickly than intended if it reaches the *Maximum index size* before the
*Maximum age*.

. In the *Select a node attribute to control shard allocation* dropdown, select
*data:warm(2)* to migrate shards to warm data nodes.

. Change *Number of replicas* to *1*.

. Enable *Force merge data* and set *Number of segments* to *1*.
+
NOTE: When rollover is enabled in the hot phase, action timing in the other phases
is based on the rollover date.

+
[role="screenshot"]
image::images/tutorial-ilm-modify-default-warm-phase-rollover.png["Modify to add warm phase"]

. Activate the delete phase and set *Timing for delete phase* to *90* days.
+
[role="screenshot"]
image::images/tutorial-ilm-delete-rollover.png["Add a delete phase"]

[float]
==== Create a custom policy

If meeting a specific retention time period is most important, you can create a
custom policy. For this option, you will use {filebeat} daily indices without
rollover.

. Create a custom policy in {kib}: go to *Management > Index Lifecycle Policies >
Create Policy*.

. Activate the warm phase and configure it as follows:
+
|===
|*Setting* |*Value*

|Timing for warm phase
|30 days from index creation

|Node attribute
|`data:warm`

|Number of replicas
|1

|Force merge data
|enable

|Number of segments
|1
|===

+
[role="screenshot"]
image::images/tutorial-ilm-custom-policy.png["Modify the custom policy to add a warm phase"]


A common use case for managing index lifecycle policies is when you’re using
{beats-ref}/beats-reference.html[Beats] to continually send time-series data,
such as metrics and log data, to {es}. When you create the Beats packages, an
index template is installed. The template includes a default policy to apply
when new indices are created.
+
. Activate the delete phase and set the timing.
+
|===
|*Setting* |*Value*
|Timing for delete phase
|90
|===

You can edit the policy in {kib}'s *Index Lifecycle Policies*. For example, you might:
+
[role="screenshot"]
image::images/tutorial-ilm-delete-phase-creation.png["Delete phase"]

* Roll over the index when it reaches 50 GB in size or is 30 days old. These
settings are the default for the Beats lifecycle policy. This avoids
having thousands of tiny indices. When a rollover occurs, a new “hot” index is
created and added to the index alias.
. Configure the index to use the new policy in *{kib} > Management > Index Lifecycle
Policies*.

* Move the index into the warm phase, shrink the index down to a single shard,
and force merge to a single segment.
.. Find your {ilm-init} policy.
.. Click the *Actions* link next to your policy name.
.. Choose *Add policy to index template*.
.. Select your {filebeat} index template name from the *Index template* list. For example, `filebeat-7.5.x`.
.. Click *Add Policy* to save the changes.

* After 60 days, move the index into the cold phase and onto less expensive hardware.
+
NOTE: If you initially used the default {filebeat} {ilm-init} policy, you will
see a notice that the template already has a policy associated with it. Confirm
that you want to overwrite that configuration.

* Delete the index after 90 days.
+
TIP: When you change the policy associated with the index template, the active
index will continue to use the policy it was associated with at index creation
unless you manually update it. The next new index will use the updated policy.
For more reasons why your {ilm-init} policy changes might be delayed, see
{ref}/update-lifecycle-policy.html#update-lifecycle-policy[Update Lifecycle Policy].
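
For reference, attaching a policy to an index template (as in the steps above) amounts to
setting `index.lifecycle.name` (and, for rollover-based setups, `index.lifecycle.rollover_alias`)
in the template's index settings. A sketch of such a template body follows; the index pattern,
alias, and policy name are placeholders:

[source,json]
--------------------------------------------------------------------------------
{
  "index_patterns": ["filebeat-*"],
  "settings": {
    "index.lifecycle.name": "my-logs-policy",
    "index.lifecycle.rollover_alias": "filebeat"
  }
}
--------------------------------------------------------------------------------
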
61 changes: 61 additions & 0 deletions src/core/MIGRATION.md
@@ -24,6 +24,7 @@
- [7. Switch to new platform services](#7-switch-to-new-platform-services)
- [8. Migrate to the new plugin system](#8-migrate-to-the-new-plugin-system)
- [Bonus: Tips for complex migration scenarios](#bonus-tips-for-complex-migration-scenarios)
- [Keep Kibana fast](#keep-kibana-fast)
- [Frequently asked questions](#frequently-asked-questions)
- [Is migrating a plugin an all-or-nothing thing?](#is-migrating-a-plugin-an-all-or-nothing-thing)
- [Do plugins need to be converted to TypeScript?](#do-plugins-need-to-be-converted-to-typescript)
@@ -933,6 +934,66 @@ For a few plugins, some of these steps (such as angular removal) could be a mont

One convention that is useful for this is creating a dedicated `public/np_ready` directory to house the code that is ready to migrate, and gradually move more and more code into it until the rest of your plugin is essentially empty. At that point, you'll be able to copy your `index.ts`, `plugin.ts`, and the contents of `./np_ready` over into your plugin in the new platform, leaving your legacy shim behind. This carries the added benefit of providing a way for us to introduce helpful tooling in the future, such as [custom eslint rules](https://github.com/elastic/kibana/pull/40537), which could be run against that specific directory to ensure your code is ready to migrate.

## Keep Kibana fast
**tl;dr**: Load as much code lazily as possible.

Everyone loves snappy applications with a responsive UI and hates spinners. Users deserve the best experience whether they run Kibana locally or in the cloud, regardless of their hardware and environment.

There are two main aspects to the perceived speed of an application: loading time and responsiveness to user actions.

The new platform loads and bootstraps **all** plugins whenever a user lands on any page. This means that every new application affects overall **loading performance**, because plugin code is loaded **eagerly** to initialize the plugin and provide its API to dependent plugins.

However, it is usually not necessary to load and initialize the whole plugin at once. The plugin can load the code that backs its API during Kibana bootstrap, but load UI-related code lazily on demand, when an application page or management section is mounted.

Always prefer to load UI root components lazily when possible (such as in mount handlers). Even if their size seems negligible, they likely depend on heavy-weight libraries that will then also be excluded from the initial plugin bundle, reducing its size by a significant amount.

```typescript
import { Plugin, CoreSetup, AppMountParameters } from 'src/core/public';

export interface MyPluginSetup {
  doSomething(): void;
}

// `SetupDeps` is this plugin's own setup-dependencies interface (not shown here).
export class MyPlugin implements Plugin<MyPluginSetup> {
  setup(core: CoreSetup, plugins: SetupDeps) {
    core.application.register({
      id: 'app',
      title: 'My app',
      async mount(params: AppMountParameters) {
        // Load the application code only when the app is actually mounted.
        const { mountApp } = await import('./app/mount_app');
        return mountApp(await core.getStartServices(), params);
      },
    });
    plugins.management.sections.getSection('another').registerApp({
      id: 'app',
      title: 'My app',
      order: 1,
      async mount(params) {
        // The management section UI is also loaded lazily, on demand.
        const { mountManagementSection } = await import('./app/mount_management_section');
        return mountManagementSection(core, params);
      },
    });
    return {
      doSomething() {},
    };
  }
}
```

#### How do I understand how big my plugin's bundle is?

New platform plugins are distributed as package artifacts pre-built with `@kbn/optimizer`. This allows us to avoid shipping the optimizer in the distributable version of Kibana.

Every NP plugin artifact contains all plugin dependencies required to run the plugin, except for some stateful dependencies shared across plugin bundles via `@kbn/ui-shared-deps`.

This means that NP plugin artifacts tend to be bigger than their legacy platform versions.

To check the current size of your plugin artifact, run `@kbn/optimizer`:
```bash
node scripts/build_kibana_platform_plugins.js --dist --no-examples
```
and check the output in the `target` sub-folder of your plugin folder:
```bash
ls -lh plugins/my_plugin/target/public/
# output
# an async chunk loaded on demand
... 262K 0.plugin.js
# eagerly loaded chunk
... 50K my_plugin.plugin.js
```
You should see at least one JS bundle, `my_plugin.plugin.js`. This is the only artifact loaded by the platform during bootstrap in the browser. The rule of thumb is to keep its size as small as possible.

Other, lazily loaded parts of your plugin are present in the same folder as separate chunks named `{number}.plugin.js`.

If you want to investigate what your plugin bundle consists of, run `@kbn/optimizer` with the `--profile` flag to generate a [webpack stats file](https://webpack.js.org/api/stats/).
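
For example (a sketch, assuming `--profile` can be combined with the flags shown earlier):

```bash
node scripts/build_kibana_platform_plugins.js --dist --no-examples --profile
```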
Many OSS tools allow you to analyze the generated stats file:
- [an official tool](http://webpack.github.io/analyse/#modules) from webpack authors
- [webpack-visualizer](https://chrisbateman.github.io/webpack-visualizer/)

## Frequently asked questions

### Is migrating a plugin an all-or-nothing thing?
6 changes: 3 additions & 3 deletions src/core/public/mocks.ts
@@ -48,6 +48,7 @@ export { overlayServiceMock } from './overlays/overlay_service.mock';
export { uiSettingsServiceMock } from './ui_settings/ui_settings_service.mock';
export { savedObjectsServiceMock } from './saved_objects/saved_objects_service.mock';
export { scopedHistoryMock } from './application/scoped_history.mock';
export { applicationServiceMock } from './application/application_service.mock';

function createCoreSetupMock({
basePath = '',
@@ -62,9 +63,8 @@
application: applicationServiceMock.createSetupContract(),
context: contextServiceMock.createSetupContract(),
fatalErrors: fatalErrorsServiceMock.createSetupContract(),
getStartServices: jest.fn<Promise<[ReturnType<typeof createCoreStartMock>, object, any]>, []>(
() =>
Promise.resolve([createCoreStartMock({ basePath }), pluginStartDeps, pluginStartContract])
getStartServices: jest.fn<Promise<[ReturnType<typeof createCoreStartMock>, any, any]>, []>(() =>
Promise.resolve([createCoreStartMock({ basePath }), pluginStartDeps, pluginStartContract])
),
http: httpServiceMock.createSetupContract({ basePath }),
notifications: notificationServiceMock.createSetupContract(),
1 change: 0 additions & 1 deletion src/core/server/saved_objects/service/index.ts
@@ -36,7 +36,6 @@ export interface SavedObjectsLegacyService {
getScopedSavedObjectsClient: SavedObjectsClientProvider['getClient'];
SavedObjectsClient: typeof SavedObjectsClient;
types: string[];
importAndExportableTypes: string[];
schema: SavedObjectsSchema;
getSavedObjectsRepository(...rest: any[]): any;
importExport: {
2 changes: 0 additions & 2 deletions src/core/server/server.api.md
@@ -2084,8 +2084,6 @@ export interface SavedObjectsLegacyService {
// (undocumented)
getScopedSavedObjectsClient: SavedObjectsClientProvider['getClient'];
// (undocumented)
importAndExportableTypes: string[];
// (undocumented)
importExport: {
objectLimit: number;
importSavedObjects(options: SavedObjectsImportOptions): Promise<SavedObjectsImportResponse>;
3 changes: 0 additions & 3 deletions src/legacy/core_plugins/kibana/inject_vars.js
@@ -20,10 +20,7 @@
export function injectVars(server) {
const serverConfig = server.config();

const { importAndExportableTypes } = server.savedObjects;

return {
importAndExportableTypes,
autocompleteTerminateAfter: serverConfig.get('kibana.autocompleteTerminateAfter'),
autocompleteTimeout: serverConfig.get('kibana.autocompleteTimeout'),
};
2 changes: 1 addition & 1 deletion src/legacy/core_plugins/kibana/public/index.scss
@@ -17,7 +17,7 @@
@import './visualize/index';
// Has to come after visualize because of some
// bad cascading in the Editor layout
@import 'src/legacy/ui/public/vis/index';
@import '../../../../plugins/maps_legacy/public/index';

// Home styles
@import '../../../../plugins/home/public/application/index';
2 changes: 1 addition & 1 deletion src/legacy/core_plugins/kibana/public/index.ts
@@ -20,4 +20,4 @@
export {
ProcessedImportResponse,
processImportResponse,
} from './management/sections/objects/lib/process_import_response';
} from '../../../../plugins/saved_objects_management/public/lib';
