[core.logging] Add ops logs to the KP logging system (#88070)
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
TinaHeiligers and kibanamachine authored Jan 27, 2021
1 parent 8250b07 commit c6cfdee
Showing 12 changed files with 482 additions and 6 deletions.
5 changes: 4 additions & 1 deletion packages/kbn-legacy-logging/src/get_logging_config.ts
@@ -28,7 +28,10 @@ export function getLoggingConfiguration(config: LegacyLoggingConfig, opsInterval
} else if (config.verbose) {
_.defaults(events, {
log: '*',
ops: '*',
// To avoid duplicate logs, we explicitly disable this in verbose
// mode as it is already provided by the new logging config under
// the `metrics.ops` context.
ops: '!',
request: '*',
response: '*',
error: '*',
6 changes: 4 additions & 2 deletions src/core/README.md
@@ -28,8 +28,10 @@ Even though the new validation system provided by the `core` is also based on Jo
rules tailored to our needs (e.g. `byteSize`, `duration` etc.). That means that config values that were previously accepted
by the "legacy" Kibana may be rejected by the `core` now.

Even though `core` has its own logging system it doesn't output log records directly (e.g. to file or terminal), but instead
forward them to the "legacy" Kibana so that they look the same as the rest of the log records throughout Kibana.
### Logging
`core` has its own [logging system](./server/logging/README.md) and will output log records directly (e.g. to file or terminal) when configured. When no
specific configuration is provided, logs are forwarded to the "legacy" Kibana so that they look the same as the rest of the
log records throughout Kibana.

## Core API Review
To provide a stable API for plugin developers, it is important that the Core Public and Server API's are stable and
20 changes: 20 additions & 0 deletions src/core/server/config/deprecation/core_deprecations.test.ts
@@ -236,4 +236,24 @@ describe('core deprecations', () => {
).toEqual([`worker-src blob:`]);
});
});

describe('logging.events.ops', () => {
it('warns when ops events are used', () => {
const { messages } = applyCoreDeprecations({
logging: { events: { ops: '*' } },
});
expect(messages).toMatchInlineSnapshot(`
Array [
"\\"logging.events.ops\\" has been deprecated and will be removed in 8.0. To access ops data moving forward, please enable debug logs for the \\"metrics.ops\\" context in your logging configuration.",
]
`);
});

it('does not warn when other events are configured', () => {
const { messages } = applyCoreDeprecations({
logging: { events: { log: '*' } },
});
expect(messages).toEqual([]);
});
});
});
12 changes: 12 additions & 0 deletions src/core/server/config/deprecation/core_deprecations.ts
@@ -103,6 +103,17 @@ const mapManifestServiceUrlDeprecation: ConfigDeprecation = (settings, fromPath,
return settings;
};

const opsLoggingEventDeprecation: ConfigDeprecation = (settings, fromPath, log) => {
if (has(settings, 'logging.events.ops')) {
log(
'"logging.events.ops" has been deprecated and will be removed ' +
'in 8.0. To access ops data moving forward, please enable debug logs for the ' +
'"metrics.ops" context in your logging configuration.'
);
}
return settings;
};

export const coreDeprecationProvider: ConfigDeprecationProvider = ({ rename, unusedFromRoot }) => [
unusedFromRoot('savedObjects.indexCheckTimeout'),
unusedFromRoot('server.xsrf.token'),
@@ -137,4 +148,5 @@ export const coreDeprecationProvider: ConfigDeprecationProvider = ({ rename, unu
rewriteBasePathDeprecation,
cspRulesDeprecation,
mapManifestServiceUrlDeprecation,
opsLoggingEventDeprecation,
];
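
The deprecation above only logs a warning; operators still migrate their config themselves. A before/after sketch of that migration, based on the deprecation message and the logger config shown in the logging README below (the appender name is illustrative):

```yaml
# Before (deprecated; warns on startup and will be removed in 8.0):
logging.events.ops: '*'

# After: enable debug logging for the `metrics.ops` context instead.
logging:
  loggers:
    - context: metrics.ops
      level: debug
      appenders: [console]
```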
24 changes: 23 additions & 1 deletion src/core/server/logging/README.md
@@ -312,6 +312,9 @@ logging:
- context: telemetry
level: all
appenders: [json-file-appender]
- context: metrics.ops
level: debug
appenders: [console]
```

Here is what we get with the config above:
@@ -324,6 +327,7 @@
| server | console, file | fatal |
| optimize | console | error |
| telemetry | json-file-appender | all |
| metrics.ops | console | debug |


The `root` logger has a dedicated configuration node since this context is special and should always exist. By
@@ -341,7 +345,25 @@ Or disable logging entirely with `off`:
```yaml
logging.root.level: off
```
### Dedicated loggers

The `metrics.ops` logger is configured with `debug` level and will automatically output sample system and process information at a regular interval.
The metrics that are logged are a subset of the data collected and are formatted in the log message as follows:

| Ops formatted log property | Location in metrics service | Log units |
| :------------------------- | :-------------------------- | :-------------------------- |
| memory | process.memory.heap.used_in_bytes | [depends on the value](http://numeraljs.com/#format), typically MB or GB |
| uptime | process.uptime_in_millis | HH:mm:ss |
| load | os.load | [ "load for the last 1 min" "load for the last 5 min" "load for the last 15 min"] |
| delay | process.event_loop_delay | ms |
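
Assembled from the properties above, a single ops log message looks like the following (values are illustrative, matching this PR's unit-test fixtures):

```
memory: 100.0B uptime: 0:00:01 load: [10.00,20.00,30.00] delay: 50.000
```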

The log interval is the same as the interval at which system and process information is refreshed and is configurable under `ops.interval`:

```yaml
ops.interval: 5000
```

The interval defaults to 5000ms, with a minimum allowed value of 100ms.

## Usage

Usage is very straightforward: one should just get a logger for a specific context and use it to log messages with
@@ -478,4 +500,4 @@ TBD
| meta | separate property `"meta": {"to": "v8"}` | merged in log record `{... "to": "v8"}` |
| pid | `pid: 12345` | `pid: 12345` |
| type | N/A | `type: log` |
| error | `{ message, name, stack }` | `{ message, name, stack, code, signal }` |
90 changes: 90 additions & 0 deletions src/core/server/logging/ecs.ts
@@ -0,0 +1,90 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* and the Server Side Public License, v 1; you may not use this file except in
* compliance with, at your election, the Elastic License or the Server Side
* Public License, v 1.
*/

/**
* Typings for some ECS fields which core uses internally.
* These are not a complete set of ECS typings and should not
* be used externally; the only types included here are ones
* currently used in core.
*
* @internal
*/

export interface EcsOpsMetricsEvent {
/**
* These typings were written as of ECS 1.7.0.
* Don't change this value without checking the rest
* of the types to conform to that ECS version.
*
* https://www.elastic.co/guide/en/ecs/1.7/index.html
*/
ecs: { version: '1.7.0' };

// base fields
['@timestamp']?: string;
labels?: Record<string, unknown>;
message?: string;
tags?: string[];
// other fields
process?: EcsProcessField;
event?: EcsEventField;
}

interface EcsProcessField {
uptime?: number;
}

export interface EcsEventField {
kind?: EcsEventKind;
category?: EcsEventCategory[];
type?: EcsEventType;
}

export enum EcsEventKind {
ALERT = 'alert',
EVENT = 'event',
METRIC = 'metric',
STATE = 'state',
PIPELINE_ERROR = 'pipeline_error',
SIGNAL = 'signal',
}

export enum EcsEventCategory {
AUTHENTICATION = 'authentication',
CONFIGURATION = 'configuration',
DATABASE = 'database',
DRIVER = 'driver',
FILE = 'file',
HOST = 'host',
IAM = 'iam',
INTRUSION_DETECTION = 'intrusion_detection',
MALWARE = 'malware',
NETWORK = 'network',
PACKAGE = 'package',
PROCESS = 'process',
WEB = 'web',
}

export enum EcsEventType {
ACCESS = 'access',
ADMIN = 'admin',
ALLOWED = 'allowed',
CHANGE = 'change',
CONNECTION = 'connection',
CREATION = 'creation',
DELETION = 'deletion',
DENIED = 'denied',
END = 'end',
ERROR = 'error',
GROUP = 'group',
INFO = 'info',
INSTALLATION = 'installation',
PROTOCOL = 'protocol',
START = 'start',
USER = 'user',
}
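
As an illustrative sketch (not part of the PR) of how these types compose, the snippet below inlines minimal copies of the interface and enums and builds an event shaped like the ones the ops logger emits; the event instance itself is hypothetical, with values mirroring this PR's test fixtures:

```typescript
// Self-contained sketch: minimal inlined copies of the ECS types above.
enum EcsEventKind {
  METRIC = 'metric',
}

enum EcsEventCategory {
  HOST = 'host',
  PROCESS = 'process',
}

enum EcsEventType {
  INFO = 'info',
}

interface EcsOpsMetricsEvent {
  ecs: { version: '1.7.0' };
  message?: string;
  process?: { uptime?: number };
  event?: {
    kind?: EcsEventKind;
    category?: EcsEventCategory[];
    type?: EcsEventType;
  };
}

// A hypothetical ops metrics event for the `metrics.ops` context.
const opsEvent: EcsOpsMetricsEvent = {
  ecs: { version: '1.7.0' },
  message: 'memory: 100.0B uptime: 0:00:01 load: [10.00,20.00,30.00] delay: 50.000',
  process: { uptime: 1 }, // seconds, per ECS process.uptime
  event: {
    kind: EcsEventKind.METRIC,
    category: [EcsEventCategory.PROCESS, EcsEventCategory.HOST],
    type: EcsEventType.INFO,
  },
};

console.log(opsEvent.ecs.version);
```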
7 changes: 7 additions & 0 deletions src/core/server/logging/index.ts
@@ -17,6 +17,13 @@ export {
LogLevelId,
LogLevel,
} from '@kbn/logging';
export {
EcsOpsMetricsEvent,
EcsEventField,
EcsEventKind,
EcsEventCategory,
EcsEventType,
} from './ecs';
export {
config,
LoggingConfigType,
132 changes: 132 additions & 0 deletions src/core/server/metrics/logging/get_ops_metrics_log.test.ts
@@ -0,0 +1,132 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* and the Server Side Public License, v 1; you may not use this file except in
* compliance with, at your election, the Elastic License or the Server Side
* Public License, v 1.
*/

import { OpsMetrics } from '..';
import { getEcsOpsMetricsLog } from './get_ops_metrics_log';

function createBaseOpsMetrics(): OpsMetrics {
return {
collected_at: new Date('2020-01-01 01:00:00'),
process: {
memory: {
heap: { total_in_bytes: 1, used_in_bytes: 1, size_limit: 1 },
resident_set_size_in_bytes: 1,
},
event_loop_delay: 1,
pid: 1,
uptime_in_millis: 1,
},
os: {
platform: 'darwin' as const,
platformRelease: 'test',
load: { '1m': 1, '5m': 1, '15m': 1 },
memory: { total_in_bytes: 1, free_in_bytes: 1, used_in_bytes: 1 },
uptime_in_millis: 1,
},
response_times: { avg_in_millis: 1, max_in_millis: 1 },
requests: { disconnects: 1, total: 1, statusCodes: { '200': 1 } },
concurrent_connections: 1,
};
}

function createMockOpsMetrics(testMetrics: Partial<OpsMetrics>): OpsMetrics {
const base = createBaseOpsMetrics();
return {
...base,
...testMetrics,
};
}
const testMetrics = ({
process: {
memory: { heap: { used_in_bytes: 100 } },
uptime_in_millis: 1500,
event_loop_delay: 50,
},
os: {
load: {
'1m': 10,
'5m': 20,
'15m': 30,
},
},
} as unknown) as Partial<OpsMetrics>;

describe('getEcsOpsMetricsLog', () => {
it('provides correctly formatted message', () => {
const result = getEcsOpsMetricsLog(createMockOpsMetrics(testMetrics));
expect(result.message).toMatchInlineSnapshot(
`"memory: 100.0B uptime: 0:00:01 load: [10.00,20.00,30.00] delay: 50.000"`
);
});

it('correctly formats process uptime', () => {
const logMeta = getEcsOpsMetricsLog(createMockOpsMetrics(testMetrics));
expect(logMeta.process!.uptime).toEqual(1);
});

it('excludes values from the message if unavailable', () => {
const baseMetrics = createBaseOpsMetrics();
const missingMetrics = ({
...baseMetrics,
process: {},
os: {},
} as unknown) as OpsMetrics;
const logMeta = getEcsOpsMetricsLog(missingMetrics);
expect(logMeta.message).toMatchInlineSnapshot(`""`);
});

it('specifies correct ECS version', () => {
const logMeta = getEcsOpsMetricsLog(createBaseOpsMetrics());
expect(logMeta.ecs.version).toBe('1.7.0');
});

it('provides an ECS-compatible response', () => {
const logMeta = getEcsOpsMetricsLog(createBaseOpsMetrics());
expect(logMeta).toMatchInlineSnapshot(`
Object {
"ecs": Object {
"version": "1.7.0",
},
"event": Object {
"category": Array [
"process",
"host",
],
"kind": "metric",
"type": "info",
},
"host": Object {
"os": Object {
"load": Object {
"15m": 1,
"1m": 1,
"5m": 1,
},
},
},
"message": "memory: 1.0B load: [1.00,1.00,1.00] delay: 1.000",
"process": Object {
"eventLoopDelay": 1,
"memory": Object {
"heap": Object {
"usedInBytes": 1,
},
},
"uptime": 0,
},
}
`);
});

it('logs ECS fields in the log meta', () => {
const logMeta = getEcsOpsMetricsLog(createBaseOpsMetrics());
expect(logMeta.event!.kind).toBe('metric');
expect(logMeta.event!.category).toEqual(expect.arrayContaining(['process', 'host']));
expect(logMeta.event!.type).toBe('info');
});
});
