Add system tests to kibana package #4444

Merged
Changes from 9 commits
56 changes: 2 additions & 54 deletions packages/kibana/_dev/build/docs/README.md
@@ -31,67 +31,15 @@ UI in Kibana. To enable this usage, set `xpack.enabled: true` on the package con

Stats data stream uses the stats endpoint of Kibana, which is available in 6.4 by default.

**Exported fields**

| Field | Description | Type |
|---|---|---|
| @timestamp | Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events. | date |
| data_stream.dataset | Data stream dataset. | constant_keyword |
| data_stream.namespace | Data stream namespace. | constant_keyword |
| data_stream.type | Data stream type. | constant_keyword |
| kibana.stats.concurrent_connections | Number of client connections made to the server. Note that browsers can send multiple simultaneous connections to request multiple server assets at once, and they can re-use established connections. | long |
| kibana.stats.host.name | Kibana instance hostname | keyword |
| kibana.stats.index | Name of Kibana's internal index | keyword |
| kibana.stats.kibana.status | | keyword |
| kibana.stats.name | Kibana instance name | keyword |
| kibana.stats.os.distro | | keyword |
| kibana.stats.os.distroRelease | | keyword |
| kibana.stats.os.load.15m | | half_float |
| kibana.stats.os.load.1m | | half_float |
| kibana.stats.os.load.5m | | half_float |
| kibana.stats.os.memory.free_in_bytes | | long |
| kibana.stats.os.memory.total_in_bytes | | long |
| kibana.stats.os.memory.used_in_bytes | | long |
| kibana.stats.os.platform | | keyword |
| kibana.stats.os.platformRelease | | keyword |
| kibana.stats.process.event_loop_delay.ms | Event loop delay in milliseconds | scaled_float |
| kibana.stats.process.memory.heap.size_limit.bytes | Max. old space size allocated to Node.js process, in bytes | long |
| kibana.stats.process.memory.heap.total.bytes | Total heap allocated to process in bytes | long |
| kibana.stats.process.memory.heap.uptime.ms | Uptime of process in milliseconds | long |
| kibana.stats.process.memory.heap.used.bytes | Heap used by process in bytes | long |
| kibana.stats.process.memory.resident_set_size.bytes | | long |
| kibana.stats.process.uptime.ms | | long |
| kibana.stats.request.disconnects | Number of requests that were disconnected | long |
| kibana.stats.request.total | Total number of requests | long |
| kibana.stats.response_time.avg.ms | Average response time in milliseconds | long |
| kibana.stats.response_time.max.ms | Maximum response time in milliseconds | long |
| kibana.stats.snapshot | Whether the Kibana build is a snapshot build | boolean |
| kibana.stats.status | Kibana instance's health status | keyword |
| kibana.stats.usage.index | | keyword |
| service.id | Unique identifier of the running service. If the service is comprised of many nodes, the `service.id` should be the same for all nodes. This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event. Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead. | keyword |
| service.version | Version of the service the data was collected from. This allows to look at a data set only for a specific version of a service. | keyword |
{{fields "stats"}}

{{event "stats"}}

### Status

This status endpoint is available in 6.0 by default and can be enabled in Kibana >= 5.4 with the config option `status.v6ApiFormat: true`.

**Exported fields**

| Field | Description | Type |
|---|---|---|
| @timestamp | Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events. | date |
| data_stream.dataset | Data stream dataset. | constant_keyword |
| data_stream.namespace | Data stream namespace. | constant_keyword |
| data_stream.type | Data stream type. | constant_keyword |
| kibana.status.metrics.concurrent_connections | Current concurrent connections. | long |
| kibana.status.metrics.requests.disconnects | Total number of disconnected connections. | long |
| kibana.status.metrics.requests.total | Total number of connections. | long |
| kibana.status.name | Kibana instance name. | keyword |
| kibana.status.status.overall.state | Kibana overall state. | keyword |
| service.id | Unique identifier of the running service. If the service is comprised of many nodes, the `service.id` should be the same for all nodes. This id should uniquely identify the service. This makes it possible to correlate logs and metrics for one specific service, no matter which particular node emitted the event. Note that if you need to see the events from one specific host of the service, you should filter on that `host.name` or `host.id` instead. | keyword |
| service.version | Version of the service the data was collected from. This allows to look at a data set only for a specific version of a service. | keyword |
{{fields "status"}}

{{event "status"}}

2 changes: 0 additions & 2 deletions packages/kibana/_dev/deploy/docker/.env
@@ -1,3 +1 @@
ELASTIC_PASSWORD=changeme
Contributor Author

The `.env` file doesn't work in the CI environment.

ELASTIC_VERSION=8.5.0-SNAPSHOT
KIBANA_PASSWORD=changeme
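Regarding the comment above about `.env` not being loaded in CI: a quick shell illustration (an editor's sketch, not part of the PR) of the `${VAR:-default}` fallback that the updated docker-compose.yml now inlines; Compose interpolates this pattern the same way POSIX shell does.

```sh
# If ELASTIC_VERSION is unset (as in CI, where .env is not loaded),
# the image tag falls back to the inline default.
unset ELASTIC_VERSION
echo "docker.elastic.co/kibana/kibana:${ELASTIC_VERSION:-8.5.0-SNAPSHOT}"
# -> docker.elastic.co/kibana/kibana:8.5.0-SNAPSHOT
```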
4 changes: 2 additions & 2 deletions packages/kibana/_dev/deploy/docker/config/kibana.yml
@@ -8,7 +8,7 @@ elasticsearch.ssl.verificationMode: "none"
xpack.security.audit.enabled: true
xpack.security.audit.appender:
type: rolling-file
fileName: ./logs/audit.log
fileName: /usr/share/kibana/logs/audit.log
policy:
type: time-interval
interval: 24h
@@ -25,6 +25,6 @@ logging:
appenders:
file:
type: file
fileName: ./logs/kibana.log
fileName: /usr/share/kibana/logs/kibana.log
layout:
type: json
40 changes: 26 additions & 14 deletions packages/kibana/_dev/deploy/docker/docker-compose.yml
@@ -1,50 +1,62 @@
version: "2.3"
services:
elasticsearch:
image: "docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}"
image: "docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION:-8.5.0-SNAPSHOT}"
environment:
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
- ELASTIC_PASSWORD
- "ELASTIC_PASSWORD=changeme"
ports:
- "127.0.0.1:9201:9200"
volumes:
- "./config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml"
healthcheck:
test: curl -sfo /dev/null -u elastic:${ELASTIC_PASSWORD} localhost:9200 || exit 1
test: curl -sfo /dev/null -u elastic:changeme http://localhost:9200 || exit 1
retries: 300
interval: 1s
setup:
image: "alpine/curl:latest"
user: root
environment:
- "ES_SERVICE_HOST=http://elasticsearch:9200"
- ELASTIC_PASSWORD
- KIBANA_PASSWORD
- "ELASTIC_PASSWORD=changeme"
- "KIBANA_PASSWORD=changeme"
command: ["/bin/sh", "./setup.sh"]
volumes:
- ./scripts/setup.sh:/setup.sh
- "./scripts/setup.sh:/setup.sh"
kibana:
image: "docker.elastic.co/kibana/kibana:${ELASTIC_VERSION}"
image: "docker.elastic.co/kibana/kibana:${ELASTIC_VERSION:-8.5.0-SNAPSHOT}"
user: "1001:0"
group_add:
- "0"
depends_on:
elasticsearch:
condition: service_healthy
volumes:
- ./config/kibana.yml:/usr/share/kibana/config/kibana.yml
- ${SERVICE_LOGS_DIR}:/usr/share/kibana/logs
- "./config/kibana.yml:/usr/share/kibana/config/kibana.yml"
- "kbn_logs:/usr/share/kibana/logs"
ports:
- "127.0.0.1:5602:5601"
healthcheck:
test: curl -sfo /dev/null localhost:5601 || exit 1
test: curl -sfo /dev/null http://localhost:5601 || exit 1
retries: 600
interval: 1s
log_generation:
image: "alpine/curl:latest"
user: root
depends_on:
kibana:
condition: service_healthy
environment:
- "KIBANA_SERVICE_HOST=http://kibana:5601"
- ELASTIC_PASSWORD
- KIBANA_PASSWORD
command: ["/bin/sh", "./generate-audit-logs.sh"]
- "ELASTIC_PASSWORD=changeme"
- "KIBANA_PASSWORD=changeme"
command: ["/bin/sh", "./generate-logs.sh"]
volumes:
- ./scripts/generate-audit-logs.sh:/generate-audit-logs.sh
- "./scripts/generate-logs.sh:/generate-logs.sh"
- "${SERVICE_LOGS_DIR}:/var/log"
- "kbn_logs:/kbn_logs"
volumes:
Contributor Author

@crespocarlos Oct 14, 2022


Since the Kibana container is not started as the root user, we use an in-memory mount. With this mount, the log_generation container, which runs as root, can read the files generated by Kibana and copy them to ${SERVICE_LOGS_DIR}.

kbn_logs:
driver_opts:
type: tmpfs
device: tmpfs

This file was deleted.

53 changes: 53 additions & 0 deletions packages/kibana/_dev/deploy/docker/scripts/generate-logs.sh
@@ -0,0 +1,53 @@
#!/bin/sh

# Makes requests to the Kibana API so that audit logs are generated
set -e

# Copy the log file contents from this container to /var/log/, which is bind mounted to ${SERVICE_LOGS_DIR}.
# This script must be executed by the root user in order to have permission to write to the ${SERVICE_LOGS_DIR} folder.
copy_log_files () {
for f in /kbn_logs/*;
do
echo "Copy ${f##*/} file..."

if [[ ! -e /var/log/${f##*/} ]]; then
touch /var/log/${f##*/}
fi

## appends only new lines
comm -23 "$f" /var/log/${f##*/} >> /var/log/${f##*/}
done
}

attempt_counter=0
max_attempts=6

until curl -s --request GET \
--url $KIBANA_SERVICE_HOST/login \
--user "elastic:$KIBANA_PASSWORD" \
--header 'Content-Type: application/json'
do

if [ ${attempt_counter} -eq ${max_attempts} ];then
echo "Max attempts reached"
exit 1
fi

printf '.'
attempt_counter=$(($attempt_counter+1))
sleep 10
done

while true
do
curl -s --request GET \
--url $KIBANA_SERVICE_HOST/api/features \
--user "elastic:$KIBANA_PASSWORD" \
--header 'Content-Type: application/json'

echo "Audit log created"

copy_log_files

sleep 10
done
24 changes: 17 additions & 7 deletions packages/kibana/_dev/deploy/docker/scripts/setup.sh
100644 → 100755
@@ -1,12 +1,22 @@
#!/bin/sh

# Sets a password for the kibana_system user
attempt_counter=0
max_attempts=6

set -e

until curl --request POST $ES_SERVICE_HOST/_security/user/kibana_system/_password \
--user elastic:$ELASTIC_PASSWORD \
until curl -s --request POST $ES_SERVICE_HOST/_security/user/kibana_system/_password \
--user "elastic:$ELASTIC_PASSWORD" \
--header 'Content-Type: application/json' \
--data "{\"password\":\"$KIBANA_PASSWORD\"}"
do sleep 10;
done;
--data "{\"password\":\"$KIBANA_PASSWORD\"}"
do
if [ ${attempt_counter} -eq ${max_attempts} ];then
echo "Max attempts reached"
exit 1
fi

printf '.'
attempt_counter=$(($attempt_counter+1))
sleep 10
done

echo "Setup complete"
@@ -0,0 +1,5 @@
input: logfile
data_stream:
vars:
paths:
- "{{SERVICE_LOGS_DIR}}/audit.log"
6 changes: 6 additions & 0 deletions packages/kibana/data_stream/audit/fields/base-fields.yml
@@ -10,3 +10,9 @@
- name: "@timestamp"
type: date
description: Event timestamp.
- name: log.offset
type: long
description: The file offset the reported line starts at.
- name: input.type
type: keyword
description: The input type from which the event was generated. This field is set to the value specified for the type option in the input section of the Filebeat config file.
6 changes: 6 additions & 0 deletions packages/kibana/data_stream/audit/fields/ecs.yml
@@ -14,6 +14,8 @@
external: ecs
- name: http.request.method
external: ecs
- name: log.file.path
external: ecs
- name: log.level
external: ecs
- name: log.logger
@@ -22,6 +24,8 @@
external: ecs
- name: process.pid
external: ecs
- name: related.user
external: ecs
- name: service.node.roles
external: ecs
- name: trace.id
@@ -30,6 +34,8 @@
external: ecs
- name: url.domain
external: ecs
- name: url.original
external: ecs
- name: url.path
external: ecs
- name: url.port