Merge remote-tracking branch 'upstream/lucene_snapshot' into intervals_regexp_range
mayya-sharipova committed Aug 20, 2024
2 parents 616612c + 0f0704e commit d6ac70d
Showing 146 changed files with 3,559 additions and 1,076 deletions.
2 changes: 1 addition & 1 deletion build-tools-internal/version.properties
@@ -1,5 +1,5 @@
elasticsearch = 8.16.0
lucene = 9.12.0-snapshot-a9a70fa97cc
lucene = 9.12.0-snapshot-25253a1a016

bundled_jdk_vendor = openjdk
bundled_jdk = 22.0.1+8@c7ec1332f7bb44aeba2eb341ae18aca4
5 changes: 5 additions & 0 deletions docs/changelog/111855.yaml
@@ -0,0 +1,5 @@
pr: 111855
summary: "ESQL: Profile more timing information"
area: ES|QL
type: enhancement
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/111943.yaml
@@ -0,0 +1,6 @@
pr: 111943
summary: Fix synthetic source for empty nested objects
area: Mapping
type: bug
issues:
- 111811
7 changes: 7 additions & 0 deletions docs/changelog/111955.yaml
@@ -0,0 +1,7 @@
pr: 111955
summary: Clean up dangling S3 multipart uploads
area: Snapshot/Restore
type: enhancement
issues:
- 101169
- 44971
6 changes: 6 additions & 0 deletions docs/changelog/111968.yaml
@@ -0,0 +1,6 @@
pr: 111968
summary: "ESQL: don't lose the original casting error message"
area: ES|QL
type: bug
issues:
- 111967
15 changes: 15 additions & 0 deletions docs/changelog/111972.yaml
@@ -0,0 +1,15 @@
pr: 111972
summary: Introduce global retention in data stream lifecycle.
area: Data streams
type: feature
issues: []
highlight:
  title: Add global retention in data stream lifecycle
  body: |-
    Data stream lifecycle now supports configuring retention on a cluster level,
    namely global retention. Global retention allows us to configure two different
    retentions:

    - `data_streams.lifecycle.retention.default` is applied to all data streams
      managed by the data stream lifecycle that do not have retention defined on
      the data stream level.
    - `data_streams.lifecycle.retention.max` is applied to all data streams managed
      by the data stream lifecycle and it allows any data stream data to be deleted
      after the `max_retention` has passed.
  notable: true
6 changes: 6 additions & 0 deletions docs/changelog/111983.yaml
@@ -0,0 +1,6 @@
pr: 111983
summary: Avoid losing error message in failure collector
area: ES|QL
type: bug
issues:
- 111894
6 changes: 6 additions & 0 deletions docs/changelog/112005.yaml
@@ -0,0 +1,6 @@
pr: 112005
summary: Check for valid `parentDoc` before retrieving its previous
area: Mapping
type: bug
issues:
- 111990
1 change: 1 addition & 0 deletions docs/reference/api-conventions.asciidoc
@@ -334,6 +334,7 @@ All REST API parameters (both request parameters and JSON body) support
providing boolean "false" as the value `false` and boolean "true" as the
value `true`. All other values will raise an error.

[[api-conventions-number-values]]
[discrete]
=== Number Values

1 change: 1 addition & 0 deletions docs/reference/esql/functions/kibana/docs/to_datetime.md


4 changes: 4 additions & 0 deletions docs/reference/release-notes/8.15.0.asciidoc
@@ -22,6 +22,10 @@ Either downgrade to an earlier version, upgrade to 8.15.1, or else follow the
recommendation in the manual to entirely disable swap instead of using the
memory lock feature (issue: {es-issue}111847[#111847])

* The `took` field of the response to the <<docs-bulk>> API is incorrect and may be rather large. Clients which
<<api-conventions-number-values,incorrectly>> assume that this value will be within a particular range (e.g. that it fits into a 32-bit
signed integer) may encounter errors (issue: {es-issue}111854[#111854])

[[breaking-8.15.0]]
[float]
=== Breaking changes
12 changes: 12 additions & 0 deletions docs/reference/settings/data-stream-lifecycle-settings.asciidoc
@@ -10,6 +10,18 @@ These are the settings available for configuring <<data-stream-lifecycle, data s

==== Cluster level settings

[[data-streams-lifecycle-retention-max]]
`data_streams.lifecycle.retention.max`::
(<<dynamic-cluster-setting,Dynamic>>, <<time-units, time unit value>>)
The maximum retention period that applies to all user data streams managed by the data stream lifecycle. The max retention also
overrides the retention of any data stream whose configured retention exceeds it. It should be greater than `10s`.

[[data-streams-lifecycle-retention-default]]
`data_streams.lifecycle.retention.default`::
(<<dynamic-cluster-setting,Dynamic>>, <<time-units, time unit value>>)
The retention period that applies to all user data streams managed by the data stream lifecycle that do not have retention configured.
It should be greater than `10s` and less than or equal to <<data-streams-lifecycle-retention-max, `data_streams.lifecycle.retention.max`>>.
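
For illustration only, a minimal sketch of setting both values through the cluster settings API; the `7d` and `90d` values here are hypothetical examples, not documented defaults:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "data_streams.lifecycle.retention.default": "7d",
    "data_streams.lifecycle.retention.max": "90d"
  }
}
----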

[[data-streams-lifecycle-poll-interval]]
`data_streams.lifecycle.poll_interval`::
(<<dynamic-cluster-setting,Dynamic>>, <<time-units, time unit value>>)
36 changes: 9 additions & 27 deletions docs/reference/snapshot-restore/repository-s3.asciidoc
@@ -317,6 +317,15 @@ include::repository-shared-settings.asciidoc[]
https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html[AWS
DeleteObjects API].

`max_multipart_upload_cleanup_size`::

(<<number,numeric>>) Sets the maximum number of possibly-dangling multipart
uploads to clean up in each batch of snapshot deletions. Defaults to `1000`,
which is the maximum number supported by the
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html[AWS
ListMultipartUploads API]. If set to `0`, {es} will not attempt to clean up
dangling multipart uploads.
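
As a sketch of how the setting might be supplied when registering a repository (the repository name `my_s3_repository`, the bucket name, and the value `500` are hypothetical):

[source,console]
----
PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-bucket",
    "max_multipart_upload_cleanup_size": 500
  }
}
----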

NOTE: The option of defining client settings in the repository settings as
documented below is considered deprecated, and will be removed in a future
version.
@@ -492,33 +501,6 @@ by the `elasticsearch` user. By default, {es} runs as user `elasticsearch` using

If the symlink exists, it will be used by default by all S3 repositories that don't have explicit `client` credentials.

==== Cleaning up multi-part uploads

{es} uses S3's multi-part upload process to upload larger blobs to the
repository. The multi-part upload process works by dividing each blob into
smaller parts, uploading each part independently, and then completing the
upload in a separate step. This reduces the amount of data that {es} must
re-send if an upload fails: {es} only needs to re-send the part that failed
rather than starting from the beginning of the whole blob. The storage for each
part is charged independently starting from the time at which the part was
uploaded.

If a multi-part upload cannot be completed then it must be aborted in order to
delete any parts that were successfully uploaded, preventing further storage
charges from accumulating. {es} will automatically abort a multi-part upload on
failure, but sometimes the abort request itself fails. For example, if the
repository becomes inaccessible or the instance on which {es} is running is
terminated abruptly then {es} cannot complete or abort any ongoing uploads.

You must make sure that failed uploads are eventually aborted to avoid
unnecessary storage costs. You can use the
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html[List
multipart uploads API] to list the ongoing uploads and look for any which are
unusually long-running, or you can
https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpu-abort-incomplete-mpu-lifecycle-config.html[configure
a bucket lifecycle policy] to automatically abort incomplete uploads once they
reach a certain age.
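
For example, a minimal lifecycle configuration of the kind referenced above might look as follows; the rule ID and the seven-day threshold are illustrative assumptions, not recommendations:

[source,json]
----
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart-uploads",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 7
      }
    }
  ]
}
----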

[[repository-s3-aws-vpc]]
==== AWS VPC bandwidth settings
