Merge branch 'master' into async-errors
* master:
  Add a note to the docs that _cat api `help` option cannot be used if an optional url param is used (elastic#28686)
  Lift error finding utility to exceptions helpers
  Change "tweet" type to "_doc" (elastic#28690)
  [Docs] Add missing word in nested.asciidoc (elastic#28507)
  Simplify the Translog constructor by always expecting an existing translog (elastic#28676)
  Upgrade t-digest to 3.2 (elastic#28295) (elastic#28305)
  Add comment explaining lazy declared versions
jasontedor committed Feb 15, 2018
2 parents 842f331 + 658ca5e commit 79e3c07
Showing 23 changed files with 470 additions and 404 deletions.
24 changes: 12 additions & 12 deletions docs/reference/aggregations/metrics/percentile-aggregation.asciidoc
@@ -53,13 +53,13 @@ percentiles: `[ 1, 5, 25, 50, 75, 95, 99 ]`. The response will look like this:
"aggregations": {
"load_time_outlier": {
"values" : {
"1.0": 9.9,
"5.0": 29.500000000000004,
"25.0": 167.5,
"1.0": 5.0,
"5.0": 25.0,
"25.0": 165.0,
"50.0": 445.0,
"75.0": 722.5,
"95.0": 940.5,
"99.0": 980.1000000000001
"75.0": 725.0,
"95.0": 945.0,
"99.0": 985.0
}
}
}
@@ -129,31 +129,31 @@ Response:
"values": [
{
"key": 1.0,
"value": 9.9
"value": 5.0
},
{
"key": 5.0,
"value": 29.500000000000004
"value": 25.0
},
{
"key": 25.0,
"value": 167.5
"value": 165.0
},
{
"key": 50.0,
"value": 445.0
},
{
"key": 75.0,
"value": 722.5
"value": 725.0
},
{
"key": 95.0,
"value": 940.5
"value": 945.0
},
{
"key": 99.0,
"value": 980.1000000000001
"value": 985.0
}
]
}
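The expected percentile values above changed with the t-digest 3.2 upgrade listed in this merge; percentile aggregations are approximate, so the documented example outputs shift between t-digest versions. For contrast, here is a minimal exact-percentile sketch using linear interpolation between closest ranks (a hypothetical standalone helper, not Elasticsearch's implementation):

```java
import java.util.Arrays;

public class PercentileSketch {

    /**
     * Exact percentile with linear interpolation between closest ranks.
     * Hypothetical helper for illustration; Elasticsearch itself uses t-digest.
     */
    static double percentile(final double[] values, final double p) {
        final double[] sorted = values.clone();
        Arrays.sort(sorted);
        // fractional rank of the requested percentile in the sorted data
        final double rank = (p / 100.0) * (sorted.length - 1);
        final int lo = (int) Math.floor(rank);
        final int hi = (int) Math.ceil(rank);
        return sorted[lo] + (rank - lo) * (sorted[hi] - sorted[lo]);
    }

    public static void main(final String[] args) {
        final double[] loadTimes = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100};
        System.out.println(percentile(loadTimes, 50.0)); // 55.0
        System.out.println(percentile(loadTimes, 99.0)); // interpolated near the max
    }
}
```

Exact interpolation like this is only practical on small in-memory arrays; t-digest trades exactness for bounded memory over large data streams, which is why the documented outputs are approximations that can move between library versions.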
7 changes: 6 additions & 1 deletion docs/reference/cat.asciidoc
@@ -55,7 +55,7 @@ GET /_cat/master?help
--------------------------------------------------
// CONSOLE

- Might respond respond with:
+ Might respond with:

[source,txt]
--------------------------------------------------
@@ -66,6 +66,11 @@ node | n | node name
--------------------------------------------------
// TESTRESPONSE[s/[|]/[|]/ _cat]

+ NOTE: `help` is not supported if any optional url parameter is used.
+ For example `GET _cat/shards/twitter?help` or `GET _cat/indices/twi*?help`
+ results in an error. Use `GET _cat/shards?help` or `GET _cat/indices?help`
+ instead.

[float]
[[headers]]
=== Headers
2 changes: 1 addition & 1 deletion docs/reference/docs/get.asciidoc
@@ -3,7 +3,7 @@

The get API allows to get a typed JSON document from the index based on
its id. The following example gets a JSON document from an index called
- twitter, under a type called tweet, with id valued 0:
+ twitter, under a type called _doc, with id valued 0:

[source,js]
--------------------------------------------------
2 changes: 1 addition & 1 deletion docs/reference/indices/flush.asciidoc
@@ -98,7 +98,7 @@ which returns something similar to:
"translog_uuid" : "hnOG3xFcTDeoI_kvvvOdNA",
"history_uuid" : "XP7KDJGiS1a2fHYiFL5TXQ",
"local_checkpoint" : "-1",
"translog_generation" : "1",
"translog_generation" : "2",
"max_seq_no" : "-1",
"sync_id" : "AVvFY-071siAOuFGEO9P", <1>
"max_unsafe_auto_id_timestamp" : "-1"
2 changes: 1 addition & 1 deletion docs/reference/mapping/types/nested.asciidoc
@@ -184,7 +184,7 @@ The following parameters are accepted by `nested` fields:
Because nested documents are indexed as separate documents, they can only be
accessed within the scope of the `nested` query, the
- `nested`/`reverse_nested`, or <<nested-inner-hits,nested inner hits>>.
+ `nested`/`reverse_nested` aggregations, or <<nested-inner-hits,nested inner hits>>.
For instance, if a string field within a nested document has
<<index-options,`index_options`>> set to `offsets` to allow use of the postings
@@ -38,12 +38,9 @@
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
- import java.util.Collections;
- import java.util.LinkedList;
import java.util.List;
import java.util.Locale;
import java.util.Optional;
- import java.util.Queue;
import java.util.concurrent.atomic.AtomicBoolean;

public class Netty4Utils {
@@ -171,7 +168,8 @@ public static void closeChannels(final Collection<Channel> channels) throws IOEx
* @param cause the throwable to test
*/
public static void maybeDie(final Throwable cause) {
- final Optional<Error> maybeError = maybeError(cause);
+ final Logger logger = ESLoggerFactory.getLogger(Netty4Utils.class);
+ final Optional<Error> maybeError = ExceptionsHelper.maybeError(cause, logger);
if (maybeError.isPresent()) {
/*
* Here be dragons. We want to rethrow this so that it bubbles up to the uncaught exception handler. Yet, Netty wraps too many
@@ -182,7 +180,6 @@ public static void maybeDie(final Throwable cause) {
try {
// try to log the current stack trace
final String formatted = ExceptionsHelper.formatStackTrace(Thread.currentThread().getStackTrace());
- final Logger logger = ESLoggerFactory.getLogger(Netty4Utils.class);
logger.error("fatal error on the network layer\n{}", formatted);
} finally {
new Thread(
@@ -194,40 +191,4 @@ public static void maybeDie(final Throwable cause) {
}
}

- static final int MAX_ITERATIONS = 1024;
-
- /**
- * Unwrap the specified throwable looking for any suppressed errors or errors as a root cause of the specified throwable.
- *
- * @param cause the root throwable
- *
- * @return an optional error if one is found suppressed or a root cause in the tree rooted at the specified throwable
- */
- static Optional<Error> maybeError(final Throwable cause) {
- // early terminate if the cause is already an error
- if (cause instanceof Error) {
- return Optional.of((Error) cause);
- }
-
- final Queue<Throwable> queue = new LinkedList<>();
- queue.add(cause);
- int iterations = 0;
- while (!queue.isEmpty()) {
- iterations++;
- if (iterations > MAX_ITERATIONS) {
- ESLoggerFactory.getLogger(Netty4Utils.class).warn("giving up looking for fatal errors on the network layer", cause);
- break;
- }
- final Throwable current = queue.remove();
- if (current instanceof Error) {
- return Optional.of((Error) current);
- }
- Collections.addAll(queue, current.getSuppressed());
- if (current.getCause() != null) {
- queue.add(current.getCause());
- }
- }
- return Optional.empty();
- }

}
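Per the commit message "Lift error finding utility to exceptions helpers", the removed `maybeError` above now lives in `ExceptionsHelper` and takes a `Logger`, as the new call site in `maybeDie` shows. A self-contained sketch of the same breadth-first unwrapping, based only on the removed lines (the class name and the `ArrayDeque` choice are stand-ins, not the real `ExceptionsHelper`):

```java
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Optional;
import java.util.Queue;

public class MaybeErrorSketch {

    // cap matches the MAX_ITERATIONS constant in the removed Netty4Utils code
    static final int MAX_ITERATIONS = 1024;

    /**
     * Breadth-first search of the suppressed/cause tree for a fatal Error,
     * giving up after MAX_ITERATIONS nodes.
     */
    static Optional<Error> maybeError(final Throwable cause) {
        // early terminate if the cause is already an error
        if (cause instanceof Error) {
            return Optional.of((Error) cause);
        }
        final Queue<Throwable> queue = new ArrayDeque<>();
        queue.add(cause);
        int iterations = 0;
        while (!queue.isEmpty()) {
            if (++iterations > MAX_ITERATIONS) {
                break; // bound the search on pathologically deep chains
            }
            final Throwable current = queue.remove();
            if (current instanceof Error) {
                return Optional.of((Error) current);
            }
            // fan out to both suppressed throwables and the direct cause
            Collections.addAll(queue, current.getSuppressed());
            if (current.getCause() != null) {
                queue.add(current.getCause());
            }
        }
        return Optional.empty();
    }

    public static void main(final String[] args) {
        // a wrapped OutOfMemoryError is found through the cause chain
        System.out.println(maybeError(new RuntimeException(new OutOfMemoryError())).isPresent()); // true
        // ordinary exceptions yield an empty result
        System.out.println(maybeError(new Exception(new Exception())).isPresent()); // false
    }
}
```

The iteration cap bounds work on pathologically deep cause chains; the removed test below exercises exactly that by wrapping an error deeper than the cap and expecting an empty result.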
@@ -22,7 +22,6 @@
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;
- import io.netty.handler.codec.DecoderException;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.bytes.AbstractBytesReferenceTestCase;
import org.elasticsearch.common.bytes.BytesArray;
@@ -33,9 +32,6 @@
import org.elasticsearch.test.ESTestCase;

import java.io.IOException;
- import java.util.Optional;
-
- import static org.hamcrest.CoreMatchers.equalTo;

public class Netty4UtilsTests extends ESTestCase {

@@ -79,60 +75,6 @@ public void testToChannelBuffer() throws IOException {
assertArrayEquals(BytesReference.toBytes(ref), BytesReference.toBytes(bytesReference));
}

- public void testMaybeError() {
- final Error outOfMemoryError = new OutOfMemoryError();
- assertError(outOfMemoryError, outOfMemoryError);
-
- final DecoderException decoderException = new DecoderException(outOfMemoryError);
- assertError(decoderException, outOfMemoryError);
-
- final Exception e = new Exception();
- e.addSuppressed(decoderException);
- assertError(e, outOfMemoryError);
-
- final int depth = randomIntBetween(1, 16);
- Throwable cause = new Exception();
- boolean fatal = false;
- Error error = null;
- for (int i = 0; i < depth; i++) {
- final int length = randomIntBetween(1, 4);
- for (int j = 0; j < length; j++) {
- if (!fatal && rarely()) {
- error = new Error();
- cause.addSuppressed(error);
- fatal = true;
- } else {
- cause.addSuppressed(new Exception());
- }
- }
- if (!fatal && rarely()) {
- cause = error = new Error(cause);
- fatal = true;
- } else {
- cause = new Exception(cause);
- }
- }
- if (fatal) {
- assertError(cause, error);
- } else {
- assertFalse(Netty4Utils.maybeError(cause).isPresent());
- }
-
- assertFalse(Netty4Utils.maybeError(new Exception(new DecoderException())).isPresent());
-
- Throwable chain = outOfMemoryError;
- for (int i = 0; i < Netty4Utils.MAX_ITERATIONS; i++) {
- chain = new Exception(chain);
- }
- assertFalse(Netty4Utils.maybeError(chain).isPresent());
- }
-
- private void assertError(final Throwable cause, final Error error) {
- final Optional<Error> maybeError = Netty4Utils.maybeError(cause);
- assertTrue(maybeError.isPresent());
- assertThat(maybeError.get(), equalTo(error));
- }

private BytesReference getRandomizedBytesReference(int length) throws IOException {
// we know bytes stream output always creates a paged bytes reference, we use it to create randomized content
ReleasableBytesStreamOutput out = new ReleasableBytesStreamOutput(length, bigarrays);
@@ -15,7 +15,7 @@ setup:
- do:
indices.stats:
metric: [ translog ]
- - set: { indices.test.primaries.translog.size_in_bytes: empty_size }
+ - set: { indices.test.primaries.translog.size_in_bytes: creation_size }

- do:
index:
@@ -27,9 +27,11 @@ setup:
- do:
indices.stats:
metric: [ translog ]
- - gt: { indices.test.primaries.translog.size_in_bytes: $empty_size }
+ - gt: { indices.test.primaries.translog.size_in_bytes: $creation_size }
- match: { indices.test.primaries.translog.operations: 1 }
- - gt: { indices.test.primaries.translog.uncommitted_size_in_bytes: $empty_size }
+ # we can't check this yet as creation size will contain two empty translog generations. A single
+ # non empty generation with one op may be smaller or larger than that.
+ # - gt: { indices.test.primaries.translog.uncommitted_size_in_bytes: $creation_size }
- match: { indices.test.primaries.translog.uncommitted_operations: 1 }

- do:
@@ -39,9 +41,10 @@ setup:
- do:
indices.stats:
metric: [ translog ]
- - gt: { indices.test.primaries.translog.size_in_bytes: $empty_size }
+ - gt: { indices.test.primaries.translog.size_in_bytes: $creation_size }
- match: { indices.test.primaries.translog.operations: 1 }
- - match: { indices.test.primaries.translog.uncommitted_size_in_bytes: $empty_size }
+ ## creation translog size has some overhead due to an initial empty generation that will be trimmed later
+ - lt: { indices.test.primaries.translog.uncommitted_size_in_bytes: $creation_size }
- match: { indices.test.primaries.translog.uncommitted_operations: 0 }

- do:
@@ -59,7 +62,8 @@ setup:
- do:
indices.stats:
metric: [ translog ]
- - match: { indices.test.primaries.translog.size_in_bytes: $empty_size }
+ ## creation translog size has some overhead due to an initial empty generation that will be trimmed later
+ - lte: { indices.test.primaries.translog.size_in_bytes: $creation_size }
- match: { indices.test.primaries.translog.operations: 0 }
- - match: { indices.test.primaries.translog.uncommitted_size_in_bytes: $empty_size }
+ - lte: { indices.test.primaries.translog.uncommitted_size_in_bytes: $creation_size }
- match: { indices.test.primaries.translog.uncommitted_operations: 0 }