
[RFC] Metrics Framework #10141

Gaganjuneja (Contributor) opened this issue Sep 20, 2023 · 39 comments

Is your feature request related to a problem? Please describe.
Feature Request #1061
Feature Request #6533
PR - #8816

Describe the solution you'd like

Problem Statement

The current OpenSearch stats APIs offer valuable insights into the inner workings of each node and the cluster as a whole. However, they lack certain details such as percentiles and do not provide the semantics of richer metric types like histograms. Consequently, identifying outliers becomes challenging. OpenSearch needs comprehensive metric support to effectively monitor the cluster. Recent issues and RFCs have attempted to address this in a piecemeal fashion, and we are currently engaged in a significant effort to instrument OpenSearch code paths. This presents an opportune moment to introduce comprehensive metrics support.

Tenets

  1. Minimal Overhead – Metrics should impose minimal overhead on system resources such as CPU and memory.
  2. No performance impact – Metrics should not adversely affect cluster operations in terms of performance.
  3. Extensible – The metrics framework should be easily extendable to accommodate background tasks, plugins, and extensions.
  4. Safety - The framework must ensure that instrumentation code does not result in memory leaks.
  5. Well-defined Abstractions – The metrics framework should offer clear abstractions so that changes in the implementation framework or API contracts do not disrupt implementing classes.
  6. Flexible – There should be a mechanism for dynamically enabling/disabling metrics through configurable settings.
  7. Do not reinvent - We should prefer out-of-the-box solutions instead of building something from scratch.

Metrics Framework

It is widely recognized that observability components like tracing and metrics introduce overhead. Therefore, designing the Metrics Framework for OpenSearch requires careful consideration. This framework will provide abstractions, governance, and utilities to enable developers and users to easily emit metrics. Let's delve deeper into these aspects:

  1. Abstractions - While metrics frameworks like OpenTelemetry can be leveraged, we will abstract the solution behind the OpenSearch APIs. This future-proofs the core OpenSearch code where metrics are added, reducing the need for changes. Hooks for metrics have already been included in the Telemetry abstraction, following a similar pattern as the TracingTelemetry implementation.
  2. Governance - Similar to tracing, we should define mechanisms for enabling and disabling metrics at multiple levels.
  3. Code Pollution - To mitigate code pollution, we should provide utilities such as SpanBuilder to abstract away repetitive boilerplate code.

HLD

  1. Metric APIs - The Metrics API will facilitate the creation and updating of metrics. It should handle most of the heavy lifting and abstract away the need for boilerplate code. A usage sketch follows this list.
public interface Meter extends Closeable {

    /**
     * Creates the counter. This counter can increase monotonically.
     * @param name name of the counter.
     * @param description any description about the metric.
     * @param unit unit of the metric.
     * @return counter instrument.
     */
    Counter createCounter(String name, String description, String unit);

    /**
     * Creates the up/down counter. The value of this counter may go up and down, so it should be used
     * in places where negative, positive, and zero values are possible.
     * @param name name of the counter.
     * @param description any description about the metric.
     * @param unit unit of the metric.
     * @return up/down counter instrument.
     */
    Counter createUpDownCounter(String name, String description, String unit);

    /**
     * Creates the histogram instrument, which is needed when values need to be recorded against
     * buckets, samples, etc.
     * @param name name of the histogram.
     * @param description any description about the metric.
     * @param unit unit of the metric.
     * @return histogram instrument.
     */
    Histogram createHistogram(String name, String description, String unit);

    /**
     * Creates the gauge, which helps in recording arbitrary/absolute values like CPU time, memory usage, etc.
     * @param name name of the gauge.
     * @param description any description about the metric.
     * @param unit unit of the metric.
     * @return gauge instrument.
     */
    Gauge createGauge(String name, String description, String unit);
}

  2. Storage - Metrics data need not be emitted with each event or request. Instead, they should be stored or buffered (like async loggers) and aggregated over a configurable time frame in an in-memory store before periodic emission.
  3. Sink - Periodically emitted metrics should be written to a configurable sink. Users should have the flexibility to define their preferred sink.
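
For illustration, here is a minimal usage sketch of how the Meter API above might be consumed by instrumentation code. Only the factory methods are defined above; Counter.add(double) and Histogram.record(double) are assumptions here (record(double) appears later in this thread), and the metric names are illustrative only.

// Minimal usage sketch, not the actual OpenSearch implementation.
class SearchRequestMetrics {

    private final Counter queryCounter;
    private final Histogram queryLatency;

    SearchRequestMetrics(Meter meter) {
        this.queryCounter = meter.createCounter(
            "search.query.count", "Number of search queries executed", "1");
        this.queryLatency = meter.createHistogram(
            "search.query.latency", "Distribution of search query latency", "ms");
    }

    void onSearchCompleted(long tookTimeMillis) {
        queryCounter.add(1.0);               // count every completed search request
        queryLatency.record(tookTimeMillis); // record per-request latency
    }
}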

Implementation

OpenTelemetry offers decent support for metrics, and we can leverage the existing telemetry-otel plugin to provide the implementation for metrics as well.

[Screenshot: proposed metrics framework architecture diagram]


@Gaganjuneja (Contributor, Author) commented:

@reta @Bukhtawar @shwetathareja @backslasht @suranjay @rishabh6788 @nknize @dblock Please provide your inputs.

@msfroh (Collaborator) commented Sep 20, 2023:

Does this give us the opportunity to collect a bunch of counters related to a single end-to-end execution of some "operation"? As a start, will we be able to get an implementation that logs a bunch of counters for every HTTP request, so we'll be able to do out-of-process aggregation / computation of percentiles?

The current stats-based approach that uses global (or per-node, per-index, per-shard) counters obscures a lot of useful information. For example, maybe I see stats showing that we're spending a lot of time doing thing A and a lot of time doing thing B. Are we spending a lot of time doing A and B in the same requests? Are some requests spending time on thing A, while other requests are spending time on thing B? Was there only one request that spent a really long time doing thing B?

Request-level collection of counters would be extremely useful.

@reta (Collaborator) commented Sep 21, 2023:

@Gaganjuneja thanks for putting this one up

public interface Meter extends Closeable {

I believe we should rename it to MeterRegistry or MetricRegistry, since Meter is usually an individual metric. But that is minor; what worries me a lot is the number of abstractions we will have to introduce to facade the different metric providers (OTel, Dropwizard, Micrometer, ...). I believe it is going to be massive.

@Gaganjuneja (Contributor, Author) commented:

Does this give us the opportunity to collect a bunch of counters related to a single end-to-end execution of some "operation"? As a start, will we be able to get an implementation that logs a bunch of counters for every HTTP request, so we'll be able to do out-of-process aggregation / computation of percentiles?

The current stats-based approach that uses global (or per-node, per-index, per-shard) counters obscures a lot of useful information. For example, maybe I see stats showing that we're spending a lot of time doing thing A and a lot of time doing thing B. Are we spending a lot of time doing A and B in the same requests? Are some requests spending time on thing A, while other requests are spending time on thing B? Was there only one request that spent a really long time doing thing B?

Request-level collection of counters would be extremely useful.

Yes, we can generate request-level counters and connect traces with metrics to see both the aggregated and the request-level view.

@reta (Collaborator) commented Sep 22, 2023:

As a start, will we be able to get an implementation that logs a bunch of counters for every HTTP request, so we'll be able to do out-of-process aggregation / computation of percentiles?

@msfroh could you give an example of what kind of counters for every HTTP request you are looking for?

For example, maybe I see stats showing that we're spending a lot of time doing thing A and a lot of time doing thing B

I believe with traces you should naturally get that, since A and B should be denoted by span or span hierarchy.

Yes, we can generate the Request level counters and connect the traces with metrics to see both aggregated and request level view.

@Gaganjuneja To be fair, I really doubt that we could (and even should) do that; however, we could add events + attributes to the spans and then query / aggregate over them as needed.

@Gaganjuneja (Contributor, Author) commented:

@Gaganjuneja To be fair, I really doubt that we could (and even should) do that; however, we could add events + attributes to the spans and then query / aggregate over them as needed.

Yes, we can do that; the only challenge here is with sampling.

@Gaganjuneja (Contributor, Author) commented:

@Gaganjuneja thanks for putting this one up

public interface Meter extends Closeable {

I believe we should rename it to MeterRegistry or MetricRegistry, since Meter is usually an individual metric. But that is minor; what worries me a lot is the number of abstractions we will have to introduce to facade the different metric providers (OTel, Dropwizard, Micrometer, ...). I believe it is going to be massive.

I totally agree with you @reta here. We can start small and keep adding instruments based on requirements. For now, we can just start with the counter and keep the facade simple, as shown in the interface. I understand the above interface is motivated by the OTel schema, but we should be able to implement it for most metering solutions.

@msfroh (Collaborator) commented Sep 22, 2023:

@msfroh could you give an example of what kind of counters for every HTTP request you are looking for?

You bet! If I could have every per-request metrics wish granted, it would include things like (in priority order):

  1. Overall request latency
  2. Counters related to the search request (query clause count, 0/1 counter on presence of various features -- different aggregations, collapse, rescore, etc, request source size in bytes).
  3. Counters related to the search response (total hits, hits returned, number of aggregation buckets in the response, response size in bytes).
  4. A 0/1 failure counter.
  5. Latency of each search phase.
  6. 0/1 failure counters for each search phase.
  7. Shard success/failed counters for each search phase.
  8. Total CPU cost of the request.
  9. CPU cost broken down by search phase, per shard.

And of course, I would love to be able to correlate all of this back to the original query (not necessarily from a time series system, since I would expect that to store aggregate histograms of counters, but I should be able to fetch the raw logs that were aggregated for ingestion into the time series data store).

@reta (Collaborator) commented Sep 22, 2023:

  1. Overall request latency
  2. Latency of each search phase.

This is the goal of tracing - you get it out of the box.

  1. Counters related to the search request (query clause count, 0/1 counter on presence of various features -- different aggregations, collapse, rescore, etc, request source size in bytes).
  2. Counters related to the search response (total hits, hits returned, number of aggregation buckets in the response, response size in bytes).
  3. A 0/1 failure counter.

Those could be attached to each span at the relevant subflow.

  1. Total CPU cost of the request.
  2. CPU cost broken down by search phase, per shard.

This could be extremely difficult and expensive to track, but we do try to track the CPU cost of the search task per shard; that could be attached to task spans as well.

@msfroh (Collaborator) commented Sep 22, 2023:

@reta -- so all my wishes can potentially be granted? Can I have a pony too? 😁

@reta (Collaborator) commented Sep 22, 2023:

haha @msfroh, one more:

And of course, I would love to be able to correlate all of this back to the original query (not necessarily from a time series system, since I would expect that to store aggregate histograms of counters, but I should be able to fetch the raw logs that were aggregated for ingestion into the time series data store).

Trace IDs could be (and should be) in logs, so it should be possible to correlate all logs for a specific trace and request.

@Gaganjuneja (Contributor, Author) commented:

  1. Overall request latency
  2. Latency of each search phase.

This is the goal of tracing - you get it out of the box.

  1. Counters related to the search request (query clause count, 0/1 counter on presence of various features -- different aggregations, collapse, rescore, etc, request source size in bytes).
  2. Counters related to the search response (total hits, hits returned, number of aggregation buckets in the response, response size in bytes).
  3. A 0/1 failure counter.

Those could be attached to each span at the relevant subflow.

  1. Total CPU cost of the request.
  2. CPU cost broken down by search phase, per shard.

This could be extremely difficult and expensive to track, but we do try to track the CPU cost of the search task per shard; that could be attached to task spans as well.

This information is already available as part of the TaskResourceTracking framework, which can be reused. Not sure if this should be attached as an event or an attribute to the span.

The only use case where we need metric and trace integration is this: let's say we have a metric graph for HttpStatusCode and it shows a peak for statusCode=500; now we may want to see all the requests that failed during this peak. Then,

  1. If a metric has traceIDs associated, we can immediately pull those traces and hopefully figure out the reason.
  2. We can query the traces for that time range with httpStatusCode=500 and do the evaluation.

So, I agree we should handle this on the storage side instead of integrating it in the server, because the sampling issue still persists there as well.

@Gaganjuneja (Contributor, Author) commented:

Hi @reta
I've been exploring various metric solutions and their suitability for our use cases. While these tools employ distinct methods for metric collection, there are still some common elements. Let's delve into the various types of meters available:

  1. Counters
    Counters are relatively straightforward, with most of them sharing a common set of parameters. One significant difference lies in how some tools handle counters for incrementing and decrementing values. Therefore, it might be beneficial to maintain two different counter types for increasing values and values that can move both up and down.

a. Increasing Counters
Counter createCounter(String name, String description, String unit);
b. UpDown Counters
Counter createUpDownCounter(String name, String description, String unit);

There's also a variation involving asynchronous counters that require a callback and operate at a specific cadence, which we can incorporate later based on our use cases.

  2. Timer/Histogram
    Metrics related to time, such as latencies, exhibit variations in implementation and differing opinions among tools. For instance, OpenTelemetry (OTel) generalizes these as Histograms, whereas other tools like Micrometer and Dropwizard provide a dedicated timer interface for latency metrics, though they function similarly to histograms. Differences may still exist in how quantiles are handled, but we can maintain consistent API signatures by following the structure below. If we implement the Histogram API, it can serve as a generic solution for all bucket-based aggregations.

Histogram createHistogram(String name, String description, String unit, List<Double> buckets);

  3. Gauges
    Gauges follow a similar structure, but each tool has its unique implementations and classes. To standardize this, we can use the following signature, which defines a natural schema for Gauge metrics and can be adapted for different tools.

Gauge createGauge(String name, Observable observable, String description, String unit);

Given the clarity around counters, I propose that we commence by implementing support for counters first and then gradually expand the framework. I would appreciate your thoughts on this.

@reta (Collaborator) commented Sep 25, 2023:

Thanks @Gaganjuneja

This information is already available as part of the TaskResourceTracking framework, which can be reused. Not sure if this should be attached as an event or an attribute to the span.

Correct for search tasks, which are probably the ones which deserve such tracking

The only use case where we need metric and trace integration is this: let's say we have a metric graph for HttpStatusCode and it shows a peak for statusCode=500; now we may want to see all the requests that failed during this peak.

Adding traceIDs to the metric would blow up the number of unique time series and very likely would not be useful; however, since each span has attributes (status codes) and time (start/stop), you could easily query the tracing storage instead for insights (the metric would indicate the problem, traces would help to pinpoint it).

@Gaganjuneja (Contributor, Author) commented Sep 25, 2023:

Thanks @Gaganjuneja

This information is already available as part of the TaskResourceTracking framework, which can be reused. Not sure if this should be attached as an event or an attribute to the span.

Correct for search tasks, which are probably the ones which deserve such tracking

The only use case where we need metric and trace integration is this: let's say we have a metric graph for HttpStatusCode and it shows a peak for statusCode=500; now we may want to see all the requests that failed during this peak.

Adding traceIDs to the metric would blow up the number of unique time series and very likely would not be useful; however, since each span has attributes (status codes) and time (start/stop), you could easily query the tracing storage instead for insights (the metric would indicate the problem, traces would help to pinpoint it).

Totally agreed.

@reta (Collaborator) commented Sep 25, 2023:

Given the clarity around counters, I propose that we commence by implementing support for counters first and then gradually expand the framework. I would appreciate your thoughts on this.

Sure, it makes sense, thanks @Gaganjuneja. Counters are the most basic metric of all; it would be great to unify the API to support labels / tags / ... and other meta concepts that are applicable to all metrics.

@Gaganjuneja (Contributor, Author) commented:

@reta
I have analyzed the histogram/timer support in different solutions like OTel, Dropwizard, and Micrometer, and based on that I am proposing the following APIs for histograms.

Histogram createHistogram(String name, String description, String unit);

This is a generic API where the histogram buckets can be automatically provided by the implementation. Most likely we can go for the exponential histogram in the case of OpenTelemetry.

Histogram createHistogram(String name, String description, String unit, List<Double> buckets);

Here, users can provide their own list of explicit buckets. This gives more control to the user in case they want to track specific boundary ranges.

@reta (Collaborator) commented Jan 23, 2024:

Thanks @Gaganjuneja, I am not sure this is sufficient to understand what you are suggesting; what is Histogram? Could you please share something complete?

@Gaganjuneja (Contributor, Author) commented:

Understanding the distribution of response times for OpenSearch operations is crucial for assessing performance. Unlike average or median values, percentiles shed light on the tail end of the latency distribution, offering insights into user experience during extreme scenarios. For example, the 95th percentile represents the response time below which 95% of requests fall, providing a more realistic reflection of the typical user experience than average latency alone. This information is invaluable for system administrators and developers in identifying and addressing performance bottlenecks.

Various methods exist for calculating percentiles, with one straightforward approach involving the computation of several percentiles based on all data points. However, this method can be resource-intensive, requiring access to all data points. Alternatively, the use of a Histogram is an option. A Histogram stores the frequencies of different ranges or bins of values in a dataset.

Constructing a histogram typically involves the following steps:

  1. Data Binning:

    • The range of values in the dataset is divided into contiguous and non-overlapping intervals or bins.
  2. Counting Frequencies:

    • The number of data points falling within each bin is counted.
  3. Plotting:

    • Bins are represented on the horizontal axis, with corresponding frequencies (or percentages) plotted on the vertical axis.
    • The height of each bar reflects the frequency of data points in that bin.

Histograms are particularly useful for visualizing the distribution of continuous data, providing insights into central tendency, spread, skewness, and potential outliers. Patterns such as normal distribution, skewed distribution, or bimodal distribution can be easily identified, aiding in data interpretation.

In OpenTelemetry, histograms are a metric type that allows observation of the statistical distribution of a set of values over time, offering insights into spread, central tendency, and data shape.

Other telemetry tools like Dropwizard and Micrometer also support histograms. The major difference lies in how they define buckets:

  1. Otel provides two approaches: a) explicit bucketing, where users define their buckets, and b) exponential bucketing, where Otel automatically fits the distribution in a fixed number of buckets using scale and max buckets factor.
  2. Dropwizard uses an exponentially decaying reservoir to sample values for histograms, allowing efficient estimation of quantiles.
  3. Micrometer uses dynamic bucketing, adjusting bucket sizes automatically based on observed values.

To unify Histogram APIs across implementations, the following APIs are proposed:

Automatic Buckets approach - A generic API where histogram buckets are automatically provided by the implementation, likely following the exponential histogram option from Otel.

Histogram createHistogram(String name, String description, String unit);

Explicit bucket approach - Users can provide their list of explicit buckets, offering more control in tracking specific boundary ranges.

Histogram createHistogram(String name, String description, String unit, List<Double> buckets);
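
As a concrete sketch of how the two factory methods might be called (assuming a meter reference in scope and java.util.List; the metric names and bucket values below are illustrative only):

// Automatic buckets: boundaries are chosen by the implementation
// (e.g. the OTel exponential histogram).
Histogram phaseLatency = meter.createHistogram(
    "search.phase.latency", "Per-phase search latency", "ms");

// Explicit buckets: the caller controls the boundaries,
// e.g. latency thresholds in milliseconds.
Histogram requestLatency = meter.createHistogram(
    "search.request.latency", "End-to-end request latency", "ms",
    List.of(10.0, 50.0, 100.0, 250.0, 500.0, 1000.0));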

@reta (Collaborator) commented Jan 24, 2024:

Thanks for the details, but I still don't see the API. What is Histogram in Histogram createHistogram(String name, String description, String unit, List<Double> buckets);? Could you please share the complete API skeleton so we can understand what capabilities we would like to support. Thank you

@Gaganjuneja (Contributor, Author) commented:

Thanks for the details, but I still don't see the API. What is Histogram in Histogram createHistogram(String name, String description, String unit, List<Double> buckets);? Could you please share the complete API skeleton so we can understand what capabilities we would like to support. Thank you

Sure, I took it literally :)

@Gaganjuneja (Contributor, Author) commented:

As mentioned above, createHistogram will create the Histogram metric instrument.

Histogram is a metric instrument, like Counter, which helps in recording values and builds the histogram underneath. For now, this will be implemented to delegate the record calls to OTel.

public interface Histogram {

    /**
     * record the value.
     * @param value value to be recorded.
     */
    void record(double value);

    /**
     * record value along with the tags.
     *
     * @param value value to be recorded.
     * @param tags  attributes/dimensions of the metric.
     */
    void record(double value, Tags tags);

}
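
A short usage sketch of the Histogram instrument above (the meter reference and the builder-style Tags calls are assumptions, not part of the proposal itself):

// Record per-phase latency; the implementation aggregates values into buckets
// per unique tag combination.
Histogram phaseTook = meter.createHistogram(
    "search.phase.took", "Latency of a search phase", "ms");

phaseTook.record(42.0);                                         // no dimensions
phaseTook.record(42.0, Tags.create().addTag("phase", "query")); // with a dimension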

@Gaganjuneja (Contributor, Author) commented:

@reta, your thoughts here?

@reta (Collaborator) commented Jan 26, 2024:

@reta, your thoughts here?

I am trying to figure out 2 things here:

  1. Not all metric providers allow parameterizing buckets.
  2. Adding tags per value, void record(double value, Tags tags); (the same applies to Counters), looks very unique to OpenTelemetry Metrics (none of the others support it). How does it work?

@Gaganjuneja (Contributor, Author) commented:

  1. Not all metric providers allow parameterizing buckets.

Yes, most of the tools provide dynamic bucketing or percentile calculations. For now, we can keep the API simple and provide this configuration at the metric-provider level. We can live with a single API for now.

Histogram createHistogram(String name, String description, String unit);

Adding tags per value, void record(double value, Tags tags); (the same applies to Counters), looks very unique to OpenTelemetry Metrics (none of the others support it). How does it work?

Yes, it is more generalised in OTel, but we need this feature to create dimensions based on data like index name or shardId. There are ways to achieve this in other tools by overriding the metrics registry and storage, where we can explicitly keep the metrics at the dimension level. OTel provides this out of the box.
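
For example, a single counter could then fan out into one exported series per unique tag combination; a hedged sketch, assuming Counter exposes add(double, Tags) mirroring Histogram.record(double, Tags):

Counter queryCount = meter.createCounter(
    "search.query.count", "Number of queries per index", "1");

// Each distinct tag combination becomes its own data point in the exported output.
queryCount.add(1.0, Tags.create().addTag("index", "logs-2024"));
queryCount.add(1.0, Tags.create().addTag("index", "metrics-2024"));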

@reta (Collaborator) commented Jan 26, 2024:

Yes, it is more generalised in OTel, but we need this feature to create dimensions based on data like index name or shardId.

The purpose of tags is clear, but this is usually done at the per-meter level (histogram, counter, ...). I am trying to understand how it works at the value level - would it be represented by multiple meters at the end? (We would need to add the javadocs to explain the behaviour anyway.)

@Gaganjuneja (Contributor, Author) commented:

Yes, it is more generalised in OTel, but we need this feature to create dimensions based on data like index name or shardId.

The purpose of tags is clear, but this is usually done at the per-meter level (histogram, counter, ...). I am trying to understand how it works at the value level - would it be represented by multiple meters at the end? (We would need to add the javadocs to explain the behaviour anyway.)

Sure, I will add the javadocs. It works like this: it creates a Map<Meter, Map<UniqueTagsList, value>>. There is a distinct value per meter and unique tag combination, so yes, we can say that it's represented by multiple meters at the end. This is a sample exported output of the OTel metrics.

{
    "resourceMetrics": [{
        "resource": {
            "attributes": [{
                "key": "service.name",
                "value": {
                    "stringValue": "OpenSearch"
                }
            }]
        },
        "scopeMetrics": [{
            "scope": {
                "name": "org.opensearch.telemetry"
            },
            "metrics": [{
                "name": "search.query.type.range.count",
                "description": "Counter for the number of top level and nested range search queries",
                "unit": "1",
                "sum": {
                    "dataPoints": [{
                        "attributes": [{
                            "key": "level",
                            "value": {
                                "intValue": "1"
                            }
                        }],
                        "startTimeUnixNano": "1698166025536126716",
                        "timeUnixNano": "1698166085536127266",
                        "asDouble": 4
                    }, {
                        "attributes": [{
                            "key": "level",
                            "value": {
                                "intValue": "0"
                            }
                        }],
                        "startTimeUnixNano": "1698166025536126716",
                        "timeUnixNano": "1698166085536127266",
                        "asDouble": 2
                    }],
                    "aggregationTemporality": 1,
                    "isMonotonic": true
                }
            }]
        }]
    }]
}

@reta (Collaborator) commented Jan 26, 2024:

Got it, thank you, I think we are good to go with Histograms :)

@Gaganjuneja (Contributor, Author) commented:

Extending it further to add support for Gauge, which is the current value at the time it is read. Gauges can be synchronous as well as asynchronous.

Synchronous Gauge - A synchronous Gauge is normally used when the measurements are exposed via a subscription to change events.
Asynchronous Gauge - When the measurement is exposed via an accessor, use an asynchronous Gauge to invoke the accessor in a callback function.

I think an asynchronous Gauge makes more sense in our use cases for recording resource utilisation in a periodic manner. It can also be used to capture queue size at any point in time, etc. Anyway, the synchronous Gauge is experimental in OpenTelemetry (refer to the OTel discussion - open-telemetry/opentelemetry-java#6272).

Proposed API -

ObservableGauge createObservableGauge(String name, String description, String unit, ObservableInstrument observableInstrument);

ObservableInstrument

public interface ObservableInstrument {

    /**
     * Value supplier
     * @return supplier
     */
    Supplier<Double> getValue();

    /**
     * tags
     * @return tags
     */
    Tags getTags();

}

ObservableGauge

public interface ObservableGauge {

    /**
     * removes the registered Observable
     */
    void remove();

}
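
A hedged usage sketch for the proposed API (assumes a meter reference, java.util.function.Supplier, and the Tags builder used earlier in this thread):

// Register a JVM heap gauge; the implementation invokes the supplier on its own cadence.
ObservableInstrument heapUsed = new ObservableInstrument() {
    @Override
    public Supplier<Double> getValue() {
        return () -> (double) (Runtime.getRuntime().totalMemory()
            - Runtime.getRuntime().freeMemory());
    }

    @Override
    public Tags getTags() {
        return Tags.create().addTag("source", "jvm");
    }
};

ObservableGauge heapGauge = meter.createObservableGauge(
    "jvm.heap.used", "Currently used JVM heap", "bytes", heapUsed);

// Later, when the owning component shuts down:
heapGauge.remove();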

@reta, your thoughts here?

@reta (Collaborator) commented Mar 11, 2024:

@Gaganjuneja hm ... the gauge part is looking like an option, but I believe we should not be driven by the OTel APIs but keep our APIs consistent instead:

Disposable createDoubleGauge(String name, String description, String unit, Supplier<Double> valueProvider, Tags tags);

The Disposable could be modelled after reactor.core.Disposable & Co (reactive streams) - it is not specific to gauges (or any meter in general), but indicates a general resource (a meter in this case) with an explicit lifecycle. We could also just use Closeable instead.

The other question I have is that it is inconsistent with the other meters: those accept tags per value. I think we should stick to this for gauges as well.

@Gaganjuneja (Contributor, Author) commented:

Thanks @reta for reviewing.

  1. Agree on Disposable/Closeable.
  2. Tags - Gauge is an asynchronous meter. Asynchronous counters would also have a similar signature. As it's going to be registered and scheduled, the tags per value would remain the same. To make the API consistent, we will have to provide them either as part of the supplier or in the way you suggested.

I think we can go ahead with this for now.

Disposable createDoubleGauge(String name, String description, String unit, Supplier<Double> valueProvider, Tags tags);
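
For comparison, a sketch of how this Supplier-based shape might look at a call site (the OperatingSystemMXBean usage and tag values are illustrative; disposal naming depends on whether Disposable or Closeable is chosen):

// Register a process CPU gauge via a value supplier; dispose it when the component closes.
com.sun.management.OperatingSystemMXBean osBean =
    (com.sun.management.OperatingSystemMXBean)
        java.lang.management.ManagementFactory.getOperatingSystemMXBean();

Disposable cpuGauge = meter.createDoubleGauge(
    "process.cpu.percent",
    "Process CPU utilisation",
    "percent",
    () -> osBean.getProcessCpuLoad() * 100.0,
    Tags.create().addTag("node", "node-1"));

// ...
cpuGauge.dispose(); // or close(), if Closeable from the Java standard library is used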

@reta (Collaborator) commented Mar 12, 2024:

Thanks @Gaganjuneja

  1. Agree on Disposable/Closeable.

Using Closeable would be preferred, I think - we could use the Java standard library.

@zalseryani commented:

Greetings @Gaganjuneja

Kindly, is there currently a way to export metrics/spans to SigNoz?

I mean that the configuration currently described in the docs states that gRPC only exports to localhost. Is there a way to configure OpenSearch to export metrics/spans to a specific endpoint, i.e. a configurable URL for a remote collector?

Thank you.

@Gaganjuneja (Contributor, Author) commented:

Exporting to an external SigNoz can be done through a local SigNoz/OTel collector for now. We can externalise this property as a config.

@zalseryani commented Apr 17, 2024:

@Gaganjuneja
Also, I want to mention that we tried to configure metrics and spans to local log files. The files are being created, but no logs are added to those files; both remain empty even when executing some queries on indices.

we followed the documentation step by step and configured everything related.

any suggestions?

OpenSearch is deployed with the Helm chart; we tested it on versions 2.12 and 2.13.

Thank you.

@Gaganjuneja (Contributor, Author) commented:

@zalseryani, strange. Could you please share the exact steps you followed and the values for the following settings:

  1. telemetry.tracer.enabled
  2. telemetry.feature.tracer.enabled
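
For reference, a sketch of how these could appear in opensearch.yml (setting names as referenced in this thread; exact names, scope, and defaults may differ between OpenSearch versions):

# opensearch.yml (sketch)
telemetry.feature.tracer.enabled: true   # feature flag referenced above
telemetry.tracer.enabled: true           # runtime enable/disable switch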

@lukas-vlcek (Contributor) commented:

Hi,

I am trying to understand integration with Prometheus.
As far as I understand, Prometheus strongly prefers a pull model (i.e. the Prometheus server is the one to scrape the target node(s)). Although there seem to be ways to push metrics to Prometheus, this comes with operational and configuration overhead and usually requires sidecar instances.

What is the plan to enable integration with Prometheus? What is the role of the Collector box in the schema above? Is that a "user managed" component? Is that the component that would be scraped by Prometheus, or the one that would eventually push metrics to Prometheus?

What is the function of the Periodic exporter? Does it periodically push the in-memory cache of collected metrics to the Collector?

When the metric store/sink requires a specific data format, who is responsible for the conversion, and when does it happen?

@zalseryani commented Apr 22, 2024:

@Gaganjuneja

Thank you for your time. We were missing the second config you shared with us, telemetry.feature.tracer.enabled; it is worth noting that this one was not mentioned in the docs.

  • distributed tracing

  • We also found another typo in the docs related to the traces class for the gRPC protocol. It is mentioned incorrectly; it should be io.opentelemetry.exporter.otlp.metrics.OtlpGrpcMetricExporter instead of org.opensearch.telemetry.tracing.exporter.OtlpGrpcSpanExporterProvider, because if we use the latter one, we get the following error in the OpenSearch logs saying that the class was not found.

uncaught exception in thread [main]
java.lang.IllegalArgumentException: Failed to parse value [org.opensearch.telemetry.tracing.exporter.OtlpGrpcSpanExporterProvider] for setting [telemetry.otel.tracer.span.exporter.class]
Likely root cause: java.lang.ClassNotFoundException: org.opensearch.telemetry.tracing.exporter.OtlpGrpcSpanExporterProvider
	at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445)
	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:593)
	at java.base/java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:872)
	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526)
	at org.opensearch.telemetry.OTelTelemetrySettings.lambda$static$0(OTelTelemetrySettings.java:86)
	at java.base/java.security.AccessController.doPrivileged(AccessController.java:571)
	at org.opensearch.telemetry.OTelTelemetrySettings.lambda$static$1(OTelTelemetrySettings.java:84)
	at org.opensearch.common.settings.Setting.get(Setting.java:483)
	at org.opensearch.common.settings.Setting.get(Setting.java:477)
	at org.opensearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:644)
	at org.opensearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:549)
	at org.opensearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:519)
	at org.opensearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:489)
	at org.opensearch.common.settings.SettingsModule.<init>(SettingsModule.java:178)
	at org.opensearch.node.Node.<init>(Node.java:591)
	at org.opensearch.node.Node.<init>(Node.java:417)
	at org.opensearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:242)
	at org.opensearch.bootstrap.Bootstrap.setup(Bootstrap.java:242)
	at org.opensearch.bootstrap.Bootstrap.init(Bootstrap.java:404)
	at org.opensearch.bootstrap.OpenSearch.init(OpenSearch.java:181)
	at org.opensearch.bootstrap.OpenSearch.execute(OpenSearch.java:172)
	at org.opensearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:104)
	at org.opensearch.cli.Command.mainWithoutErrorHandling(Command.java:138)
	at org.opensearch.cli.Command.main(Command.java:101)
	at org.opensearch.bootstrap.OpenSearch.main(OpenSearch.java:138)
	at org.opensearch.bootstrap.OpenSearch.main(OpenSearch.java:104)
For complete error details, refer to the log at /usr/share/opensearch/logs/docker-cluster.log
  • One more thing, please: regarding the gRPC traces with a remote connection (instead of localhost), you told me in the comment above that "We can externalize this property as a config" - is it configurable now, or did you mean it would be done in the future?

Appreciate your time and efforts :)

@Gaganjuneja (Contributor, Author) commented:

@zalseryani, Thanks for your note. We can do it in the upcoming release. Would you mind creating an issue for this?
