New Component: Encoding Extension #28686

Closed
MovieStoreGuy opened this issue Oct 11, 2022 · 15 comments

Comments
@MovieStoreGuy
Contributor

Is your feature request related to a problem? Please describe.
As one of the code owners for awskinesisexporter and kafkaexporter, I regularly see issues raised asking if we can support X encoding standard for the messages being written to these services.

My typical response is that I am happy to accept any open data format standard, but any vendor-specific encoding would need to be moved to a vendor-specific exporter, e.g. cloudwatchawskinesisexporter for CloudWatch-specific encoding.

This is problematic because it gives onboarding new or existing encodings for a given vendor (or even versioned encodings of currently supported ones) a high barrier to entry, and these vendor-specific components end up reimplementing existing code in some cases.

Describe the solution you'd like
What I am currently thinking is that the main opentelemetry-collector defines:

  • An Encoding extension, to allow supporting components (receivers, exporters) to know what encodings they have access to
  • An Encoding factory that new encodings can be registered with on application start (the same way current components are registered)
  • A httpreceiver and httpexporter that can accept the given encoding on a given endpoint, e.g. (receiver endpoint = /v1/zipkin/ingestion, encoding = otlpproto)
  • An Encoding interface that defines version, name, and the Marshal* methods (I can't remember the exact names of the pdata marshaler's methods); a rough sketch follows below

The reason for including a version is that it can be bumped as a side effect of dependency updates in a given release of the collector(-contrib), and downstream systems may rely on the encoding being a fixed version.
(There was an open issue for kafkaexporter that was this case, I will need to find it to link it back)
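
To make the intended shape concrete, below is a minimal sketch of such an Encoding interface, assuming hypothetical Name and Version methods layered on the existing pdata marshaler interfaces (only traces shown; this is illustrative, not an agreed design):

```go
package encoding

import "go.opentelemetry.io/collector/pdata/ptrace"

// Encoding is an illustrative sketch only: an encoding exposes its name
// and version alongside the pdata (un)marshaling behaviour. Equivalent
// interfaces would exist for metrics and logs.
type Encoding interface {
	// Name identifies the encoding, e.g. "otlp_proto" or "zipkin_json".
	Name() string
	// Version identifies the wire version so downstream systems can rely
	// on a fixed format across collector releases.
	Version() string

	ptrace.Marshaler
	ptrace.Unmarshaler
}
```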

Describe alternatives you've considered
I'm not sure of any alternatives that would allow for reuse of encodings across packages, but we could continue to have vendor-specific exporters that extend base receivers.

Additional context

A nice little side effect of this approach is that vendor exporters could just be config aliases with the expected configuration.
E.g.:

  • influxdbexporter can just define the static config fields that set the encoding to be one of influxdbproto or influxdbjson, with a default exporter endpoint and security headers.
@atoulme
Contributor

atoulme commented Mar 25, 2023

It would be helpful for our use cases too. I support this move. I see it with azureeventhubreceiver as well; we need the ability to reuse Kafka codecs.

MovieStoreGuy changed the title from "Encoding integrations" to "New Component: Encoding Extension" on Mar 25, 2023
@MovieStoreGuy
Contributor Author

To help give a more technical overview and collect the ideas that I have, I will brain dump them here and start getting feedback:

Overall Goal

The encoding extension is to provide at its core:

  • A standard means of sharing codecs between receivers and exporters
  • A means of versioning codecs to improve wire stability

This is to help ensure that, across collector versions, a package upgrade doesn't break backwards compatibility when trying to produce/consume data. (This is highly important for streaming applications like Kafka, Kinesis, and Pulsar.)

This should be considered safe enough to be part of core and contrib distributions of the collector.

Stretch Goals

I haven't looked at the feasibility in considerable detail, but the idea is:

  • Externally supplied codecs

The idea here is that you could theoretically connect to something like AWS Glue or Kafka's schema registry, where these codecs can be externally managed and implemented within the extension.

The concerns I have about feasibility here are:

  • Performance
  • Mappings

It has been requested a few times that the Kafka exporter provide a user-supplied mapping from OTLP to a format of the user's choice, which isn't possible today.

Outline

In order to implement this component, two things are required:

  1. Interface definitions for codecs
  2. An access pattern to pass versioned codecs to components

To achieve the first goal, a codec would have the following interface:

package codec

import (
    "go.opentelemetry.io/collector/pdata/plog"
    "go.opentelemetry.io/collector/pdata/pmetric"
    "go.opentelemetry.io/collector/pdata/ptrace"
)

// Metric is a codec that can both marshal and unmarshal pmetric data.
type Metric interface {
    pmetric.Marshaler
    pmetric.Unmarshaler
}

// Log is a codec that can both marshal and unmarshal plog data.
type Log interface {
    plog.Marshaler
    plog.Unmarshaler
}

// Trace is a codec that can both marshal and unmarshal ptrace data.
type Trace interface {
    ptrace.Marshaler
    ptrace.Unmarshaler
}

The rationale for this approach is that these interfaces already exist and require minimal effort to implement for the OTLP codec, and it has been shown to be repeatable across the project with minimal code changes.
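
As a quick usage illustration (a self-contained sketch, not project code), the example below wraps the existing OTLP proto (un)marshalers from pdata to satisfy a codec.Trace-style interface and round-trips a trace through it; the otlpTraceCodec type is hypothetical:

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/collector/pdata/ptrace"
)

// Trace mirrors the codec.Trace interface sketched above.
type Trace interface {
	ptrace.Marshaler
	ptrace.Unmarshaler
}

// otlpTraceCodec is a hypothetical codec backed by the existing
// OTLP protobuf marshaler/unmarshaler from pdata.
type otlpTraceCodec struct {
	ptrace.ProtoMarshaler
	ptrace.ProtoUnmarshaler
}

func main() {
	var codec Trace = &otlpTraceCodec{}

	td := ptrace.NewTraces()
	td.ResourceSpans().AppendEmpty().ScopeSpans().AppendEmpty().Spans().AppendEmpty().SetName("example")

	// An exporter would call MarshalTraces before writing to the wire;
	// a receiver would call UnmarshalTraces on the received bytes.
	buf, err := codec.MarshalTraces(td)
	if err != nil {
		panic(err)
	}
	back, err := codec.UnmarshalTraces(buf)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(buf), back.SpanCount())
}
```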

(Optional): An additional higher-order type of codec could be defined that allows for length-delimited encoding, i.e.:

message LengthDelimited {
    // Group, if set, indicates that the original data was broken up
    // and needs to be merged back together by `Order` value.
    int64 group = 1;
    // Count defines how many messages are expected
    // in order to reconstruct the original value.
    int64 count = 2;
    // Order defines where this message should be
    // placed in order to reconstruct the original value.
    int64 order = 3;
    // Data holds the actual codec payload being transported.
    bytes data = 4;
}

On the receiver side, this means that if a group is set, the receiver would need to wait for some configured amount of time to reconstruct the original message, but the logic would roughly be:

func (rec *receiver) ProcessMessage(msg LengthDelimited) error {
  if msg.group == 0 {
    // The data didn't need to be split in order to be transported.
    return rec.emit(rec.codec.Unmarshal(msg.data))
  }
  msgs := append(rec.group[msg.group], msg)
  if len(msgs) != int(msg.count) {
    // Still waiting on the remaining parts of this group.
    rec.group[msg.group] = msgs
    return nil
  }
  // All parts have arrived; reassemble them in their original order.
  sort.Slice(msgs, func(i, j int) bool {
    return msgs[i].order < msgs[j].order
  })
  var data bytes.Buffer
  for _, m := range msgs {
    data.Write(m.data)
  }
  delete(rec.group, msg.group)
  return rec.emit(rec.codec.Unmarshal(data.Bytes()))
}

The above omits the group-by-timeout handling, but it is mostly the logic required for the receiver to join the values back together.

The rationale for why I'd like this is that exporters with fixed byte-size limits (like Kafka and Kinesis) no longer have to bin-pack each value in the marshalled codec format and potentially drop values that are too big (which is becoming more and more likely as logging becomes more widely adopted).
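
As a sketch of the exporter side of that idea, the snippet below splits an oversized marshaled payload into ordered LengthDelimited parts under a fixed byte limit; the Go struct simply mirrors the proto message above and would normally be generated from it, and the field and function names are assumptions for illustration:

```go
package main

import "fmt"

// LengthDelimited mirrors the proto message above; in practice it would
// be generated from the .proto definition.
type LengthDelimited struct {
	Group int64
	Count int64
	Order int64
	Data  []byte
}

// split breaks a marshaled payload into ordered chunks no larger than
// limit bytes, tagging them with a shared group identifier so the
// receiver can reassemble them.
func split(payload []byte, limit int, group int64) []LengthDelimited {
	if len(payload) <= limit {
		// Small enough to send as-is; group 0 signals "not split".
		return []LengthDelimited{{Group: 0, Count: 1, Order: 0, Data: payload}}
	}
	var parts []LengthDelimited
	for start := 0; start < len(payload); start += limit {
		end := start + limit
		if end > len(payload) {
			end = len(payload)
		}
		parts = append(parts, LengthDelimited{
			Group: group,
			Order: int64(len(parts)),
			Data:  payload[start:end],
		})
	}
	for i := range parts {
		parts[i].Count = int64(len(parts))
	}
	return parts
}

func main() {
	parts := split(make([]byte, 2500), 1000, 42)
	fmt.Println(len(parts)) // 3 chunks of at most 1000 bytes each
}
```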

A nice side effect of this is that we can remove the MarshalSizer logic from pdata, since anything with a maximum allowed byte limit can use the higher-order abstraction of a fixed-byte-limit encoder.

Furthermore, the reason I want to define this in the codec package, instead of leveraging something more native to each service, is to reduce the complexity required to send data and to keep it open to use outside the collector for that flexibility.

The extension can be simply defined as:

package codec

type Type struct {
   // Name identifies the requested codec.
   Name    string
   // Version defines the specific codec version required.
   // If the value is blank, it is assumed to be the latest.
   Version string
}

package encodingextension

type CodecFactory interface {
   // NewCodec creates a new codec of the given type
   // and returns an error if the name and/or version doesn't exist.
   NewCodec(ct codec.Type) (codec.Codec, error)
}

The codecs that are accessible within the factory are those defined in pkg/codec (I believe this is where they currently exist).
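
As a sketch of the access pattern (not a final API), a receiver could locate the extension through component.Host during Start and type-assert the factory interface; apart from host.GetExtensions, every name below is a placeholder I've made up for illustration:

```go
package examplereceiver

import (
	"context"
	"errors"

	"go.opentelemetry.io/collector/component"
)

// Type and Codec stand in for the codec.Type / codec.Codec sketched above.
type Type struct {
	Name    string
	Version string
}

type Codec interface{} // placeholder for the trace/metric/log codec interfaces

// CodecFactory is the access pattern sketched above.
type CodecFactory interface {
	NewCodec(ct Type) (Codec, error)
}

type exampleReceiver struct {
	codec Codec
}

// Start resolves a versioned codec from whichever configured extension
// implements CodecFactory. The codec name/version requested here are
// illustrative.
func (r *exampleReceiver) Start(_ context.Context, host component.Host) error {
	for _, ext := range host.GetExtensions() {
		if factory, ok := ext.(CodecFactory); ok {
			c, err := factory.NewCodec(Type{Name: "zipkin", Version: "v1"})
			if err != nil {
				return err
			}
			r.codec = c
			return nil
		}
	}
	return errors.New("no encoding extension providing a CodecFactory was configured")
}
```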

Future impact

These are rather loose thoughts I've come up with that could help extend the project further and reduce the potential toil of developing it.

Removal of vendor specific receivers or exporters to aliases

In rather abstract terms, all receivers / exporters are effectively transport + codec.
The idea here is that we can provide an http, tcp, and udp receiver and exporter that each expect a codec (plus some extra config for auth and such).
Meaning that if we have a codec for zipkin, jaeger, otlp, or a vendor format, we can very quickly have components that implement the required translation for service X without needing to recreate the existing transport with its respective lifecycle.

Now, this does mean it shifts a lot of the configuration onto the user, since you would now have to pick transport + codec (plus any extras like auth and such). This is where the idea of a Component Alias would be handy, since you could define an alias for a specific vendor that abstracts the required config and works as if it were the originally implemented component, e.g.:

---
receivers:
   chrony: # This could be an alias for what is defined below
      endpoint: unix:///var/run/chronyd.sock
      timeout: 10s
   udp:
      endpoint: unix:///var/run/chronyd.sock
      timeout: 10s
      socket_type: AF_UNIX
      codec:
         name: chrony

These would be considered equivalent in their implementation.

@aboguszewski-sumo
Member

@MovieStoreGuy this looks promising and we would also benefit from adding it. A few questions though:

  • Will it be possible to define codecs as standalone components and just include them in an OTC distro, like is currently done with other components? It seems like a good way to pass versioned codecs to components, but I don't know the internals of otelcol well yet.
  • How do you envision the development of it? We currently want to add a custom marshaller to the AWS S3 exporter and we want to do this as soon as possible, so we will probably need to fork the component to do this (because adding it to the upstream wouldn't make much sense if such a feature is coming). When do you think that, for example, a demo interface will be available, so that we can write it in such a way that the transition to this extension will be as painless as possible? Additionally, we would probably be able to give some help in implementing the extension.
  • Do you think it would be possible to chain codecs? For example, in our case above, we would use a transport exporter, then the s3 codec and our custom codec (in whatever order is appropriate).

@MovieStoreGuy
Contributor Author

Will it be possible to define codecs as standalone components and just include them in an OTC distro, like it is currently done with other components?

Technically possible, since I imagine each would be its own package. Not too sure if the packages are public facing though. I should clarify: when I say versioned codecs, I mean following how Go intends you to handle breaking versioning as defined by its Module release page, though maybe not as strictly, so the project doesn't need to release special tags for codecs but rather each package has definitions of v1, v2, vX, etc.
Similar to the semconv package, if you're familiar with it.
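
For illustration, the semconv precedent works like this today: the consumer pins the exact revision purely through the versioned package path it imports, and codec packages could follow the same layout (the codec package paths themselves are still to be decided, so none are shown here):

```go
package main

import (
	"fmt"

	// The collector already versions semantic conventions by package path;
	// a codec package could follow the same pattern (e.g. a hypothetical
	// pkg/codec/zipkin/v1, pkg/codec/zipkin/v2 layout).
	semconv "go.opentelemetry.io/collector/semconv/v1.9.0"
)

func main() {
	// The consumer is pinned to the v1.9.0 definitions purely by the
	// import path it chose above.
	fmt.Println(semconv.AttributeServiceName)
}
```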

How do you envision the development of it?

My plan was to start on it this week, and I have a draft locally. Just trying to do all the wiring.

Do you think it would be possible to chain codecs?

This makes me nervous, mostly because converting outside the internal format of the collector means some sort of dynamic adapter, which I'd rather not do for the sake of keeping things simple.

The way I imagine components to consume the extension would look something like this:

extensions:
  encoding:
    # some time in the distance future, references remote schema sources

exporters:
  awss3:
    encoding: zipkin/v1-json
    compression: zflate

It might make sense to have a wire format field that is something like protobuf, json, text, etc.

@MovieStoreGuy
Contributor Author

Linking issue for codecs: #21152

@atoulme
Contributor

atoulme commented Jun 7, 2023

Do you need a sponsor @MovieStoreGuy ? Is core the right repo or is contrib easier?

MovieStoreGuy referenced this issue Jul 10, 2023
**Description:**

Adding the required steps to help manage a new component and reduce the
amount of noise generated in one PR.

**Link to tracking Issue:** 

https://github.com/open-telemetry/opentelemetry-collector/issues/6272

**Testing:** 
NA

**Documentation:**

Added the rough outline of the component development to provide more of
a public idea of the project outline.
Not strictly tied to it since it is more a user facing version of the
linked issue.
@MovieStoreGuy
Contributor Author

It would be nice to have another lead sponsor this to help keep the ideas flowing.

For the time being, I think it makes sense to keep it in contrib to avoid code duplication.

@dmitryax
Member

I missed this proposal. It looks pretty promising and has some overlap with the work that I'm doing as part of open-telemetry/opentelemetry-collector#8122. I believe we can work independently up to introducing LengthDelimited. Then we should look into possible ways to converge.

@atoulme
Contributor

atoulme commented Sep 25, 2023

IMO we're a ways away from getting to LengthDelimited. There's a lot of work to get the extension adopted across our components, and specifically to reduce our reliance on the same marshalers and unmarshalers across multiple components right now.

MovieStoreGuy referenced this issue Sep 29, 2023
**Description:** 
Implement a first codec for the encoding extension.

**Link to tracking Issue:**
https://github.com/open-telemetry/opentelemetry-collector/issues/6272

**Testing:**
Unit tests.

**Documentation:**
Some package docs.

---------

Co-authored-by: Dmitrii Anoshin <anoshindx@gmail.com>
dmitryax referenced this issue Oct 19, 2023
…#27484)

**Description:** Create a new extension for JSON. This will be used in
pulsarreceiver/kafkareceiver to populate the log record's map from the
raw body.

**Link to tracking Issue:**
[6272](https://github.com/open-telemetry/opentelemetry-collector/issues/6272)

---------

Co-authored-by: Dmitrii Anoshin <anoshindx@gmail.com>
martin-majlis-s1 referenced this issue in scalyr/opentelemetry-collector-contrib Oct 20, 2023
…open-telemetry#27484)

**Description:** Create a new extension for JSON. This will be used in
pulsarreceiver/kafkareceiver to populate the log record's map from the
raw body.

**Link to tracking Issue:**
[6272](https://github.com/open-telemetry/opentelemetry-collector/issues/6272)

---------

Co-authored-by: Dmitrii Anoshin <anoshindx@gmail.com>
@yurishkuro
Member

Clarification: do I understand correctly that this is not a new component type, just a regular extension? I.e. in order to use it another component would need to look up this extension by ID, and cast it to a specific interface, right?

@dmitryax
Member

Clarification: do I understand correctly that this is not a new component type, just a regular extension? I.e. in order to use it another component would need to look up this extension by ID, and cast it to a specific interface, right?

That's correct. We drifted from the design defined in this issue, though. Encoding is now a class of extensions implementing Marshal/Unmarshal, not a single extension. Components can point to any of those to get a specific format. The discussion to follow is in #27092.
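
To make that concrete, here is a simplified sketch of the lookup-and-cast pattern (the real encoding extensions in contrib define their own interfaces; this only shows the idea using the pdata unmarshaler interface, and the function name is illustrative):

```go
package example

import (
	"fmt"

	"go.opentelemetry.io/collector/component"
	"go.opentelemetry.io/collector/pdata/plog"
)

// logsUnmarshalerFromHost looks up the configured encoding extension by ID
// and asserts the pdata interface the component needs. The exact ID/config
// plumbing in the real components differs; this only shows the pattern.
func logsUnmarshalerFromHost(host component.Host, id component.ID) (plog.Unmarshaler, error) {
	ext, ok := host.GetExtensions()[id]
	if !ok {
		return nil, fmt.Errorf("encoding extension %q is not configured", id)
	}
	unmarshaler, ok := ext.(plog.Unmarshaler)
	if !ok {
		return nil, fmt.Errorf("extension %q does not implement plog.Unmarshaler", id)
	}
	return unmarshaler, nil
}
```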

@yurishkuro
Member

@MovieStoreGuy @dmitryax what about the startup order of extensions, is that supported somehow? E.g. if one uses a tap / websocket extension, those could also benefit from encoding extensions, but the encoding extensions would need to be initialized before others.

@dmitryax
Member

We have never had any extensions depending on each other so far. The encoding extensions are meant to be used by receivers and exporters. A tap / websocket extension should be a processor, AFAIK.

sigilioso referenced this issue in carlossscastro/opentelemetry-collector-contrib Oct 27, 2023
…open-telemetry#27484)

**Description:** Create a new extension for JSON. This will be used in
pulsarreceiver/kafkareceiver to populate the log record's map from the
raw body.

**Link to tracking Issue:**
[6272](https://github.com/open-telemetry/opentelemetry-collector/issues/6272)

---------

Co-authored-by: Dmitrii Anoshin <anoshindx@gmail.com>
dmitryax transferred this issue from open-telemetry/opentelemetry-collector on Oct 30, 2023
dmitryax added a commit that referenced this issue Oct 30, 2023
…8688)

**Description:** We should have explicit interfaces for the encoding
extensions, which should be used by the receivers/exporters instead of
marshallers and unmarshallers

**Link to tracking Issue:**
#28686
dmitryax added a commit that referenced this issue Nov 1, 2023
**Description:** Fix bug when err is nil if an invalid version value is
supplied.

**Link to tracking Issue:** #28686

---------

Co-authored-by: Dmitrii Anoshin <anoshindx@gmail.com>
jmsnll pushed a commit to jmsnll/opentelemetry-collector-contrib that referenced this issue Nov 12, 2023
…en-telemetry#28688)

**Description:** We should have explicit interfaces for the encoding
extensions, which should be used by the receivers/exporters instead of
marshallers and unmarshallers

**Link to tracking Issue:**
open-telemetry#28686
jmsnll pushed a commit to jmsnll/opentelemetry-collector-contrib that referenced this issue Nov 12, 2023
…8689)

**Description:** Fix bug when err is nil if an invalid version value is
supplied.

**Link to tracking Issue:** open-telemetry#28686

---------

Co-authored-by: Dmitrii Anoshin <anoshindx@gmail.com>
mackjmr added a commit to DataDog/opentelemetry-collector-contrib that referenced this issue Nov 13, 2023
* [receiver/collectd] Move to use HTTPServerSettings with collectdreceiver (open-telemetry#28812)

**Description:**
Overhauls collectdreceiver to use the latest config helper features

**Link to tracking Issue:**
Fixes open-telemetry#28811

**Documentation:**
No impact to docs. User interface remains the same.
Separate changelog to notice API breaking changes, as the Config struct
is changing.

---------

Co-authored-by: Dmitrii Anoshin <anoshindx@gmail.com>

* [chore][exporter/datadog] Re-enable TestTraceExporter (open-telemetry#28827)

Re-enable TestTraceExporter.

Fixes
open-telemetry#27630

Co-authored-by: Pablo Baeyens <pbaeyens31+github@gmail.com>

* [chore][receiver/hostmetrics] Skip process user error (un)muted test on non-Linux (open-telemetry#28829)

**Description:**
Fix open-telemetry#28828 - this is just disabling the test on non-Linux. The broken
test was introduced via open-telemetry#28661.

* [receiver/hostmetricsreceiver] Add support for cpu frequency metric (open-telemetry#27445)

**Description:** : Added support for host's cpu frequency as part
                   of the hostmetricsreceiver.

**Link to tracking Issue:** open-telemetry#26532

**Testing:**

1. Using the following configuration:
```yml
receivers:
  hostmetrics:
    collection_interval: 5s
    scrapers:
      cpu:
        metrics:
          system.cpu.frequency:
            enabled: true

processors:
  resourcedetection/system:
    detectors: ["system"]
    system:
      hostname_sources: ["lookup", "cname", "dns", "os"]
      resource_attributes:
        host.name:
          enabled: true
        host.id:
          enabled: true
        host.cpu.cache.l2.size:
          enabled: true
        host.cpu.family:
          enabled: true
        host.cpu.model.id:
          enabled: true
        host.cpu.model.name:
          enabled: true
        host.cpu.stepping:
          enabled: true
        host.cpu.vendor.id:
          enabled: true

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [file]
      processors: [resourcedetection/system]

exporters:
  file:
    path: ./output.json
```

2. Start the collector with ./bin/otelcontribcol_linux_amd64 --config
examples/host_config.yaml
3. The output reports the added metric successfully:

```json
{
   "resourceMetrics":[
      {
         "scopeMetrics":[
            {
               "scope":{
                  "name":"otelcol/hostmetricsreceiver/cpu",
                  "version":"0.85.0-dev"
               },
               "metrics":[
                  {
                     "name":"system.cpu.frequency",
                     "description":"Current frequency of the CPU core in MHz.",
                     "unit":"MHz",
                     "gauge":{
                        "dataPoints":[
                           {
                              "attributes":[
                                 {
                                    "key":"cpu",
                                    "value":{
                                       "stringValue":"cpu0"
                                    }
                                 }
                              ],
                              "startTimeUnixNano":"1696487580000000000",
                              "timeUnixNano":"1696512423758783158",
                              "asDouble":3000
                           },
                           {
                              "attributes":[
                                 {
                                    "key":"cpu",
                                    "value":{
                                       "stringValue":"cpu1"
                                    }
                                 }
                              ],
                              "startTimeUnixNano":"1696487580000000000",
                              "timeUnixNano":"1696512423758783158",
                              "asDouble":3000
                           },
...
```

Signed-off-by: ChrsMark <chrismarkou92@gmail.com>

* [encoding/zipkinencodingextension] add default case (open-telemetry#28689)

**Description:** Fix bug when err is nil if an invalid version value is
supplied.

**Link to tracking Issue:** open-telemetry#28686

---------

Co-authored-by: Dmitrii Anoshin <anoshindx@gmail.com>

* [chore] Upgrade cloud.google.com/go (open-telemetry#28840)

To resolve failing build-and-test/checks CI job

**Link to tracking Issue:**
open-telemetry#28839

* [connector/exceptions] Add trace id and span id to generated logs (open-telemetry#28670)

**Description:** <Describe what has changed.>
The current implementation generates logs from recorded exceptions in
spans, but is not possible to see which traces and spans generated those
logs. This PR adds that information to the logs

**Link to tracking Issue:** Fixes open-telemetry#24407

* [chore][exporter/loadbalancing] use headless service with DNS mode in K8S(open-telemetry#27014) (open-telemetry#28687)

**Description:** <Describe what has changed.>
fix
open-telemetry#27014
notice when in K8S, the DNS mode should config a headless service

**Link to tracking Issue:** <Issue number if applicable>
open-telemetry#27014

* Update README.md (open-telemetry#28844)

The Prometheus Remote write exporter is missing the details of default
values for the remote write queue config. Added the values after looking
into the code for the same.

* exporter/datadog: disable APM stats via feature flag (open-telemetry#28616)

This change adds the "exporter.datadogexporter.disable_apm_stats"
feature flag, which can be enabled to disable APM stats computation.

Updates open-telemetry#28615

* [receiver/zipkin] follow receiver contract (open-telemetry#28627)

I came across `zipkinreceiver` and observed we don't
follow the receiver
[contract](https://github.com/open-telemetry/opentelemetry-collector/blob/b2961b799e2c1ec128f0539764af1fa10c839e04/receiver/doc.go#L21).
We return `InternalServerError` straight away without checking
permanent/non-permanent errors.

We should probably return BadRequest in case of permanent errors

open-telemetry/opentelemetry-collector#4335

**Testing:** Added test cases

Co-authored-by: Andrzej Stencel <astencel@sumologic.com>

* [chore][exporter/sumologicexporter] use errors.Join instead of go.uber.org/multierr (open-telemetry#28614)

**Description:** use errors.Join instead of go.uber.org/multierr

**Link to tracking Issue:** open-telemetry#25121 

---------

Co-authored-by: Andrzej Stencel <astencel@sumologic.com>

* [receiver/wavefront] wrap metrics receiver under carbon receiver instead of using export function (open-telemetry#27259)

**Description:** 
Wavefrontreceiver is very similar to carbonreceiver: it is TCP based in
which each received text line represents a single metric data point. In
order to avoid using exported function `carbonreceiver.New(...)`, we can
wrap metrics receiver under carbon receiver.

**Link to tracking Issue:** 

open-telemetry#27248

**Testing:** 
make chlog-validate
go test for wavefrontreceiver

**Documentation:**

---------

Co-authored-by: Pablo Baeyens <pbaeyens31+github@gmail.com>

* [processor/k8sattributes] Fix node/ns labels/annotations extraction (open-telemetry#28838)

Set attributes from namespace/node labels or annotations even if
`k8s.namespace.name` and `k8s.node.name` are not extracted.

Fixes
open-telemetry#28837

* [processor/remoteobserver] rename to remotetapprocessor (open-telemetry#27874)

**Description:**
Rename remoteobserverprocessor to remotetapprocessor

**Link to tracking Issue:**
Fixes open-telemetry#27873

* [Spanmetrics] - Add exemplars to Sum metrics (open-telemetry#28671)

**Description:** 
We don't have exemplars added to Sum metrics right now. This PR provides
an enhancement to add exemplars to Sum metrics in Spanmetrics connector


**Testing:** 
Added unit tests and also tested it in our local environment.

* [chore] fix codeowners (open-telemetry#28855)

Regenerate codeowners with `make gengithub`

* feat(alertmanager): Add exporter factory and config (open-telemetry#27836)

**Description:** Factory implementation of Alertmanager Exporter
Initial PR - base configs and factory implementation

**Link to tracking Issue:**
[open-telemetry#23659](open-telemetry#23569)

**Testing:** Unit tests for config and factory implementation

**Documentation:** Readme and Sample Configs to use Alertmanager
exporter

---------

Signed-off-by: Juraci Paixão Kröhling <juraci@kroehling.de>
Co-authored-by: Juraci Paixão Kröhling <juraci@kroehling.de>

* Adds duration sampler distinct from latency in supplying two bounds (open-telemetry#26115)

**Description:** Adds a bounded duration sampling processor, distinct
from the existing latency one in that it has both lower and upper bounds

Apologies for this appearing as a pull request out of nothing, my intent
had actually been to create a review area against my own fork and raise
an issue asking if you'd accept the PR. I think the need here is pretty
obvious from the context, though I think it's easy to imagine preferring
this to be a change to the existing processor. I raised as a new one as
I thought it might make existing behavior cleaner to retain.

**Link to tracking Issue:** As above this is a bit of a premature PR
since I intended to raise as an issue, and thus there isn't one, but I
think it's easy enough to deal with here so leaving open for now and
have learned GitHub's ways for the future (I rarely use github).

**Testing:** New module so associated tests are added showing all
relevant behavior, and passing.

**Documentation:** Updated README and example config

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* [googlemanagedprometheusexporter] Clarify support status of this exporter (open-telemetry#28863)

*   Link to related GCP docs
*   Clarify mention of "traces"
* Drop mention of PromQL support as a difference from `googlecloud`
exporter

* [filelogreceiver]: Add ability to sort by mtime (open-telemetry#28850)

**Description:** <Describe what has changed.>
* Adds a new `mtime` sort type, which will sort files by their modified
time
* Add a feature gate for `mtime` sort type

An optional follow-up performance improvement may be made here, to have
the finder return fs.DirEntry directly to query the mtime without making
an extra call to os.Stat for each file.

**Link to tracking Issue:** open-telemetry#27812

**Testing:**
* Added unit tests for new functionality

**Documentation:** 
* Added new `mode` parameter to filelogreceiver docs

* [pkg/translator] move skywalking_to_traces into pkg/translator (open-telemetry#28814)

**Description:** A part of
open-telemetry#28693
<!--Ex. Fixing a bug - Describe the bug and how this fixes the issue.
Ex. Adding a feature - Explain what this achieves.-->

move`skywalking_to_traces` in `skywalkingreceiver` into
`pkg/translator/skywalking`

**Link to tracking Issue:** <Issue number if applicable>

**Testing:** <Describe what testing was performed and which tests were
added.>

**Documentation:** <Describe the documentation added.>

---------

Signed-off-by: Jared Tan <jian.tan@daocloud.io>

* [chore][pkg/stanza] Adjust length of knownFiles based on number of matches (open-telemetry#28646)

Follows
open-telemetry#28493

This adjusts the length of `knownFiles` to be roughly 4x the number of
matches per poll cycle. In other words, we will remember files for up to
4 poll cycles.

Resolves
open-telemetry#28567

* [chore][exporter/datadog] Add a section about how to switch to Zorkian (open-telemetry#28836)

**Description:** 
Update README about disabling the feature gate of native metric client
and falling back to Zorkian client.

---------

Co-authored-by: Pablo Baeyens <pbaeyens31+github@gmail.com>

* [cmd/telemetrygen] Use exporter per worker for better metrics throughput (open-telemetry#27201)

Adding a feature - Use exporter per worker for better metrics throughput

Initially when adding more workers in the telemetrygen config when
running "metrics" it did not increase the metrics throughput since all
workers used the same exporter.

By creating one exporter per worker we can now increase the number of
metrics being send to the backend.

Fixes open-telemetry#26709

- Units tests pass
- Ran local load tests with different configurations

## Before code change

Generate metrics:

```
telemetrygen metrics \
    --metric-type Sum \
    --duration "60s" \
    --rate "0" \
    --workers "10" \
    --otlp-http=false \
    --otlp-endpoint <HOSTNAME> \
    --otlp-attributes "service.name"=\"telemetrygen\"
```

Output:
```
metrics generated	{"worker": 8, "metrics": 139}
metrics generated	{"worker": 0, "metrics": 139}
metrics generated	{"worker": 9, "metrics": 141}
metrics generated	{"worker": 4, "metrics": 140}
metrics generated	{"worker": 2, "metrics": 140}
metrics generated	{"worker": 3, "metrics": 140}
metrics generated	{"worker": 7, "metrics": 140}
metrics generated	{"worker": 5, "metrics": 140}
metrics generated	{"worker": 1, "metrics": 140}
metrics generated	{"worker": 6, "metrics": 140}
```

## After code change

```
telemetrygen metrics \
    --metric-type Sum \
    --duration "60s" \
    --rate "0" \
    --workers "10" \
    --otlp-http=false \
    --otlp-endpoint <HOSTNAME> \
    --otlp-attributes "service.name"=\"telemetrygen\"
```

Output:

```
metrics generated	{"worker": 6, "metrics": 1292}
metrics generated	{"worker": 3, "metrics": 1277}
metrics generated	{"worker": 5, "metrics": 1272}
metrics generated	{"worker": 8, "metrics": 1251}
metrics generated	{"worker": 9, "metrics": 1241}
metrics generated	{"worker": 4, "metrics": 1227}
metrics generated	{"worker": 0, "metrics": 1212}
metrics generated	{"worker": 2, "metrics": 1201}
metrics generated	{"worker": 1, "metrics": 1333}
metrics generated	{"worker": 7, "metrics": 1363}
```

By adding more workers you can now export more metrics and use
`telemetrygen` better for load testing use cases.

With the code change I can now utilize my CPU better for load tests.
When adding 200 workers to the above config the CPU usage can go above
80%. Before that CPU usage would be around 1% with 200 workers.


![image](https://github.com/open-telemetry/opentelemetry-collector-contrib/assets/558256/66727e5f-6b0a-44a3-8436-7e6985d6a01c)

---------

Co-authored-by: Alex Boten <aboten@lightstep.com>

* [scraper/processscraper] Fix TestScrapeMetrics_MuteErrorFlags failures on windows and darwin (open-telemetry#28864)

**Description:** 

There were some issues related to how `mock.On` works. With default mock
and addition `On` which is already present it appends to a list and
won't be called as one instance of a method is already there. So some
expectations regarding return values were not met

Metrics count for darwin is 3 because disk io is disabled
[here](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/f509060a8d1ab5ca4b5827e0c60d1149e3059908/receiver/hostmetricsreceiver/internal/scraper/processscraper/process_scraper.go#L315)

Tested locally on mac, windows 11 and ubuntu 22

**Link to tracking Issue:** open-telemetry#28828

* [chore][testbed] Do not use export function `carbonreceiver.New` (open-telemetry#28858)

**Description:**
Do not use export function `carbonreceiver.New` and replace with
`factory.CreateMetricsReceiver`, then we can chore carbonreceiver to
make it pass checkapi tool.

**Link to tracking Issue:**

open-telemetry#28857

* [chore] Run make gendependabot (open-telemetry#28868)

To fix failing `build-and-test / checks` CI job

* [chore] update codeowners (open-telemetry#28869)

Run `make gengithub` locally.

* [receiver/sshcheck] Change keyfile -> key_file in e.g. config and docs (open-telemetry#28834)

`keyfile` was the key used in config and documented in sshcheck, but
`key_file` is the preferred key for these purposes.

**Link to tracking Issue:** open-telemetry#27035 

**Testing:** Update tests to ensure this key is used in default.

**Documentation:** Updated documentation to reflect the change in key.

* [chore] [extension/jaegerremotesampling] Avoid port conflict in tests (open-telemetry#28874)

Fixes
open-telemetry#28873

* [exporter/azuremonitor] Add Connection String Support to Azure Monitor Exporter (open-telemetry#28854)

**Description:** <Describe what has changed.>
<!--Ex. Fixing a bug - Describe the bug and how this fixes the issue.
Ex. Adding a feature - Explain what this achieves.-->
This pull request introduces the ability to configure the Azure Monitor
Exporter using a connection string, aligning the exporter configuration
with Azure Monitor's recommended practices. The current implementation
requires users to set the instrumentation key directly, which will soon
be deprecated in favor of using the connection string.

**Changes Made:**

1. Configuration Update: Modified the `Config` struct and related
configuration parsing logic to support a `ConnectionString` field.
2. Parsing Logic: Implemented functionality to parse the connection
string and extract necessary details, such as `InstrumentationKey` and
`IngestionEndpoint`.
3. Updated Tests: Revised existing tests and added new ones to ensure
coverage of the new configuration option.

**Benefits:**

* Streamlines the configuration process for end-users.
* Aligns with Azure Monitor's best practices and recommended
configuration approach.
* Paves the way for the upcoming deprecation of direct instrumentation
key configuration.

**Backwards Compatibility:**
This update maintains full backwards compatibility. Users currently
utilizing the instrumentation key for configuration can continue to do
so but are advised to transition to using the connection string.

**To-Do** 

* Documentation Update in a follow up PR
* Deprecation Notice: A future update will introduce a deprecation
warning for users still configuring the exporter with the
instrumentation key, encouraging them to switch to using a connection
string.
* Add support for `EndpointSuffix` in connection string -
https://learn.microsoft.com/en-us/azure/azure-monitor/app/sdk-connection-string?tabs=dotnet5#connection-string-with-an-endpoint-suffix

**Link to tracking Issue:** <Issue number if applicable> 
open-telemetry#28853

**Testing:** <Describe what testing was performed and which tests were
added.>

Conducted comprehensive testing, including unit tests, to validate that
the new configuration option works as expected and does not introduce
regressions. All tests are currently passing.

```
[Wed Nov  1 12:53:42 PDT 2023] --------- Transmitting 27 items ---------
[Wed Nov  1 12:53:43 PDT 2023] Telemetry transmitted in 331.926261ms
[Wed Nov  1 12:53:43 PDT 2023] Response: 200
[Wed Nov  1 12:53:43 PDT 2023] Items accepted/received: 27/27
[Wed Nov  1 12:53:53 PDT 2023] --------- Transmitting 30 items ---------
[Wed Nov  1 12:53:53 PDT 2023] Telemetry transmitted in 73.171392ms
[Wed Nov  1 12:53:53 PDT 2023] Response: 200
[Wed Nov  1 12:53:53 PDT 2023] Items accepted/received: 30/30
[Wed Nov  1 12:54:04 PDT 2023] --------- Transmitting 27 items ---------
[Wed Nov  1 12:54:04 PDT 2023] Telemetry transmitted in 68.037724ms
[Wed Nov  1 12:54:04 PDT 2023] Response: 200
[Wed Nov  1 12:54:04 PDT 2023] Items accepted/received: 27/27
```

**Documentation:** <Describe the documentation added.>

TODO, in a follow up PR.

* [exporter/awsxray] Add aws sdk http error events to x-ray subsegment and strip prefix `AWS.SDK.` from aws remote service name (open-telemetry#27232)

**Description:** <Describe what has changed.>
<!--Ex. Fixing a bug - Describe the bug and how this fixes the issue.
Ex. Adding a feature - Explain what this achieves.-->

- Convert individual HTTP error events into exceptions within
subsegments for AWS SDK spans.
- Normalize the service name from `awsxray.AWSServiceAttribute`
attribute by removing the `AWS.SDK.` prefix (in some aws sdk
instrumentation, we have added the prefix to produce metrics with the
prefix to clearly indicate the resource). This change ensures that X-Ray
backend recognizes standard service names like "DynamoDb", "S3", etc.,
enabling correct identification of AWS service types.


**Link to tracking Issue:** 
NA

**Testing:** 
Unit tests are added.

**Documentation:**
NA

---------

Co-authored-by: John Knollmeyer <jknollm@amazon.com>
Co-authored-by: John Knollmeyer <jaknollmeyer@gmail.com>

* [receiver/carbon] do not expose method (open-telemetry#28872)

Do not export function `New` and pass checkapi.

open-telemetry#26304

Signed-off-by: sakulali <sakulali@126.com>

* [chore] update testbed to embed jaeger exporter (open-telemetry#28880)

Rather than importing a deprecated module, this embeds the contents of
that module in the testbed. Part of open-telemetry#28647

Signed-off-by: Alex Boten <aboten@lightstep.com>

* Make replication stats return whole number (open-telemetry#28824)

**Description:** 

I failed to reproduce []uint8 to int64 conversion but I was able to
repro float64 to int64 conversion error.
Different types may be due to different versions or values reported. 

The fix is forcing query to retrieve integer values. While this may seem
like most obvious fix I'm not really aligned with it.

What query is returning for is a lag as a decimal number (whole part is
seconds) by forcing this to return just an int we kind of losing
precision. `0.4s` are reported as `0` while it is `400ms`.

My proposal here would consists of 2 options.
First one is change reporting in a way that what we report is in fact
time-span in `ms`. This could most likely be considered breaking.

Second option (I'm more in favor of) is to change the type of what is
reported (from int to float). This way unit is intact and does not break
possible visualizations, but we gain precision and won't lose data.

My first issue here so I wanted to get some feedback first before
publishing something unreasonable.

_EDIT_

Went with the option of deprecating metrics with second precision (still
fixing conversion failures) and introducing alternative to these metrics
with `_ms` suffix in name and millisecond precision.

Old metrics are now behind a featuregate which is enabled by default for
now.

**Link to tracking Issue:** open-telemetry#26714 

**Testing:** 
Setting up replicated postgres instances and testing method against this
deployment.

**Documentation:** -

---------

Co-authored-by: Daniel Jaglowski <jaglows3@gmail.com>

* Retract googlecloud exporter releases that don't have logging (open-telemetry#28884)

**Description:**

Logging was broken after
open-telemetry#25900
(released in v0.84.0). It is fixed by
open-telemetry/opentelemetry-collector#8792,
which will be released in v0.89.0. This will help with any distributions
that include the googlecloud exporter components.

* [chore] move collectdreceiver shared code to an internal package (open-telemetry#28856)

This allows the collectdreceiver to pass checkapi.

* [chore] Increase Cache Go step timeout to 25min on Windows (open-telemetry#28859)

**Description:**
Increase the timeout of the "Cache Go" step in the
`build-and-test-windows` workflow. I had a few failures with that today
and glancing at the errors for the workflow I can see a few others.

Few instances below:
*
https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/6722644168/job/18271035294#step:5:22
*
https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/6725656509/job/18280490403#step:5:23
*
https://github.com/open-telemetry/opentelemetry-collector-contrib/actions/runs/6726302253/job/18282301386#step:5:21

* [exporter/datadog] fix(docs): typo with especially (open-telemetry#28996)

* Bump github.com/google/cadvisor from 0.47.3 to 0.48.1 in /receiver/awscontainerinsightreceiver (open-telemetry#28998)

Second attempt after dependabot's PR open-telemetry#28974. There was a typo fixed in
cadvisor `v0.48.1` that was a breaking change for us. This updates all
references to correct spelling of `housekeeping`.

Fixes open-telemetry#28995

* [receiver/kafkametrics] Using unique container networks and container names and attempt to fix flaky tests (open-telemetry#28903)

**Description:** <Describe what has changed.>
Using unique container networks and container names and attempt to fix
flaky tests

**Link to tracking Issue:**

open-telemetry#26293

**Testing:**
**Preparation:** 
    DIR = receiver/kafkametricsreceiver
CMD = go test -v -count=1 -race -timeout 360s -parallel 4
-tags=integration,"" -run=Integration ./...

**Tests:**
1. If we manually modify the code(as shown below) and use invalid kafka
broker, such as `localhost:invalid-port`, the same error as shown in the
issue may occur.
    ```
    // receiver/kafkametricsreceiver/integration_test.go
    scraperinttest.WithCustomConfig(
func(t *testing.T, cfg component.Config, ci
*scraperinttest.ContainerInfo) {
            rCfg := cfg.(*Config)
            rCfg.CollectionInterval = 5 * time.Second
            rCfg.Brokers = []string{"localhost:invalid-port"}
            rCfg.Scrapers = []string{"brokers", "consumers", "topics"}
        }),
    ```

2. If we execute the test commands **sequentially** , it seems that the
execution results are all correct.
    ```
    # all result are correct
    for i in {1..100}; do echo "Run $i"; ./${CMD} ; done
    ```

3. If we execute the commands in **parallel** end with **`&`**,
sometimes the error shown in the issue may occur.
    ```
    # sometimes result occur error
    for i in {1..20}; do echo "Run $i"; ./${CMD} &; done
    ```

**Inference:**
I have found that duplicate container networks and container names can
cause container creation to fail or result in successfully created
containers with unavailable ports, which may lead to issues similar to
the one shown.

**Additional information:** 
Since Kafka's startup relies on ZooKeeper (which waits for the default
`zookeeper.connection.timeout.ms=18000`), if Kafka starts first and
ZooKeeper fails to start properly after the timeout duration, it will
cause the Kafka container to fail to start correctly. I found the issue
testcontainers/testcontainers-go#1791 wants to
support that.

**Documentation:**

---------

Signed-off-by: sakulali <sakulali@126.com>

* [chore][processor/tailsamplingprocessor] Limit concurrency for certain tests (flaky test on Windows runners) (open-telemetry#29014)

**Description:**
Limit number of goroutines started during
`processor/tailsamplingprocessor` tests. This causes very frequently
failures on the Windows tests, see
[here](open-telemetry#28682 (comment))
for example.

The issue is that the race detector has a hard limit on number of
goroutines, see golang/go#23611. The fix
limits the concurrency in two tests so this limit is not hit on GH
Windows runners.

**Link to tracking Issue:** 
Fix open-telemetry#9126

**Testing:**
Increased the concurrency on the two changed tests caused the error and
validated that it passed twice on my fork.

**Documentation:**
N/A

* Codesmon/exporter/azuremonitor/persistent queue (open-telemetry#26258)

Description:
Added a new config item to support the QueueSettings values.
Extended the exportHelper.New[Metrics|Logs|Traces]Exporter call to pass
in the QueueSettings config, thus enabling persistent_queue for this
exporter.

Link to tracking Issue:
Fixes issue
open-telemetry#25859

Testing:
Extending unit tests to check configuration changes are picked up.

Documentation:
Added sending_queue config items to README.md's configuration section.

* [chore] update affiliation (open-telemetry#29019)

Updated to match core

* [receiver/collectd] move collectdreceiver to beta (open-telemetry#28997)

Promote collectdreceiver as beta component

Fixes open-telemetry#28658

* [chore] dependabot updates Wed Nov  8 16:58:54 UTC 2023 (open-telemetry#29028)

Bump github.com/DataDog/datadog-agent/pkg/proto from 0.49.0-rc.2 to
0.50.0-devel in /exporter/datadogexporter
Bump github.com/IBM/sarama from 1.41.3 to 1.42.0 in
/exporter/kafkaexporter
Bump github.com/IBM/sarama from 1.41.3 to 1.42.0 in
/receiver/kafkareceiver
Bump github.com/IBM/sarama from 1.41.3 to 1.42.1 in
/receiver/kafkametricsreceiver
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/exporter/awscloudwatchlogsexporter
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/exporter/awsemfexporter
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/exporter/awsxrayexporter
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/exporter/datadogexporter
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/extension/observer/ecsobserver
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/internal/aws/awsutil
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/internal/aws/cwlogs
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/internal/aws/k8s
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/internal/aws/proxy
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/internal/aws/xray
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/internal/aws/xray/testdata/sampleapp
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/internal/metadataproviders
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/processor/resourcedetectionprocessor
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/receiver/awsecscontainermetricsreceiver
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.3 in
/receiver/awsxrayreceiver
Bump github.com/aws/aws-sdk-go from 1.46.7 to 1.47.4 in
/receiver/awscontainerinsightreceiver
Bump github.com/aws/aws-sdk-go-v2 from 1.21.2 to 1.22.1 in
/exporter/awskinesisexporter
Bump github.com/aws/aws-sdk-go-v2 from 1.21.2 to 1.22.1 in
/extension/sigv4authextension
Bump github.com/aws/aws-sdk-go-v2/config from 1.19.1 to 1.22.0 in
/exporter/awskinesisexporter
Bump github.com/aws/aws-sdk-go-v2/config from 1.19.1 to 1.22.0 in
/extension/sigv4authextension
Bump github.com/aws/aws-sdk-go-v2/credentials from 1.13.43 to 1.15.1 in
/exporter/awskinesisexporter
Bump github.com/aws/aws-sdk-go-v2/credentials from 1.13.43 to 1.15.1 in
/extension/sigv4authextension
Bump github.com/aws/aws-sdk-go-v2/service/kinesis from 1.20.0 to 1.22.0
in /exporter/awskinesisexporter
Bump github.com/aws/aws-sdk-go-v2/service/sts from 1.23.2 to 1.25.0 in
/exporter/awskinesisexporter
Bump github.com/aws/aws-sdk-go-v2/service/sts from 1.23.2 to 1.25.0 in
/extension/sigv4authextension
Bump github.com/golangci/golangci-lint from 1.55.1 to 1.55.2 in
/internal/tools
Bump github.com/gorilla/mux from 1.8.0 to 1.8.1 in
/receiver/jaegerreceiver
Bump github.com/gorilla/mux from 1.8.0 to 1.8.1 in
/receiver/sapmreceiver
Bump github.com/gorilla/mux from 1.8.0 to 1.8.1 in
/receiver/signalfxreceiver
Bump github.com/gorilla/mux from 1.8.0 to 1.8.1 in
/receiver/skywalkingreceiver
Bump github.com/gorilla/mux from 1.8.0 to 1.8.1 in
/receiver/splunkhecreceiver
Bump github.com/gorilla/mux from 1.8.0 to 1.8.1 in
/testbed/mockdatareceivers/mockawsxrayreceiver
Bump github.com/influxdata/influxdb-client-go/v2 from 2.12.3 to 2.12.4
in /receiver/influxdbreceiver
Bump github.com/mattn/go-sqlite3 from 1.14.17 to 1.14.18 in
/extension/storage
Bump github.com/prometheus/procfs from 0.11.1 to 0.12.0 in
/receiver/hostmetricsreceiver
Bump github.com/shirou/gopsutil/v3 from 3.23.9 to 3.23.10 in
/exporter/signalfxexporter
Bump github.com/shirou/gopsutil/v3 from 3.23.9 to 3.23.10 in
/extension/observer/hostobserver
Bump github.com/shirou/gopsutil/v3 from 3.23.9 to 3.23.10 in
/processor/resourcedetectionprocessor
Bump github.com/shirou/gopsutil/v3 from 3.23.9 to 3.23.10 in
/receiver/awscontainerinsightreceiver
Bump github.com/shirou/gopsutil/v3 from 3.23.9 to 3.23.10 in
/receiver/hostmetricsreceiver
Bump github.com/shirou/gopsutil/v3 from 3.23.9 to 3.23.10 in
/receiver/jmxreceiver
Bump github.com/shirou/gopsutil/v3 from 3.23.9 to 3.23.10 in /testbed
Bump github.com/spf13/cobra from 1.7.0 to 1.8.0 in /cmd/telemetrygen
Bump github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common
from 1.0.777 to 1.0.782 in /exporter/tencentcloudlogserviceexporter
Bump go.mongodb.org/atlas from 0.34.0 to 0.35.0 in
/receiver/mongodbatlasreceiver
Bump golang.org/x/mod from 0.13.0 to 0.14.0 in /cmd/configschema
Bump golang.org/x/sys from 0.13.0 to 0.14.0 in
/exporter/signalfxexporter
Bump golang.org/x/sys from 0.13.0 to 0.14.0 in /pkg/stanza
Bump golang.org/x/sys from 0.13.0 to 0.14.0 in /pkg/winperfcounters
Bump golang.org/x/sys from 0.13.0 to 0.14.0 in
/receiver/hostmetricsreceiver
Bump golang.org/x/sys from 0.13.0 to 0.14.0 in
/receiver/windowseventlogreceiver
Bump golang.org/x/text from 0.13.0 to 0.14.0 in /cmd/configschema
Bump golang.org/x/text from 0.13.0 to 0.14.0 in /cmd/mdatagen
Bump golang.org/x/text from 0.13.0 to 0.14.0 in /internal/coreinternal
Bump golang.org/x/text from 0.13.0 to 0.14.0 in /pkg/stanza
Bump golang.org/x/text from 0.13.0 to 0.14.0 in /testbed
Bump golang.org/x/time from 0.3.0 to 0.4.0 in /cmd/telemetrygen
Bump google.golang.org/api from 0.148.0 to 0.149.0 in
/exporter/f5cloudexporter
Bump google.golang.org/api from 0.148.0 to 0.149.0 in
/exporter/googlecloudpubsubexporter
Bump google.golang.org/api from 0.148.0 to 0.149.0 in
/receiver/googlecloudpubsubreceiver
Bump google.golang.org/api from 0.148.0 to 0.149.0 in
/receiver/googlecloudspannerreceiver

* [chore] dependabot updates Wed Nov  8 18:29:02 UTC 2023 (open-telemetry#29052)

Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/exporter/awscloudwatchlogsexporter
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/exporter/awsemfexporter
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/exporter/awsxrayexporter
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/exporter/datadogexporter
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/extension/observer/ecsobserver
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/internal/aws/awsutil
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/internal/aws/cwlogs
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/internal/aws/k8s
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/internal/aws/proxy
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/internal/aws/xray
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/internal/aws/xray/testdata/sampleapp
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/internal/metadataproviders
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/processor/resourcedetectionprocessor
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/receiver/awscontainerinsightreceiver
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/receiver/awsecscontainermetricsreceiver
Bump github.com/aws/aws-sdk-go from 1.47.4 to 1.47.5 in
/receiver/awsxrayreceiver
Bump github.com/aws/aws-sdk-go-v2/config from 1.22.0 to 1.22.2 in
/exporter/awskinesisexporter
Bump github.com/aws/aws-sdk-go-v2/config from 1.22.0 to 1.22.2 in
/extension/sigv4authextension
Bump github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common
from 1.0.782 to 1.0.786 in /exporter/tencentcloudlogserviceexporter
Bump google.golang.org/api from 0.149.0 to 0.150.0 in
/exporter/f5cloudexporter
Bump google.golang.org/api from 0.149.0 to 0.150.0 in
/exporter/googlecloudpubsubexporter
Bump google.golang.org/api from 0.149.0 to 0.150.0 in
/receiver/googlecloudpubsubreceiver
Bump google.golang.org/api from 0.149.0 to 0.150.0 in
/receiver/googlecloudspannerreceiver

* [chore] dependabot updates Wed Nov  8 21:01:03 UTC 2023 (open-telemetry#29071)

Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/exporter/awscloudwatchlogsexporter
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/exporter/awsemfexporter
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/exporter/awsxrayexporter
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/exporter/datadogexporter
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/extension/observer/ecsobserver
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/internal/aws/awsutil
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/internal/aws/cwlogs
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/internal/aws/k8s
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/internal/aws/proxy
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/internal/aws/xray
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/internal/aws/xray/testdata/sampleapp
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/internal/metadataproviders
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/processor/resourcedetectionprocessor
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/receiver/awscontainerinsightreceiver
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/receiver/awsecscontainermetricsreceiver
Bump github.com/aws/aws-sdk-go from 1.47.5 to 1.47.6 in
/receiver/awsxrayreceiver

* [exporter/influxdb] Remove //nolint indent-error-flow (open-telemetry#29073)

I fixed linter issue by following this document.
https://google.github.io/styleguide/go/decisions.html#indent-error-flow

* hostmetricsreceiver: remove unused function (open-telemetry#29075)

**Description:**
`gopsutil` recently added the capability to pass environment vars
through context. This is now done everywhere. This environment variable
setting function is no longer used or necessary. This PR removes it.

**Link to tracking Issue:** open-telemetry#23055

Signed-off-by: Braydon Kains <braydonk@google.com>

* [chore] bump go versions in workflows to 1.20.11 and 1.21.4 (open-telemetry#29080)

This fixes security vulnerabilities found via govulncheck in the
standard library when running against the previous patch versions of
golang. While these vulnerabilities don't actually present themselves in
the binary, the workflows when running govuln check fail and thus taking
in the latest patches fix the issue.


Testing gets caught in workflow run. Noticed the issue originally when
running workflows on this pr:
open-telemetry#28885

* [all][chore] Moved from interface{} to any for all go code (open-telemetry#29072)

Additionally, added a golangci-lint.yaml update to automatically enforce
this change for new code going forward.
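
For reference, since Go 1.18 `any` is a built-in alias for `interface{}` (declared as `type any = interface{}`), so the change is purely cosmetic. A minimal illustration:

```go
package main

import "fmt"

// Both signatures are identical to the compiler; `any` simply reads better.
func describeOld(v interface{}) string { return fmt.Sprintf("%T", v) }
func describeNew(v any) string         { return fmt.Sprintf("%T", v) }

func main() {
	fmt.Println(describeOld(42), describeNew("hello")) // int string
}
```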

Fixes open-telemetry#23811

---------

Co-authored-by: Alex Boten <aboten@lightstep.com>

* [receiver/dockerstats] rename struct and function to keep expected receiver.Factory and pass checkapi (open-telemetry#27086)

Rename struct and function to keep expected receiver.Factory and pass
checkapi

open-telemetry#26304

go run cmd/checkapi/main.go .
go test for dockerstatsreceiver

Signed-off-by: sakulali <sakulali@126.com>

* [receiver/mongodbatlasreceiver] add provider resource attributes (open-telemetry#28835)

**Description:**
This feature adds provider resource attributes
`mongodb_atlas.provider.name` and `mongodb_atlas.region.name` to add
additional context and filtering capabilities.

**Link to tracking Issue:**
open-telemetry#28833

**Testing:**
Tests were automatically updated. Live testing was performed and
validated on clusters.
**Documentation:**
Docs were automatically updated.
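
For illustration, this is roughly how resource attributes are set on pdata; the values below are made up, and the receiver actually derives them from the MongoDB Atlas API rather than hard-coding them:

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/collector/pdata/pmetric"
)

func main() {
	md := pmetric.NewMetrics()
	rm := md.ResourceMetrics().AppendEmpty()
	attrs := rm.Resource().Attributes()
	// Hypothetical example values for the new resource attributes.
	attrs.PutStr("mongodb_atlas.provider.name", "AWS")
	attrs.PutStr("mongodb_atlas.region.name", "US_EAST_1")
	fmt.Println(attrs.AsRaw())
}
```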

* [exporter/syslog] Enable component (open-telemetry#28902)

**Description:** Promote syslogexporter to alpha and add it to
otelcontribcol

**Link to tracking Issue:**  related to: open-telemetry#21242, open-telemetry#21244, open-telemetry#21245

**Testing:**
Manual tests:
Configuration:
```yaml
exporters:
  syslog:
    network: tcp
    port: 514
    endpoint: 127.0.0.1
    protocol: rfc5424

receivers:
  filelog:
    start_at: beginning
    include:
    - /Users/kkujawa/git/opentelemetry-collector-contrib/test.txt
    operators:
      - type: syslog_parser
        protocol: rfc5424

service:
  pipelines:
    logs:
      receivers:
        - filelog
      exporters:
        - syslog
```

Logs:
```
 ./bin/otelcontribcol_darwin_amd64 --config /Users/kkujawa/git/opentelemetry-collector-contrib/bin/config.yaml 
2023-11-06T12:59:31.656+0100    info    service@v0.88.1-0.20231026220224-6405e152a2d9/telemetry.go:84   Setting up own telemetry...
2023-11-06T12:59:31.656+0100    info    service@v0.88.1-0.20231026220224-6405e152a2d9/telemetry.go:201  Serving Prometheus metrics      {"address": ":8888", "level": "Basic"}
2023-11-06T12:59:31.656+0100    info    exporter@v0.88.1-0.20231026220224-6405e152a2d9/exporter.go:275  Development component. May change in the future.        {"kind": "exporter", "data_type": "logs", "name": "syslog"}
2023-11-06T12:59:31.656+0100    info    syslogexporter@v0.88.0/exporter.go:42   Syslog Exporter configured      {"kind": "exporter", "data_type": "logs", "name": "syslog", "endpoint": "127.0.0.1", "Protocol": "rfc5424", "port": 514}
2023-11-06T12:59:31.657+0100    info    service@v0.88.1-0.20231026220224-6405e152a2d9/service.go:143    Starting otelcontribcol...      {"Version": "0.88.0-dev", "NumCPU": 16}
2023-11-06T12:59:31.657+0100    info    extensions/extensions.go:33     Starting extensions...
2023-11-06T12:59:31.657+0100    info    adapter/receiver.go:45  Starting stanza receiver        {"kind": "receiver", "name": "filelog", "data_type": "logs"}
2023-11-06T12:59:31.657+0100    info    service@v0.88.1-0.20231026220224-6405e152a2d9/service.go:169    Everything is ready. Begin running and processing data.
2023-11-06T12:59:31.858+0100    info    fileconsumer/file.go:263        Started watching file   {"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/Users/kkujawa/git/opentelemetry-collector-contrib/test.txt"}
```

* [chore][pkg/stanza]: when found duplicate, continue from outer loop (open-telemetry#28889)

**Description:**
Fix a bug where duplicate readers are added to the active list even after
the underlying file is closed. To fix this, continue from the outer
loop (see the sketch below).
This doesn't result in any duplicate data, but it keeps producing the
following annoying error every time:
```2023-11-05T02:34:03.530+0530       ERROR       Failed to seek  {"component": "fileconsumer", "path": "/var/folders/fs/njj5c3xx7vdcsr28n19vykw00000gn/T/TestStalePartialFingerprintDiscarded2443925830/001/1616317274.log2", "error": "seek /var/folders/fs/njj5c3xx7vdcsr28n19vykw00000gn/T/TestStalePartialFingerprintDiscarded2443925830/001/1616317274.log2: file already closed"}```

**Testing:** Update the test to check the previousPollFiles
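
A minimal, self-contained sketch of the labeled-continue pattern described above; the types and names are illustrative, not the actual stanza fileconsumer internals:

```go
package main

import "fmt"

type reader struct{ fingerprint string }

// dedupe skips any candidate whose fingerprint matches an already-tracked
// reader by continuing the outer loop instead of the inner comparison loop.
func dedupe(candidates, previous []reader) []reader {
	var active []reader
OUTER:
	for _, c := range candidates {
		for _, p := range previous {
			if p.fingerprint == c.fingerprint {
				continue OUTER // duplicate: do not add it to the active list
			}
		}
		active = append(active, c)
	}
	return active
}

func main() {
	prev := []reader{{"a"}, {"b"}}
	cand := []reader{{"b"}, {"c"}}
	fmt.Println(dedupe(cand, prev)) // [{c}]
}
```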

* udp-receiver async - fix data corruption (with buffer pools) (open-telemetry#28898)

**Description:** Fixes a bug in udp async mode only (the default
non-async mode was not affected).
The udp receiver reuses the same buffer for each packet it processes.
While that works fine without the async config, with async it causes
a significant amount of duplicate and corrupted packets to be sent
downstream.
The reader-async thread reads a packet from the udp port into the
buffer, places that buffer in the channel, then reads another packet into
the same buffer and pushes it to the channel.
Say the processor-async thread was a bit slow, so it only tries to read
from the channel after the 2 items were placed in the channel. In that
case, the processor thread reads 2 items from the channel, but both are
the same 2nd packet (since the 1st one was overwritten). In some cases,
the processor can even read a corrupted buffer (since the reader is
still writing into it).
We can't fix this by having the reader allocate a new buffer each time
it reads a packet from the udp port, since that hurts performance
significantly (reducing it by up to ~50%). Instead, use a pool so the
buffers are reused.
Before reading a packet, the reader gets a buffer from the pool. The
processor returns it to the pool after the packet has been successfully
processed (see the sketch below).

**Link to tracking Issue:** 27613

**Testing:** Ran existing unit tests.
Ran stress tests (sending 250k udp packets per second); the
duplicate/corruption issue didn't happen and performance wasn't hurt.

**Documentation:** None
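
A minimal, self-contained sketch of the buffer-pool handoff described above, using `sync.Pool`; buffer sizes and names are illustrative and this is not the receiver's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// packet carries a pooled buffer plus the number of valid bytes in it.
type packet struct {
	buf []byte
	n   int
}

func main() {
	pool := sync.Pool{
		New: func() any { return make([]byte, 64*1024) }, // illustrative buffer size
	}
	packets := make(chan packet, 100)
	var wg sync.WaitGroup

	// Processor: consume packets, then hand the buffer back to the pool.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for p := range packets {
			fmt.Printf("processed %q\n", p.buf[:p.n])
			pool.Put(p.buf) // reuse only after processing is finished
		}
	}()

	// Reader: take a fresh buffer from the pool for every datagram so the
	// processor never reads a buffer that is still being written to.
	for _, msg := range []string{"one", "two", "three"} {
		buf := pool.Get().([]byte)
		n := copy(buf, msg) // stands in for conn.ReadFrom(buf)
		packets <- packet{buf: buf, n: n}
	}
	close(packets)
	wg.Wait()
}
```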

* [chore][receiver/windowseventlog] remove duplicate function NewFactory and pass checkapi (open-telemetry#29020)

**Description:**
Remove duplicate function NewFactory and pass checkapi.

**Link to tracking Issue:**

open-telemetry#26304

**Testing:**
go run cmd/checkapi/main.go .
go test for windowseventlogreceiver

**Documentation:**

Signed-off-by: sakulali <sakulali@126.com>

* [chore][pkg/stanza][exporter/signalfx] One more interface{} -> any and skip flaky tests (open-telemetry#29101)

See
https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/28898/files#r1389614720.
Looks like a merge conflict.

* [chore][CONTRIBUTING.md] Add triage process link (open-telemetry#29092)

The `needs triage` label is directly related to how we define triaging.
Added a link to the triaging definition to make the label's usage more
clear. (Even though the triaging process paragraph is just above this
table in the document, it's easy to miss).

* fix(processor/k8sattributes): README was misleading/had typos (open-telemetry#29108)

**Description:**
Fixes misleading documentation about which RBAC role is required and
other invalid YAML found along the way.

* [processor/k8sattributes] fix(docs): typo for kubernetes label (open-telemetry#29110)

**Description:** typo for kubernetes label in k8sattributesprocessor

**Link to tracking Issue:** n/a

**Testing:** n/a docs

**Documentation:** n/a

* Update doc.go of filelogreceiver (open-telemetry#29100)

* [connector/datadog] Set MutatesData to true (open-telemetry#29114)

**Description:** 
Mark datadogconnector as `MutatesData` to prevent data race

**Link to tracking Issue:**
Fixes
open-telemetry#29111
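
For context, `MutatesData` is the collector's general mechanism for this: a component that modifies pdata in place declares it so upstream fan-out hands it its own copy instead of a shared one. A minimal illustration of the flag (not the datadogconnector's actual code):

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/collector/consumer"
)

func main() {
	// A consumer returning this from Capabilities() tells the pipeline it
	// modifies the data it receives, so the fan-out clones the data rather
	// than sharing one instance, which avoids data races.
	caps := consumer.Capabilities{MutatesData: true}
	fmt.Println(caps.MutatesData)
}
```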

* cmd/telemetrygen: add HTTP export for logs (open-telemetry#29078)

**Description:**

Closes
open-telemetry#18867

**Testing:**

Ran opentelemetry-collector locally with debug exporter, then used
telemetrygen with `--otlp-http` with and without `--otlp-insecure`.

**Documentation:** None

---------

Signed-off-by: ChrsMark <chrismarkou92@gmail.com>
Signed-off-by: Juraci Paixão Kröhling <juraci@kroehling.de>
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Jared Tan <jian.tan@daocloud.io>
Signed-off-by: sakulali <sakulali@126.com>
Signed-off-by: Alex Boten <aboten@lightstep.com>
Signed-off-by: Braydon Kains <braydonk@google.com>
Co-authored-by: Antoine Toulme <antoine@lunar-ocean.com>
Co-authored-by: Dmitrii Anoshin <anoshindx@gmail.com>
Co-authored-by: Yang Song <songy23@users.noreply.github.com>
Co-authored-by: Pablo Baeyens <pbaeyens31+github@gmail.com>
Co-authored-by: Paulo Janotti <pjanotti@splunk.com>
Co-authored-by: Chris Mark <chrismarkou92@gmail.com>
Co-authored-by: VihasMakwana <121151420+VihasMakwana@users.noreply.github.com>
Co-authored-by: Marc Tudurí <marctc@protonmail.com>
Co-authored-by: Eason Lau <liubey1214@gmail.com>
Co-authored-by: Abhishek <abhishek@abhishekkothari.in>
Co-authored-by: Gabriel Aszalos <gabriel.aszalos@gmail.com>
Co-authored-by: Andrzej Stencel <astencel@sumologic.com>
Co-authored-by: Joonsoo Park <joonsoo181005@gmail.com>
Co-authored-by: sakulali <sakulali@126.com>
Co-authored-by: aishyandapalli <ayandapalli@ebay.com>
Co-authored-by: mcube8 <V.Madhumita.Malvika@morganstanley.com>
Co-authored-by: Juraci Paixão Kröhling <juraci@kroehling.de>
Co-authored-by: Garry Cairns <2401853+garry-cairns@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Punya Biswal <punya@google.com>
Co-authored-by: Brandon Johnson <brandon.johnson@bluemedora.com>
Co-authored-by: Jared Tan <jian.tan@daocloud.io>
Co-authored-by: Daniel Jaglowski <jaglows3@gmail.com>
Co-authored-by: Marcel Birkner <marcel.birkner@dash0.com>
Co-authored-by: Alex Boten <aboten@lightstep.com>
Co-authored-by: Michal Pristas <michal.pristas@gmail.com>
Co-authored-by: Nathan Slaughter <28688390+nslaughter@users.noreply.github.com>
Co-authored-by: Rajkumar Rangaraj <rajrang@microsoft.com>
Co-authored-by: Ping Xiang <64551395+pxaws@users.noreply.github.com>
Co-authored-by: John Knollmeyer <jknollm@amazon.com>
Co-authored-by: John Knollmeyer <jaknollmeyer@gmail.com>
Co-authored-by: David Ashpole <dashpole@google.com>
Co-authored-by: Karming <41309630+karmingc@users.noreply.github.com>
Co-authored-by: Curtis Robert <crobert@splunk.com>
Co-authored-by: Colin Desmond <colin.desmond@microsoft.com>
Co-authored-by: OpenTelemetry Bot <107717825+opentelemetrybot@users.noreply.github.com>
Co-authored-by: Yuki Nakamura <yuki.nakamura@mapbox.com>
Co-authored-by: Braydon Kains <93549768+braydonk@users.noreply.github.com>
Co-authored-by: Adriel Perkins <adriel@adrielperkins.com>
Co-authored-by: lucasoskorep <lucas.oskorep@gmail.com>
Co-authored-by: Jon <jonathan.wamsley@bluemedora.com>
Co-authored-by: Katarzyna Kujawa <73836361+kkujawa-sumo@users.noreply.github.com>
Co-authored-by: hovavza <147598197+hovavza@users.noreply.github.com>
Co-authored-by: Liz Fong-Jones <elizabeth@ctyalcove.org>
Co-authored-by: Yoshi Yamaguchi <yoshifumi@google.com>
Co-authored-by: Andrew Wilkins <axw@elastic.co>
RoryCrispin pushed a commit to ClickHouse/opentelemetry-collector-contrib that referenced this issue Nov 24, 2023
…en-telemetry#28688)

**Description:** We should have explicit interfaces for the encoding
extensions, which should be used by the receivers/exporters instead of
marshallers and unmarshallers

**Link to tracking Issue:**
open-telemetry#28686
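
One plausible shape for such explicit interfaces is to compose the collector's `extension.Extension` with the pdata marshaler/unmarshaler interfaces; the sketch below is illustrative and not necessarily the exact API merged in the linked PR:

```go
package encodingsketch

import (
	"go.opentelemetry.io/collector/extension"
	"go.opentelemetry.io/collector/pdata/plog"
	"go.opentelemetry.io/collector/pdata/ptrace"
)

// TracesUnmarshalerExtension is what a receiver could look up by component ID
// and use to decode incoming bytes, instead of owning its own unmarshaler.
type TracesUnmarshalerExtension interface {
	extension.Extension
	ptrace.Unmarshaler
}

// LogsMarshalerExtension is the exporter-side counterpart for encoding
// outgoing data.
type LogsMarshalerExtension interface {
	extension.Extension
	plog.Marshaler
}
```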
RoryCrispin pushed a commit to ClickHouse/opentelemetry-collector-contrib that referenced this issue Nov 24, 2023
…8689)

**Description:** Fix a bug where err is nil when an invalid version value
is supplied.

**Link to tracking Issue:** open-telemetry#28686
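
For illustration only, the general shape of the fix is to make validation return a non-nil error for unsupported values; this hypothetical sketch is not the extension's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// validateVersion returns a non-nil error for any version that is not in the
// supported list, instead of silently returning nil.
func validateVersion(v string, supported []string) error {
	for _, s := range supported {
		if strings.EqualFold(v, s) {
			return nil
		}
	}
	return errors.New("unsupported version: " + v)
}

func main() {
	fmt.Println(validateVersion("v3", []string{"v1", "v2"})) // unsupported version: v3
}
```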

---------

Co-authored-by: Dmitrii Anoshin <anoshindx@gmail.com>
github-actions bot commented Jan 1, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

@github-actions github-actions bot added the Stale label Jan 1, 2024
dmitryax pushed a commit that referenced this issue Jan 8, 2024
#28683)

**Description:**
This PR does the following:
- Add test cases for all the known encoding extensions
- Add extensions to the builder and update components.go
- Run `make crosslink`

I believe we can use this extension for `pulsarreceiver` for now and see
how it performs in production. I will raise a follow-up PR after this
one.

**Link to tracking Issue:**
#28686

---------

Co-authored-by: Antoine Toulme <antoine@lunar-ocean.com>
dmitryax pushed a commit that referenced this issue Jan 8, 2024
**Description:** Add support for JSON protocol for Jaeger codec. 

**Link to tracking Issue:**
[#6272](#28686)
cparkins pushed a commit to AmadeusITGroup/opentelemetry-collector-contrib that referenced this issue Jan 10, 2024
open-telemetry#28683)
cparkins pushed a commit to AmadeusITGroup/opentelemetry-collector-contrib that referenced this issue Jan 10, 2024
github-actions bot commented Mar 1, 2024

This issue has been closed as inactive because it has been stale for 120 days with no activity.

@github-actions github-actions bot closed this as not planned Mar 1, 2024