
Fluentbit OpenTelemetry Output Pipeline/Plugin doesn't format fields properly #8359

Closed
cb645j opened this issue Jan 8, 2024 · 37 comments

@cb645j
Contributor

cb645j commented Jan 8, 2024

Issue:
When using the OpenTelemetry output pipeline/plugin to send logs to an OpenTelemetry endpoint, the output JSON payload fields are not formatted correctly. They should be formatted according to the OpenTelemetry specification. As a result, OpenTelemetry is unable to correctly process the request from Fluent Bit.

Fluent Bit logs showing logs being sent to the OpenTelemetry endpoint:

[36] ebiz: [[1704434606.150000000, {}], {"message"=>"This is a test log message for abc application", "loglevel"=>"INFO", "service"=>"helloworld", "clientIp"=>"111.222.888", "timestamp"=>"2024-01-08 18:37:08.150", "testtag"=>"fluentbit", "trace_id"=>"7ada6c95a1bd243fa9013cab515173a9", "span_id"=>"9c1544cc4f7ff369"}]
[2024/01/08 18:37:10] [debug] [upstream] proxy returned 200
[2024/01/08 18:37:10] [debug] [http_client] flb_http_client_proxy_connect connection #32 connected to myproxy.com:8080.
[2024/01/08 18:37:10] [debug] [upstream] proxy returned 200
[2024/01/08 18:37:10] [debug] [http_client] flb_http_client_proxy_connect connection #31 connected to myproxy.com:8080.
[2024/01/08 18:37:10] [debug] [upstream] KA connection #32 to myproxy.com:8080 is connected
[2024/01/08 18:37:10] [debug] [http_client] not using http_proxy for header
[2024/01/08 18:37:10] [debug] [upstream] KA connection #31 to myproxy.com:8080 is connected
[2024/01/08 18:37:10] [debug] [http_client] not using http_proxy for header
[2024/01/08 18:37:10] [ info] [output:opentelemetry:opentelemetry.1] ingest.privateotel.com:443, HTTP status=200

My Fluent Bit configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentbit
  namespace: otel
data:
  custom_parsers.conf: |
        
    [MULTILINE_PARSER]
        Name          multiline-rules
        Type          regex
        Flush_timeout 2000
        rule      "start_state"   "/(\d{4}-\d{1,2}-\d{1,2})(.*)/"  "cont"
        rule      "cont"          "/^\D(.*)/"                     "cont"

    [PARSER]
        Name named-captures
        Format regex
        Regex /(?<timestamp>[^ ]* .*):(?<loglevel>DEBUG|ERROR|INFO)([\s\s]*)-\|(?<id>[\w\-]*)\|(?<clientIp>[0-9\.]*)\|(?<trace_id>[0-9A-Za-z]*)\|(?<span_id>[0-9A-Za-z]*)\|(?<message>.*)/m
        Time_key timestamp
        Time_Format %Y-%m-%d %H:%M:%S.%L
        Time_Offset -0600
        Time_Keep On

  fluent-bit.conf: |
    [SERVICE]
        Daemon Off
        Flush 1
        Log_Level debug
        Parsers_File /fluent-bit/etc/parsers.conf
        Parsers_File /fluent-bit/etc/custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port 2020
        Health_Check On

    [INPUT]
        Name tail
        Log_Level error
        multiline.parser multiline-rules
        Path /app/logs/*.log
        Tag logs

    [FILTER]
        Name             parser
        Match            *
        key_name         log
        parser           named-captures

    [FILTER]
        Name             modify
        Match            *
        Add service ${SERVICE_NAME}
        
    [FILTER]
        Name             modify
        Match            *
        Add testtag fluentbit

    [OUTPUT]
        Name stdout
        Log_Level trace
        Match *

    [OUTPUT]
        Name opentelemetry
        Match *
        Log_Level trace
        Host ingest.privateotel.com
        Port 443
        Header token ***************
        Log_response_payload True
        Tls                  On
        Tls.verify           Off
        add_label            app local


My OpenTelemetry endpoint receives the request formatted as follows:


 {
    "body": {
      "clientIp": "111.222.888",
      "loglevel": "INFO",
      "message": "This is a test log message for abc application",
      "service": "helloworld",
      "span_id": "9c1544cc4f7ff369",
      "testtag": "fluentbit",
      "timestamp": "2024-01-08 18:37:08.150",
      "trace_id": "7ada6c95a1bd243fa9013cab515173a9"
    },
    "instrumentation.name": "",
    "instrumentation.version": "",
    "observed_timestamp": 0,
    "severity_text": "",
    "span_id": "",
    "trace_id": ""
  }

As you can see above, every named pair gets nested under the body. The body should simply contain the log message, and all the other fields I choose to send should be nested under "fields". The proper format would look something like what I have below.

Expected behavior:

 {
    "body": {
      "message": "This is a test log message for abc application",
    },
    "clientIp": "111.222.888",
    "loglevel": "INFO",
    "service": "helloworld",
    "instrumentation.name": "",
    "instrumentation.version": "",
    "observed_timestamp": 0,
    "testtag": "fluentbit",
    "severity_text": "",
    "span_id": "9c1544cc4f7ff369",
    "timestamp": "2024-01-08 18:37:08.150",
    "trace_id": "7ada6c95a1bd243fa9013cab515173a9"
  }

To Reproduce
Use a configuration similar to mine. Generate a log line and extract some fields from it, including trace_id, span_id, timestamp, body, etc. Then use the opentelemetry output and point it at an OpenTelemetry endpoint.

Your Environment
I am running in a Kubernetes cluster using a DaemonSet and the ConfigMap above.
The version of Fluent Bit I am using is 2.1.10.
The image is fluent/fluent-bit:2.1.10-debug

Additional context
OpenTelemetry is unable to correctly process fields such as trace_id and span_id since they do not follow the proper schema.
For more information on the OpenTelemetry logging data model and schema, please see below:

Otel logs data model docs

Otel logs data model github

Additional resource
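
For reference, the OTLP/HTTP JSON encoding defined by the data model nests a log record's fields roughly like this. This is an abbreviated sketch based on the spec, reusing values from the record above; it is not actual Fluent Bit output:

{
  "resourceLogs": [{
    "resource": {
      "attributes": [
        {"key": "service.name", "value": {"stringValue": "helloworld"}}
      ]
    },
    "scopeLogs": [{
      "scope": {},
      "logRecords": [{
        "timeUnixNano": "1704434606150000000",
        "severityText": "INFO",
        "traceId": "7ada6c95a1bd243fa9013cab515173a9",
        "spanId": "9c1544cc4f7ff369",
        "body": {"stringValue": "This is a test log message for abc application"},
        "attributes": [
          {"key": "clientIp", "value": {"stringValue": "111.222.888"}}
        ]
      }]
    }]
  }]
}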

Please fix the issue, or if there is something I am doing incorrectly in my Fluent Bit config, please advise. Thank you.

@sudomateo
Contributor

Thank you for opening this issue! I spent the last hour or so using different filter plugins (e.g., modify, record_modifier) to no avail. I then stumbled upon this issue and was glad to know it's not just me experiencing this. Subscribing for updates!

@jarmd

jarmd commented Jan 9, 2024

We were experiencing the same behavior forwarding logs and metrics from Fluent Bit to VictoriaMetrics and VictoriaLogs, and VM/VL would not accept the data and discarded it!

@edsiper
Member

edsiper commented Jan 22, 2024

We are taking a look at this; thanks for raising the issue.

@edsiper
Member

edsiper commented Jan 22, 2024

I am triaging this, and I have a question about the position of the extra fields:

(copy pasting the expected behavior you posted above)

 {
    "body": {
      "message": "This is a test log message for abc application",
    },
    "clientIp": "111.222.888",
    "loglevel": "INFO",
    "service": "helloworld",
    "instrumentation.name": "",
    "instrumentation.version": "",
    "observed_timestamp": 0,
    "testtag": "fluentbit",
    "severity_text": "",
    "span_id": "9c1544cc4f7ff369",
    "timestamp": "2024-01-08 18:37:08.150",
    "trace_id": "7ada6c95a1bd243fa9013cab515173a9"
  }

Isn't it expected that the content be inside body or under attributes? The example shows the extra keys at the same level as body.

@edsiper
Member

edsiper commented Jan 23, 2024

@cb645j @sudomateo @jarmd

@sudomateo
Contributor

Isn't it expected that the content be inside body or under attributes? The example shows the extra keys at the same level as body.

Thank you for looking into this @edsiper! According to the example log records in the Log Data Model, the log message itself should go under body, and the attributes, with the exception of trace context fields and severity fields, should go under attributes.

The example given in the linked page is:

{
  "Timestamp": 1586960586000, // JSON needs to make a decision about
                              // how to represent nanoseconds.
  "Attributes": {
    "http.status_code": 500,
    "http.url": "http://example.com",
    "my.custom.application.tag": "hello",
  },
  "Resource": {
    "service.name": "donut_shop",
    "service.version": "semver:2.0.0",
    "k8s.pod.uid": "1138528c-c36e-11e9-a1a7-42010a800198",
  },
  "TraceId": "f4dbb3edd765f620", // this is a byte sequence
                                 // (hex-encoded in JSON)
  "SpanId": "43222c2d51a7abe3",
  "SeverityText": "INFO",
  "SeverityNumber": 9,
  "Body": "20200415T072306-0700 INFO I like donuts"
}

@edsiper
Member

edsiper commented Jan 23, 2024

thanks @sudomateo

So my understanding, based on the examples provided above, is that we are compatible, but some implementations don't allow structured content inside the body. The log model also includes this example:

{
  "Timestamp": 1586960586000,
  ...
  "Body": {
    "i": "am",
    "an": "event",
    "of": {
      "some": "complexity"
    }
  }
}

@sudomateo
Contributor

thanks @sudomateo

So my understanding, based on the examples provided above, is that we are compatible, but some implementations don't allow structured content inside the body. The log model also includes this example:

{
  "Timestamp": 1586960586000,
  ...
  "Body": {
    "i": "am",
    "an": "event",
    "of": {
      "some": "complexity"
    }
  }
}

This is my understanding as well. There was some discussion about this in the past at open-telemetry/opentelemetry-specification#1613. The consensus there seems to favor the use of attributes for things that "decorate" the log. However, a structured log message could be emitted via body.

From my perspective, there seem to be two aspects to this.

  1. The OpenTelemetry implementation that's responsible for ingesting a log record should support the case where body contains non-scalar types like arrays and objects.
  2. Fluent Bit's opentelemetry output plugin should provide a mechanism to populate fields other than body in the log data model (e.g., attributes, resource, etc.).

I believe the original ask of this issue is to support the 2nd point.

@cb645j
Contributor Author

cb645j commented Jan 23, 2024

@edsiper @sudomateo

So my understanding is similar to @sudomateo's. There are predefined fields (such as span_id, trace_id, SeverityText, etc.) that should be at the SAME level as body (not within body). Body should just be the body of the log record, aka the message. Body is of type "any", so it can be a simple string or a map, but in either case it should just contain the log message.

Attributes is a map for adding "custom" fields that are not defined in the data model. Resource is also a map and would contain info about the source of the log; however, the Fluent Bit OTel output plugin does not yet seem to support adding the "resource" field.

The best document I've found on the log data model is here: logs-data-model. All frontend applications I have sent logs to (using Fluent Bit) expect the logs to be structured as defined on that page. The example @sudomateo gave above is valid. The only thing I would add to his example is that the body can be either of the two forms below; both are technically valid according to OTel.

  "Body": "I like donuts"
    "Body": {
      "message": "I like donuts",
    }

I would also like to note that the Fluent Bit OTel output plugin seems to support the ability to add attributes; however, whenever I tried to define some, they did not appear to get added or sent. For example, if I added the following configuration:

[OUTPUT]
    Name opentelemetry
    Match *
    Host ingest.privateotel.com
    Port 443
    add_label            status_code 500

Then I would expect my payload to contain an attributes section containing "status_code": "500"; however, this did not happen. But maybe the add_label field is not intended for this? If not, then I'm not sure what add_label is for, and I would suggest adding the ability to add attributes.

@sudomateo
Contributor

I would also like to note that the Fluent Bit OTel output plugin seems to support the ability to add attributes; however, whenever I tried to define some, they did not appear to get added or sent. For example, if I added the following configuration:

[OUTPUT]
    Name opentelemetry
    Match *
    Host ingest.privateotel.com
    Port 443
    add_label            status_code 500

Then I would expect my payload to contain an attributes section containing "status_code": "500"; however, this did not happen. But maybe the add_label field is not intended for this? If not, then I'm not sure what add_label is for, and I would suggest adding the ability to add attributes.

@cb645j from what I understand the existing add_label configuration for the opentelemetry output is only for metrics, not logs. The Fluent Bit docs state the following:

This allows you to add custom labels to all metrics exposed through the OpenTelemetry exporter. You may have multiple of these fields

I'm not sure how metrics and logs are differentiated internally in Fluent Bit, but I just wanted to note that I also tried add_label to no avail before finding this issue.

@cb645j
Contributor Author

cb645j commented Jan 23, 2024

@cb645j from what I understand the existing add_label configuration for the opentelemetry output is only for metrics, not logs. The Fluent Bit docs state the following:

I agree. Therefore, it would be nice if Fluent Bit added support for adding attributes. However, in my opinion this is lower priority; fixing the overall structure is the main priority for this issue.

@cb645j
Contributor Author

cb645j commented Jan 30, 2024

@edsiper any update here?

@edsiper
Member

edsiper commented Feb 1, 2024

There are a couple of use cases; I am looking to get your feedback and confirm expectations:

Inside Fluent Bit, all messages are structured, something that started as a hello world will be internally represented as {"log": "hello world"}. Most users use a parser to create a structure.

  • if only one key exists in the record, should that key be the only one set in the OTel body?
  • if more than one key exists, which key should stay inside body and which ones go to attributes?

Would it make sense to pre-define certain keys that can be part of the body? E.g., if the record contains the keys log or message, by default they would be set inside body. Thoughts?

Note: putting attributes handling aside for a moment to brainstorm the logic for handling the expected output structure.

@sudomateo
Contributor

Inside Fluent Bit, all messages are structured, something that started as a hello world will be internally represented as {"log": "hello world"}. Most users use a parser to create a structure.

Fluent Bit v2.1.0 and later updated the event format to add support for metadata.

[[TIMESTAMP, METADATA], MESSAGE]

Depending on what this metadata is meant to be used for, I can see a use case where the metadata is used to hold event attributes.
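
For reference, the stdout record at the top of this issue already shows this shape, with an empty metadata map between the timestamp and the record body:

[[1704434606.150000000, {}], {"message"=>"This is a test log message for abc application", "loglevel"=>"INFO", ...}]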

Would it make sense to pre-define certain keys that can be part of the body? E.g., if the record contains the keys log or message, by default they would be set inside body. Thoughts?

There's precedent for this. For example, the datadog output plugin uses the dd_message_key configuration to set which key in the event should map back to Datadog's message field, analogous to OpenTelemetry's body field. The rest of the keys on the event are treated as fields in Datadog. I can see a similar configuration option being exposed in the opentelemetry output plugin.
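
For comparison, a minimal sketch of that datadog output usage; the host and API key here are placeholders, and dd_message_key picks which record key becomes Datadog's message field:

[OUTPUT]
    Name           datadog
    Match          *
    Host           http-intake.logs.datadoghq.com
    TLS            on
    apikey         <DD_API_KEY>
    dd_message_key log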

* if only one key exists in the record, should that key be the only one set in the OTel `body`?

I can see a case for both sides here. If the event only contains one key, then it's easy to want to assume that key must contain the event body. However, how would such an event be distinguished from a structured event of only one key? This is where I start to lean towards some new configuration option (i.e., otel_body_key) that, when set, tells Fluent Bit to use that key for the body and use all other keys as attributes. Otherwise, when unset, tells Fluent Bit to send the entire event inside the body as a structured event.

* if more than one key exists, which key should stay inside `body` and which ones go to `attributes`?

Similar concerns as above. How does one differentiate between a structured event that should completely go inside the body and a mixed event where one key should be the body and the other keys should be the attributes? Again, I think exposing some configuration options here could be the way forward (i.e., otel_body_key).
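
A hypothetical sketch of that proposal (otel_body_key is only a suggested name in this comment, not a shipped option; the PR discussed later in this thread landed on logs_body_key):

[OUTPUT]
    Name          opentelemetry
    Match         *
    Host          localhost
    Port          4318
    # hypothetical: use the value of "message" as the OTel body,
    # and send all remaining keys as attributes
    otel_body_key message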

This is just my opinion. Curious to hear what others think.

@edsiper
Member

edsiper commented Feb 2, 2024

@sudomateo thanks for your feedback. I came up with something similar. I am working on a POC of this logic; I will keep you posted.

@cb645j
Contributor Author

cb645j commented Feb 5, 2024

I believe the most important thing to start with is getting the span_id, trace_id, and other predefined fields in the correct structure (outside of body).

To me, when there is more than one key, the key that goes in the body is the key that represents the log message (maybe let the user define in settings what that key is). Then, keys such as Timestamp, SpanId, TraceId, SeverityText, and SeverityNumber would each be their own element. The remaining keys would go inside attributes.
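
Applying that rule to the sample record from the top of this issue would give something like the sketch below (hand-written to illustrate the mapping, not actual plugin output):

{
  "Timestamp": "2024-01-08 18:37:08.150",
  "TraceId": "7ada6c95a1bd243fa9013cab515173a9",
  "SpanId": "9c1544cc4f7ff369",
  "SeverityText": "INFO",
  "Body": "This is a test log message for abc application",
  "Attributes": {
    "service": "helloworld",
    "clientIp": "111.222.888",
    "testtag": "fluentbit"
  }
}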

@cb645j
Contributor Author

cb645j commented Feb 5, 2024

I can see a case for both sides here. If the event only contains one key, then it's easy to want to assume that key must contain the event body. However, how would such an event be distinguished from a structured event of only one key? This is where I start to lean towards some new configuration option (i.e., otel_body_key) that, when set, tells Fluent Bit to use that key for the body and use all other keys as attributes. Otherwise, when unset, tells Fluent Bit to send the entire event inside the body as a structured event.

@sudomateo yes, except if the key is something like span_id or trace_id, etc. Those should NOT go in attributes; they should be their own elements (at the same level as body and attributes).

@edsiper
Member

edsiper commented Feb 6, 2024

some updates:

  • I have a new branch with new support for defining the body key, with multiple options.
  • Working now on leaving the remaining fields as attributes.

I will ping you here once I push the branch so you can give it a try.

@cb645j
Contributor Author

cb645j commented Feb 6, 2024

some updates:

  • I have a new branch with new support for defining the body key, with multiple options.
  • Working now on leaving the remaining fields as attributes.

I will ping you here once I push the branch so you can give it a try.

Thanks, please share the branch once pushed. Also, can you confirm that you are setting trace_id and span_id as separate root fields, NOT nested within body or attributes?

I.e.

{
  "Attributes": {
      "status_code": 500,
      "url": "http://example.com"
  },
  "TraceId": "f4dbb3edd765f620", 
  "SpanId": "43222c2d51a7abe3",
  "Body": "I like donuts"
}

@sudomateo
Contributor

Thanks for working on this @edsiper! Let us know when you have something ready for us to test.

To echo @cb645j, I agree that Fluent Bit should populate and/or expose a mechanism to populate all the top-level fields in the Logs Data Model.

Field Name            Description
Timestamp             Time when the event occurred.
ObservedTimestamp     Time when the event was observed.
TraceId               Request trace id.
SpanId                Request span id.
TraceFlags            W3C trace flag.
SeverityText          The severity text (also known as log level).
SeverityNumber        Numerical value of the severity.
Body                  The body of the log record.
Resource              Describes the source of the log.
InstrumentationScope  Describes the scope that emitted the log.
Attributes            Additional information about the event.

Not just TraceId and SpanId, but SeverityText, SeverityNumber, etc. Then the Body can be populated with the user's preferred key, and the Attributes can be populated with the remaining keys.

edsiper added a commit that referenced this issue Feb 16, 2024
…ix #8359)

The following patch fixes and enhances the OpenTelemetry output connector when handling log records.

In the Fluent Bit world, we deal with tons of unstructured log records coming from a variety of sources, or simply raw text files. When converting those lines to structured messages, there was no option to define what would become the log body and log attributes; everything was packaged inside the log body by default.

This patch enhances the previous behavior by allowing the following:

- log body: optionally define multiple record accessor patterns that try
            to match a key or sub-key from the record structure.
            For the first matched key, its value is used as part of
            the body content.

            If no match exists, the whole record is set inside the body.

- log attributes: if the log record contains native metadata, the keys
                  are added as OpenTelemetry Attributes.

                  if the log body was populated by using a record accessor
                  pattern as described above, the remaining keys that were
                  not used are added as attributes.

To achieve the desired new behavior, the configuration needs to use the new
configuration property called 'logs_body_key', which can be used to define
multiple record accessor patterns, e.g.:

  pipeline:
    inputs:
      - name: dummy
        metadata: '{"meta": "data"}'
        dummy: '{"name": "bill", "lastname": "gates", "log": {"abc": {"def":123}, "saveme": true}}'

    outputs:
      - name: opentelemetry
        match: '*'
        host: localhost
        port: 4318
        logs_body_key: $name
        logs_body_key: $log['abc']['def']

In the example above, the dummy input plugin will generate a record with
metadata. On the output side, the plugin will look up $name and then
$log['abc']['def'], in that order. $name will match, so 'bill' will become
the value of the body and the remaining content becomes attributes. Here is
the output of the vanilla OTel Collector when inspecting the content it
receives:

  Body: Str(bill)
  Attributes:
       -> meta: Str(data)
       -> lastname: Str(gates)
       -> log: Map({"abc":{"def":123},"saveme":true})

Signed-off-by: Eduardo Silva <eduardo@calyptia.com>
@edsiper
Member

edsiper commented Feb 16, 2024

The first PR is here: #8491

Please help review the behavior based on the "features it adds".

@nokute78
Collaborator

nokute78 commented Feb 17, 2024

I also sent other PRs. They are for forwarding OTLP.

@sudomateo
Contributor

I tested @edsiper's changes and wrote up my notes in #8491 (comment).

@cb645j
Contributor Author

cb645j commented Feb 26, 2024

I did not get a chance to test, but I reviewed @sudomateo's notes and results and pretty much agree with them. I see the PR has been merged. When will it be available in a release?

@cb645j
Contributor Author

cb645j commented Feb 26, 2024

@sudomateo how did you go about compiling and creating a new image? How did you compile? Which Dockerfile did you use?

@sudomateo
Contributor

@sudomateo how did you go about compiling and creating a new image? How did you compile? Which Dockerfile did you use?

I followed the Build from Scratch instructions.

cd build
cmake ..
make

Then you'll have bin/fluent-bit to use as you see fit. I did have to brew install a few things on my macOS machine (e.g., brew install pkg-config), but on Linux I already had what I needed to build.
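
For anyone else trying this, the full sequence on Linux/macOS would look roughly like the sketch below (assuming git, cmake, make, and a C compiler are already installed):

git clone https://github.com/fluent/fluent-bit.git
cd fluent-bit/build   # the build directory already exists in the repo
cmake ..              # generate the Makefile
make                  # compile; the binary lands in bin/
./bin/fluent-bit --version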

@cb645j
Contributor Author

cb645j commented Mar 4, 2024

@edsiper which release/image will this be part of?

@cb645j
Contributor Author

cb645j commented Mar 4, 2024

@sudomateo I got past the cmake .. step, but then when I run make I get:

$ make
make: *** No targets specified and no makefile found. Stop.

I don't see a Makefile in /build?

@sudomateo
Contributor

After the cmake .. step, build should be populated like so.

> ls -la
total 10936
drwxr-xr-x@  20 sudomateo  staff      640 Mar  4 16:06 ./
drwxr-xr-x@  53 sudomateo  staff     1696 Feb 17 14:15 ../
-rw-r--r--@   1 sudomateo  staff        0 Feb 17 14:15 .empty
-rw-r--r--@   1 sudomateo  staff    77703 Mar  4 16:06 CMakeCache.txt
drwxr-xr-x@  18 sudomateo  staff      576 Mar  4 16:06 CMakeFiles/
-rw-r--r--@   1 sudomateo  staff     5406 Mar  4 16:06 CPackConfig.cmake
-rw-r--r--@   1 sudomateo  staff     5868 Mar  4 16:06 CPackSourceConfig.cmake
-rw-r--r--@   1 sudomateo  staff   101640 Mar  4 16:06 Makefile
drwxr-xr-x@   4 sudomateo  staff      128 Mar  4 16:06 backtrace-prefix/
drwxr-xr-x@   2 sudomateo  staff       64 Mar  4 16:06 bin/
drwxr-xr-x@   3 sudomateo  staff       96 Mar  4 16:06 certs/
-rw-r--r--@   1 sudomateo  staff     2804 Mar  4 16:06 cmake_install.cmake
-rw-r--r--@   1 sudomateo  staff  4452022 Mar  4 16:06 compile_commands.json
drwxr-xr-x@   7 sudomateo  staff      224 Mar  4 16:06 examples/
drwxr-xr-x@   5 sudomateo  staff      160 Mar  4 16:06 include/
drwxr-xr-x@  20 sudomateo  staff      640 Mar  4 16:06 lib/
drwxr-xr-x@   2 sudomateo  staff       64 Mar  4 16:06 library/
drwxr-xr-x@ 108 sudomateo  staff     3456 Mar  4 16:06 plugins/
drwxr-xr-x@  12 sudomateo  staff      384 Mar  4 16:06 src/
drwxr-xr-x@   3 sudomateo  staff       96 Mar  4 16:06 tools/

Seems like your cmake .. did not fully complete correctly. What's in your build directory?

@cb645j
Contributor Author

cb645j commented Mar 4, 2024

@sudomateo thanks for your response. It's possible cmake didn't complete, but it appeared like it did... I don't see a Makefile in your /build either?

These are the contents of my /build:

$ ls -la
total 516
drwxr-xr-x 1 cb645j 1049089      0 Mar  4 16:10 ./
drwxr-xr-x 1 cb645j 1049089      0 Mar  4 16:14 ../
drwxr-xr-x 1 cb645j 1049089      0 Jan 10 13:27 .cmake/
-rw-r--r-- 1 cb645j 1049089      0 Jan  9 14:02 .empty
-rw-r--r-- 1 cb645j 1049089  24411 Jan 10 16:01 ALL_BUILD.vcxproj
-rw-r--r-- 1 cb645j 1049089    293 Jan 10 16:01 ALL_BUILD.vcxproj.filters
-rw-r--r-- 1 cb645j 1049089  74732 Jan 10 16:01 CMakeCache.txt
drwxr-xr-x 1 cb645j 1049089      0 Mar  4 16:10 CMakeFiles/
-rw-r--r-- 1 cb645j 1049089   4846 Mar  4 16:10 CPackConfig.cmake
-rw-r--r-- 1 cb645j 1049089   5299 Mar  4 16:10 CPackSourceConfig.cmake
-rw-r--r-- 1 cb645j 1049089   6448 Jan 10 16:01 INSTALL.vcxproj
-rw-r--r-- 1 cb645j 1049089    535 Jan 10 16:01 INSTALL.vcxproj.filters
-rw-r--r-- 1 cb645j 1049089  10351 Jan 10 16:01 LICENSE.txt
-rw-r--r-- 1 cb645j 1049089   6683 Jan 10 16:01 PACKAGE.vcxproj
-rw-r--r-- 1 cb645j 1049089    535 Jan 10 16:01 PACKAGE.vcxproj.filters
-rw-r--r-- 1 cb645j 1049089   6454 Jan 10 16:01 WIX.template.in
-rw-r--r-- 1 cb645j 1049089  89313 Jan 12 13:58 ZERO_CHECK.vcxproj
-rw-r--r-- 1 cb645j 1049089    536 Jan 10 16:01 ZERO_CHECK.vcxproj.filters
-rw-r--r-- 1 cb645j 1049089   2393 Jan 10 16:01 cmake_install.cmake
drwxr-xr-x 1 cb645j 1049089      0 Mar  4 16:10 examples/
-rw-r--r-- 1 cb645j 1049089 109613 Jan 10 16:02 fluent-bit.sln
drwxr-xr-x 1 cb645j 1049089      0 Mar  4 16:10 include/
drwxr-xr-x 1 cb645j 1049089      0 Jan 10 16:01 lib/
drwxr-xr-x 1 cb645j 1049089      0 Mar  4 16:10 plugins/
drwxr-xr-x 1 cb645j 1049089      0 Mar  4 16:10 src/
drwxr-xr-x 1 cb645j 1049089      0 Jan 10 16:00 tools/

@cb645j
Contributor Author

cb645j commented Mar 4, 2024

@sudomateo my cmake ends with:

-- Generating done (8.2s)
-- Build files have been written to: C:/Users/cb645j/Repositories/fluentbit/build

so I assumed it completed successfully? The only things I see in it that look alarming are these:


-- Could NOT find PostgreSQL (missing: PostgreSQL_LIBRARY PostgreSQL_INCLUDE_DIR)
-- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
-- Specifying YY_NO_UNISTD_H
-- Specifying YY_NO_UNISTD_H
CPack warning: both CPACK_COMPONENTS_ALL and CPACK_MONOLITHIC_INSTALL have been set.
Defaulting to a monolithic installation.
-- Configuring done (11.2s)
CMake Warning (dev) at lib/monkey/mk_core/deps/libevent/CMakeLists.txt:803 (add_library):
  Policy CMP0115 is not set: Source file extensions must be explicit.  Run
  "cmake --help-policy CMP0115" for policy details.  Use the cmake_policy
  command to set the policy and suppress this warning.


-- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
-- Could NOT find Journald (missing: JOURNALD_LIBRARY JOURNALD_INCLUDE_DIR)
-- Could NOT find LibEdit (missing: libedit_INCLUDE_DIRS libedit_LIBRARIES)

@sudomateo
Contributor

That is weird. I was building on a macOS machine with an Apple Silicon chip. I got errors when I didn't have pkg-config, perhaps that's the issue for you?

@cb645j
Contributor Author

cb645j commented Mar 5, 2024

That is weird. I was building on a macOS machine with an Apple Silicon chip. I got errors when I didn't have pkg-config, perhaps that's the issue for you?

Possibly, let me try installing it. It's just odd that cmake completes without errors. It does look like I'm missing a couple of directories that you have. What directory are the Makefiles in? I'm assuming these get generated or downloaded when you run cmake.

@cb645j
Contributor Author

cb645j commented Mar 5, 2024

@edsiper when will this be available in a release? I.e., when will a new image containing this be available?

1 similar comment
@cb645j
Contributor Author

cb645j commented Mar 7, 2024

@edsiper when will this be available in a release? I.e., when will a new image containing this be available?

@kevarr

kevarr commented Mar 14, 2024

A little late to the party here, but I wanted to say very nice work on this. I've been using the Fluent output with OTel's fluent forward receiver, which places all of the log records under the "resources" key. This is, AFAICT, an undocumented feature of the receiver, and it also requires complex restructuring pipelines to get all of the attributes/resources in the right places.

To build off of this capability, it would be awesome to have support for automatically parsing records supplied by the Kubernetes filter into resource attributes that match those provided by OpenTelemetry's k8sattributes processor.

@cb645j
Contributor Author

cb645j commented Mar 14, 2024

@kevarr Thank you. Yes, I experimented with the approach you described; however, I want to streamline and simplify the process by just using Fluent Bit (without a collector) to export in OTel format. It seems a lot of people are interested in this. It's unfortunate that the Fluent Bit OTel output does not work properly and support all of the OTel fields, and it's very difficult to get the owners/contributors of Fluent Bit to make changes and respond.

I have a follow-up ticket here for the things that are still missing: #8552
