Need to Configure AWS for Fluent Bit to Send Raw Logs to S3 for Spark History Server #1156

Open
vinodkumar-wagh opened this issue Sep 25, 2024 · 0 comments
Labels
enhancement New feature or request

Description:
We are using the "aws-for-fluent-bit" application to ship logs from an Amazon EKS cluster to an S3 bucket. Our current pipeline enriches the logs with Kubernetes metadata (e.g., pod name, namespace) and additional fields, which suits most use cases. However, for a Spark history server we need to deliver the logs in their raw format, exactly as generated by the application, without any added metadata or fields.

The challenge we are facing is twofold:

  1. We need to send raw logs (without enrichment) to the S3 bucket for one application.
  2. For all other applications, we want to continue sending logs with the current enriched format, including Kubernetes metadata.

We are seeking guidance on how to configure the "aws-for-fluent-bit" application to:
• Process logs as raw data and send them to S3.
• Continue sending enriched logs for all other applications.
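One way to achieve this split (a sketch, not an official recommendation) is to tail the Spark history server's container logs under their own tag and apply the kubernetes filter only to the other tag, so the raw stream is never enriched. The tag names and log paths below are hypothetical placeholders:

```
# Spark history server logs get their own tag and skip enrichment.
# The path pattern is a placeholder -- adjust to your pod names.
[INPUT]
    Name          tail
    Tag           raw.spark.*
    Path          /var/log/containers/spark-history-server-*.log

# All other container logs keep the existing enriched pipeline.
[INPUT]
    Name          tail
    Tag           kube.*
    Path          /var/log/containers/*.log
    Exclude_Path  /var/log/containers/spark-history-server-*.log

# The kubernetes filter only matches kube.*, so raw.spark.* records
# are never enriched with pod/namespace metadata.
[FILTER]
    Name   kubernetes
    Match  kube.*
```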

Request:
• How can we configure Fluent Bit to handle logs differently based on the Kubernetes app, ensuring raw logs for one app and enriched logs for others?
• Are there best practices or specific configuration recommendations for this use case?
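On the output side, two `s3` blocks matched by tag could route each stream separately. The s3 output's `log_key` option emits only the value of the named field (here `log`, the original container line) instead of the full JSON record, which gets the raw format the history server expects. Bucket names, region, and key formats below are placeholders, not a tested setup:

```
# Raw stream for the Spark history server: log_key emits only the
# original log line, not the wrapped JSON record.
[OUTPUT]
    Name           s3
    Match          raw.spark.*
    bucket         my-spark-history-bucket
    region         us-east-1
    log_key        log
    s3_key_format  /spark-events/%Y/%m/%d/$UUID.log

# Enriched stream for everything else, unchanged from the current setup.
[OUTPUT]
    Name    s3
    Match   kube.*
    bucket  my-enriched-logs-bucket
    region  us-east-1
```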

@vinodkumar-wagh vinodkumar-wagh added the enhancement New feature or request label Sep 25, 2024