Namespace annotations to prevent mapping collisions #854
Comments
The most common case would be some pods with: … Attached is a test case. Unpack it, and run:

```
fluent-bit -v -v -c tc/fluent-bit.conf
```

and you will see the error. In this case, my Elasticsearch server has previously observed a `kubernetes.labels.app` field and made that mapping 'text', and it now wants to insert an object. The solution, I believe, is to create an Elasticsearch index template: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html. This error is also discussed for Logstash, Filebeat, and Fluentd, so it's not unique to Fluent Bit.
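As a sketch of that approach (the template name, index pattern, and field paths here are illustrative assumptions, not from the thread), a legacy index template could pin the mapping of the colliding field up front. The `flattened` field type, available in newer Elasticsearch versions, maps the whole labels object as a single field and sidesteps per-key text/object conflicts:

```
PUT _template/kubernetes-logs
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "kubernetes": {
        "properties": {
          "labels": {
            "type": "flattened"
          }
        }
      }
    }
  }
}
```

With this in place, new indices matching the pattern no longer dynamically guess a type for each individual label key, so `kubernetes.labels.app` cannot end up as 'text' in one index and an object in another.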
I created a new Elasticsearch index template. Dots in field names are discouraged, and it doesn't seem possible to overload a key (`app`) with multiple value types (text/object).
I should add that another possibility would be to mutate or pre-process the field on ingestion with an Elasticsearch ingest pipeline, but I imagine that would be very resource-heavy for the ES cluster.
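For reference, such an ingest pipeline might look like the following minimal sketch (the pipeline name and field paths are assumptions): a script processor rewrites dots to underscores in the label keys at ingest time.

```
PUT _ingest/pipeline/dedot-labels
{
  "processors": [
    {
      "script": {
        "source": "def labels = ctx.kubernetes?.labels; if (labels != null) { def fixed = [:]; for (entry in labels.entrySet()) { fixed[entry.getKey().replace('.', '_')] = entry.getValue(); } ctx.kubernetes.labels = fixed; }"
      }
    }
  ]
}
```

As noted above, this runs per document on the cluster itself, which is why doing the same rewrite on the producer side (e.g. in the log shipper) is usually cheaper.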
The Elasticsearch template may solve the mapping collision, but it doesn't necessarily solve the underlying problem. This is especially true with JSON logs, where the app developers have total control. E.g. the key …

But as @donbowman pointed out in the Slack channel, sometimes keys have different names but actually do refer to the same thing, e.g. …

For this to work we need to be able to rename keys, and for it to work together with namespacing I think we also need the ability to mark specific keys as global. So maybe we need to decentralize some parts of the configuration and bring it closer to the apps. Let's say we have somehow defined that all log entries for …:

```yaml
apiVersion: fluentbit.io/v1alpha1
kind: KeyMap
mappings:
  statusCode:
    rename: code
    global: true
...
selector:
  matchLabels:
    app: my-app
```
Also note that when this error occurs, the log message is not inserted into Elasticsearch. Worse, it's added to a 'retry' queue in Fluent Bit, so it's quite expensive and achieving nothing: it retries many times. It should probably not retry fatal errors, only queue overflow or connection failures.
If we set `Replace_Dots On` in the elasticsearch output, the problem is 'resolved'. Does anyone have an issue with this? It's not the best (the label in Elasticsearch doesn't match the label in Kubernetes), but I'm not sure what else one could do.
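For anyone applying this workaround, a minimal sketch of the relevant output section (host, port, and match pattern are placeholders; `Replace_Dots` is a real Fluent Bit es output option):

```
[OUTPUT]
    Name            es
    Match           *
    Host            elasticsearch.example.com
    Port            9200
    Logstash_Format On
    Replace_Dots    On
```

With `Replace_Dots On`, a label key such as `app.kubernetes.io/component` is written with underscores instead of dots, so it no longer conflicts with an existing `kubernetes.labels.app` mapping.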
Thanks Don. Looks like a reasonable workaround.
I was being too picky and optimistic with respect to label clashes.
This addresses fluent/fluent-bit#854. Make 'Replace_Dots' settable for fluent-bit elasticsearch output, and default to 'On'. Signed-off-by: Don Bowman <db@donbowman.ca>
This problem still exists.
I'm facing this as well; is there any valid workaround available? Thanks.
Same issue here, getting this error:

```
{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/component]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}
```

Adding `Replace_Dots On` to the elasticsearch output really did fix the issue.
What would be the equivalent configuration for Fluentd? I've been stuck on this for a while now.
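Fluentd's Elasticsearch output has no direct `Replace_Dots` equivalent; one commonly cited option is the third-party `fluent-plugin-dedot_filter`, which rewrites dots in record keys before the output stage. A minimal sketch (the match pattern and separator are assumptions for illustration):

```
<filter kubernetes.**>
  @type dedot
  de_dot true
  de_dot_separator _
</filter>
```

Placed before the elasticsearch `<match>` block, this should turn keys like `app.kubernetes.io/component` into `app_kubernetes_io/component`, avoiding the object/text mapping conflict.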
Signed-off-by: Takahiro Yamashita <nokute78@gmail.com>
I consistently get mapping collisions like the following: …

I would like to be able to namespace the parsed JSON logs by a Kubernetes annotation.

Currently the JSON parser flattens the keys from the `log` field like this: … But if we had the ability to specify a namespace as an annotation, we could avoid indexing type collisions between apps.

E.g. with the following annotations … I would then get an output like this: …

This would prevent mapping collisions and also allow me to create logical groupings based on, e.g., the type of JSON logging library I'm using.
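To make the collision and the proposed fix concrete, here is a small hypothetical sketch (the `namespace_log` helper is not part of Fluent Bit, and the app names are made up): two apps reuse the same JSON key with incompatible types, which would collide in one shared mapping, but stop colliding once each app's keys are prefixed with a per-app namespace.

```python
import json

def namespace_log(record: dict, ns: str) -> dict:
    """Prefix each parsed JSON log key with a per-app namespace
    (hypothetical post-processing step mirroring the proposal)."""
    parsed = json.loads(record["log"])
    return {f"{ns}.{key}": value for key, value in parsed.items()}

# Two apps that use the same key with incompatible types:
a = {"log": json.dumps({"status": 200})}            # number
b = {"log": json.dumps({"status": {"code": 200}})}  # object

# Without namespacing, both flatten to a single "status" field,
# which Elasticsearch cannot map as both number and object.
# With namespacing, the field names no longer overlap:
print(namespace_log(a, "app-a"))  # {'app-a.status': 200}
print(namespace_log(b, "app-b"))  # {'app-b.status': {'code': 200}}
```

Because `app-a.status` and `app-b.status` are distinct fields, each can keep its own mapping type.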