update kafka readme #18607

Merged 4 commits on Sep 19, 2024
3 changes: 0 additions & 3 deletions kafka/README.md
@@ -6,8 +6,6 @@

View Kafka broker metrics collected for a 360-view of the health and performance of your Kafka clusters in real time. With this integration, you can collect metrics and logs from your Kafka deployment to visualize telemetry and alert on the performance of your Kafka stack.

If you would benefit from visualizing the topology of your streaming data pipelines and identifying the root cause of bottlenecks, learn more about [Data Streams Monitoring][24].

**Note**:
- This check has a limit of 350 metrics per instance. The number of returned metrics is indicated in the Agent status output. Specify the metrics you are interested in by editing the configuration below. For more detailed instructions on customizing the metrics to collect, see the [JMX Checks documentation][2].
- The sample configuration attached to this integration works only with Kafka >= 0.8.2.
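The per-instance metric cap described in the note above can be illustrated with a small sketch. The limit value (350) comes from the note; the metric names are hypothetical:

```python
METRICS_PER_INSTANCE_LIMIT = 350  # per-instance cap stated in the note above

# Hypothetical list of metric names reported by one JMX instance.
collected = [f"kafka.metric.{i}" for i in range(400)]

# Metrics beyond the cap are dropped; the Agent status output reports
# how many metrics were actually returned.
reported = collected[:METRICS_PER_INSTANCE_LIMIT]
print(len(reported))  # 350
```

This is why the note recommends editing the configuration to name only the metrics you need, rather than relying on truncation.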
@@ -175,5 +173,4 @@ See [service_checks.json][15] for a list of service checks provided by this integration.
[21]: https://www.datadoghq.com/blog/monitor-kafka-with-datadog
[22]: https://raw.githubusercontent.com/DataDog/dd-agent/5.2.1/conf.d/kafka.yaml.example
[23]: https://www.datadoghq.com/knowledge-center/apache-kafka/
[24]: https://www.datadoghq.com/product/data-streams-monitoring/
[25]: https://app.datadoghq.com/data-streams
3 changes: 0 additions & 3 deletions kafka_consumer/README.md
@@ -6,8 +6,6 @@

This Agent integration collects message offset metrics from your Kafka consumers. This check fetches the highwater offsets from the Kafka brokers, consumer offsets that are stored in Kafka (or Zookeeper for old-style consumers), and then calculates consumer lag (which is the difference between the broker offset and the consumer offset).
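The lag calculation described above can be sketched as follows. The topic names and offset values are hypothetical, chosen only to illustrate the per-partition arithmetic:

```python
# Consumer lag = broker highwater offset - committed consumer offset,
# computed per (topic, partition) pair.
highwater_offsets = {("orders", 0): 1500, ("orders", 1): 980}  # hypothetical broker offsets
consumer_offsets = {("orders", 0): 1420, ("orders", 1): 980}   # hypothetical committed offsets

consumer_lag = {
    tp: highwater_offsets[tp] - consumer_offsets[tp]
    for tp in consumer_offsets
}
print(consumer_lag)  # {('orders', 0): 80, ('orders', 1): 0}
```

A lag of 0 means the consumer is fully caught up on that partition; a growing lag means the consumer is falling behind the producers.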

If you would benefit from visualizing the topology of your streaming data pipelines and identifying the root cause of bottlenecks, learn more about [Data Streams Monitoring][16].

**Note:**
- This integration checks consumer offsets before broker offsets; because of this ordering, consumer lag may be slightly overstated in the worst case. Checking the offsets in the reverse order can understate consumer lag to the point of producing negative values, a dire scenario that usually indicates messages are being skipped.
- If you want to collect JMX metrics from your Kafka brokers or Java-based consumers/producers, see the [Kafka Broker integration][19].
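The read-ordering rationale in the note above can be sketched with a hypothetical single-partition timeline (all offset values are illustrative):

```python
# Why the check reads consumer offsets BEFORE broker highwater offsets:
# any messages produced between the two reads only inflate the broker
# offset, so lag is at worst slightly overstated, never negative.

consumer_offset = 100  # read first (t0)
# ... producer appends 10 messages while the check is running ...
broker_offset = 110    # read second (t1), includes the in-flight messages

lag = broker_offset - consumer_offset
print(lag)  # 10: overstated by the in-flight messages, but always >= 0

# Reverse order, same timeline: broker offset read at t0 (say 105), then
# the consumer commits up to 108 before the second read at t1.
# Lag would be 105 - 108 = -3, a misleading negative value.
```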
@@ -143,7 +141,6 @@ sudo service datadog-agent restart
[13]: https://www.datadoghq.com/blog/monitoring-kafka-performance-metrics
[14]: https://www.datadoghq.com/blog/collecting-kafka-performance-metrics
[15]: https://www.datadoghq.com/blog/monitor-kafka-with-datadog
[16]: https://www.datadoghq.com/product/data-streams-monitoring/
Contributor review comment: since this is being removed I wonder if the other numbers should be shifted down

[17]: https://docs.datadoghq.com/containers/kubernetes/integrations/
[18]: https://app.datadoghq.com/data-streams
[19]: https://app.datadoghq.com/integrations/kafka?search=kafka