update kafka readme (#18607)
* update kafka readme

Remove the note about DSM now that we display the product connection as a UI component in app

* update kafka_consumer readme

Remove the note about DSM now that we display the product connection as a UI component in app

* remove trailing link

* remove trailing link
eho1307 committed Sep 19, 2024
1 parent 6faa14c commit ddf0510
Showing 2 changed files with 0 additions and 6 deletions.
3 changes: 0 additions & 3 deletions kafka/README.md
@@ -6,8 +6,6 @@

View Kafka broker metrics collected for a 360-view of the health and performance of your Kafka clusters in real time. With this integration, you can collect metrics and logs from your Kafka deployment to visualize telemetry and alert on the performance of your Kafka stack.

-If you would benefit from visualizing the topology of your streaming data pipelines and identifying the root cause of bottlenecks, learn more about [Data Streams Monitoring][24].
-
**Note**:
- This check has a limit of 350 metrics per instance. The number of returned metrics is indicated in the Agent status output. Specify the metrics you are interested in by editing the configuration below. For more detailed instructions on customizing the metrics to collect, see the [JMX Checks documentation][2].
- The sample configuration attached to this integration works only for Kafka >= 0.8.2.
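As a sketch of how the per-instance metric limit is typically managed, the JMX check lets you whitelist beans with include filters so that only the metrics you care about count toward the limit. The domain, bean, and attribute values below are illustrative examples of the filter format, not settings prescribed by this README:

```yaml
# Illustrative sketch only: the bean and attribute names are hypothetical
# examples of the JMX check's include-filter syntax, not required values.
init_config:
  is_jmx: true

instances:
  - host: localhost
    port: 9999
    conf:
      - include:
          domain: kafka.server
          bean: kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec
          attribute:
            Count:
              alias: kafka.messages_in.rate
              metric_type: rate
```

Each `include` block narrows collection, so the number of metrics reported stays under the instance limit.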
@@ -175,5 +173,4 @@ See [service_checks.json][15] for a list of service checks provided by this integration.
[21]: https://www.datadoghq.com/blog/monitor-kafka-with-datadog
[22]: https://raw.githubusercontent.com/DataDog/dd-agent/5.2.1/conf.d/kafka.yaml.example
[23]: https://www.datadoghq.com/knowledge-center/apache-kafka/
-[24]: https://www.datadoghq.com/product/data-streams-monitoring/
[25]: https://app.datadoghq.com/data-streams
3 changes: 0 additions & 3 deletions kafka_consumer/README.md
@@ -6,8 +6,6 @@

This Agent integration collects message offset metrics from your Kafka consumers. This check fetches the highwater offsets from the Kafka brokers, consumer offsets that are stored in Kafka (or Zookeeper for old-style consumers), and then calculates consumer lag (which is the difference between the broker offset and the consumer offset).

-If you would benefit from visualizing the topology of your streaming data pipelines and identifying the root cause of bottlenecks, learn more about [Data Streams Monitoring][16].
-
**Note:**
- This integration ensures that consumer offsets are checked before broker offsets; in the worst case, consumer lag may be a little overstated. Checking these offsets in the reverse order can understate consumer lag to the point of having negative values, which is a dire scenario usually indicating messages are being skipped.
- If you want to collect JMX metrics from your Kafka brokers or Java-based consumers/producers, see the [Kafka Broker integration][19].
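The lag arithmetic described above can be sketched in a few lines of Python. This is a hypothetical helper, not the integration's actual code; the function and variable names are invented for illustration:

```python
# Hypothetical sketch: consumer lag per partition is the broker's highwater
# offset minus the consumer's committed offset for that partition.
def consumer_lag(highwater_offsets, consumer_offsets):
    """Return lag per (topic, partition) for every committed consumer offset.

    Both arguments map (topic, partition) tuples to integer offsets.
    """
    lag = {}
    for tp, committed in consumer_offsets.items():
        highwater = highwater_offsets.get(tp)
        if highwater is None:
            continue  # no broker highwater offset reported for this partition
        # Reading consumer offsets *before* highwater offsets can only
        # overstate lag slightly (the broker may advance in between);
        # the reverse order can produce negative, understated lag.
        lag[tp] = highwater - committed
    return lag

print(consumer_lag({("orders", 0): 120}, {("orders", 0): 100}))
# {('orders', 0): 20}
```

Fetching offsets in the order noted above is why the check tolerates slight overstatement rather than risking negative lag.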
@@ -143,7 +141,6 @@ sudo service datadog-agent restart
[13]: https://www.datadoghq.com/blog/monitoring-kafka-performance-metrics
[14]: https://www.datadoghq.com/blog/collecting-kafka-performance-metrics
[15]: https://www.datadoghq.com/blog/monitor-kafka-with-datadog
-[16]: https://www.datadoghq.com/product/data-streams-monitoring/
[17]: https://docs.datadoghq.com/containers/kubernetes/integrations/
[18]: https://app.datadoghq.com/data-streams
[19]: https://app.datadoghq.com/integrations/kafka?search=kafka
