
Improvements to kafka checks #276

Merged
merged 1 commit into cloud-bulldozer:master from kafka_improvements on May 24, 2021

Conversation

smalleni
Collaborator

  • Update backend details in the doc sent to ES
  • Close the kafka consumer
  • Increase the `end_time` used to get the kafka offset.
    This is because, unlike ES, the timestamp is the one created
    by Kafka when the log message is recorded in Kafka. For example,
    in the case of ES, the timestamp added by FluentD becomes the timestamp
    field in ES, so we can query accurately for log messages between the
    `start_time` and `end_time` of the log generation test. In the case
    of Kafka, however, we cannot accurately check for the log messages generated during
    the test, because the records in Kafka and their corresponding timestamps for
    the offsets could be well after the `end_time` of the test as the messages
    slowly show up in the kafka cluster, and there is no way to use the timestamp from
    FluentD as the timestamp for the kafka offset: the FluentD timestamp is only
    part of the content of the Kafka record. We currently check for all messages from the
    `start_time` and before the `timeout` configured in the CR (see the sketch below).

Signed-off-by: Sai Sindhur Malleni smalleni@redhat.com
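
For reference, a minimal sketch of the timestamp-based offset lookup described in the last bullet, assuming the kafka-python client; the function name, broker address parameter, and window handling here are illustrative assumptions, not the wrapper's actual code:

```python
# Minimal sketch (not the wrapper's actual code): look up Kafka offsets by
# timestamp with a window padded by the CR timeout, then close the consumer.
# Assumes the kafka-python client; parameter names are illustrative.
from kafka import KafkaConsumer, TopicPartition


def count_messages_in_window(bootstrap_servers, topic, start_time_ms, timeout_s):
    """Count records whose Kafka (broker) timestamps fall between start_time
    and start_time + timeout, since the broker timestamp can lag well behind
    the test's own end_time."""
    consumer = KafkaConsumer(bootstrap_servers=bootstrap_servers)
    try:
        # Pad the window by the CR timeout instead of using the test end_time.
        end_time_ms = start_time_ms + timeout_s * 1000
        partitions = [
            TopicPartition(topic, p)
            for p in (consumer.partitions_for_topic(topic) or set())
        ]
        # offsets_for_times maps each partition to the earliest offset whose
        # broker timestamp is >= the requested timestamp, or None if no such
        # record exists yet.
        start_offsets = consumer.offsets_for_times(
            {tp: start_time_ms for tp in partitions}
        )
        end_offsets = consumer.offsets_for_times(
            {tp: end_time_ms for tp in partitions}
        )
        total = 0
        for tp in partitions:
            start = start_offsets.get(tp)
            end = end_offsets.get(tp)
            if start is None:
                continue  # no records at or after start_time in this partition
            # If no record is newer than the padded end_time yet, fall back to
            # the current log-end offset.
            end_offset = end.offset if end else consumer.end_offsets([tp])[tp]
            total += end_offset - start.offset
        return total
    finally:
        consumer.close()  # always release the consumer, as this PR does
```

The design point is that the right-hand edge of the window is derived from the CR `timeout` rather than from the test's `end_time`, so records that arrive in Kafka late are still counted.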


@smalleni smalleni force-pushed the kafka_improvements branch 2 times, most recently from 40c5db1 to 212c8e9 on May 24, 2021 14:51
@smalleni smalleni requested a review from dry923 May 24, 2021 14:51
Member

@dry923 dry923 left a comment


LGTM!

@dry923 dry923 added the ok to test (Kick off our CI framework) label on May 24, 2021
@dry923
Member

dry923 commented May 24, 2021

/rerun all

@comet-perf-ci
Collaborator

Results for SNAFU CI Test

| Test | Result | Runtime |
| --- | --- | --- |
| snafu/log_generator_wrapper | PASS | 00:03:58 |

@dry923 dry923 merged commit 24fa51a into cloud-bulldozer:master May 24, 2021