
Disable SNAFU logging #273

Merged 1 commit into cloud-bulldozer:master on May 20, 2021
Conversation

smalleni (Collaborator)

This is a corner case needed during our log testing.
To accurately count the number of synthetic log messages
generated by the log-generator pod, we want to disable the regular SNAFU
logging. This makes debugging harder, but it is only something
a user optionally sets in the benchmark-operator CR. The reason we want
to do this is that we plan to deploy only the log-generator pods to a separate
namespace and forward their logs to a dedicated Kafka topic. We can then
reliably read the offset in that topic, without inspecting each message,
to verify that all log messages generated by the log-generator pod made it to Kafka.

Signed-off-by: Sai Sindhur Malleni <smalleni@redhat.com>
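On the wrapper side, the change described above could look like the following minimal sketch. The flag name, env var, and logger name are illustrative assumptions, not SNAFU's actual implementation:

```python
import logging
import os


def configure_logging(disable: bool) -> logging.Logger:
    """Set up the wrapper logger, optionally silencing all output.

    `disable` would be driven by an optional field in the
    benchmark-operator CR (names here are hypothetical).
    """
    logger = logging.getLogger("snafu")
    # NullHandler avoids "no handler" warnings when muted.
    logger.addHandler(logging.NullHandler())
    if disable:
        # A threshold above CRITICAL filters out every record.
        logger.setLevel(logging.CRITICAL + 1)
    else:
        logger.setLevel(logging.INFO)
    return logger


# Hypothetical env var the operator could set from the CR field.
log = configure_logging(disable=os.environ.get("DISABLE_LOGGING") == "true")
log.info("dropped entirely when logging is disabled")
```

Raising the level rather than removing handlers keeps the change trivially reversible: normal debugging behavior returns as soon as the CR field is unset.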

smalleni added a commit to smalleni/benchmark-operator that referenced this pull request May 19, 2021
This along with cloud-bulldozer/benchmark-wrapper#273
helps suppress any unneeded logging in the pod, so that we can accurately and easily count
the number of log messages received in a backend like Kafka merely by using the offsets.
The plan is to deploy the log-generator pods in a separate namespace and forward their logs
to a topic in Kafka. That way we can reliably count the messages received just
by looking at the Kafka topic offset. Otherwise, other logs from the log-generator pods
as well as the benchmark-operator pod would make it hard to reliably count received logs by
Kafka offset alone.

Signed-off-by: Sai Sindhur Malleni <smalleni@redhat.com>
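Counting by offsets is simple arithmetic over each partition's beginning and end offsets. A sketch of the verification step, in plain Python with no Kafka client (the offset maps stand in for what a client's beginning/end offset lookups would return):

```python
def count_messages(beginning_offsets: dict, end_offsets: dict) -> int:
    """Total messages in a topic, given per-partition offset maps.

    Each argument maps partition id -> offset. The per-partition
    difference is the number of messages that partition holds; summing
    across partitions gives the topic total, which can be compared
    against the number of synthetic log messages the generator emitted.
    """
    return sum(end_offsets[p] - beginning_offsets[p] for p in end_offsets)


# e.g. a 3-partition topic that received 1000 messages in total
begin = {0: 0, 1: 0, 2: 0}
end = {0: 340, 1: 330, 2: 330}
assert count_messages(begin, end) == 1000
```

This only works if the topic receives nothing but the generator's messages, which is exactly why the PR silences SNAFU's own logging and isolates the pods in their own namespace.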
@smalleni smalleni requested a review from dry923 May 19, 2021 23:22
@dry923 dry923 added the ok to test Kick off our CI framework label May 20, 2021
@dry923 (Member)

dry923 commented May 20, 2021

/rerun all

@comet-perf-ci (Collaborator)

Results for SNAFU CI Test

Test | Result | Runtime
snafu/log_generator_wrapper | PASS | 00:04:27

@dry923 (Member) left a comment

LGTM!

@dry923 dry923 merged commit 6cb5709 into cloud-bulldozer:master May 20, 2021
dry923 pushed a commit to cloud-bulldozer/benchmark-operator that referenced this pull request May 20, 2021
ebattat pushed a commit to ebattat/benchmark-operator that referenced this pull request Jun 10, 2021
amitsagtani97 pushed a commit to cloud-bulldozer/benchmark-operator that referenced this pull request Jun 11, 2021
* add hammerdb vm support CNV-6501 and pod support for mariadb and postgres

* add generic hammerdb cr

* add hammerdb vm example

* change hammerdb crds hierarchy according to database type

* fixes after review

* fix hammerdb mssql test

* revert sql server namespace

* revert transactions number

* update transactions number to 500k

* update transactions to 100000

* update transactions to 100000

* update transactions to 10000 for fast run

* fix hammer workload name

* add creator pod wait

* add debug true

* revert app label to hammerdb_workload

* fix type name

* temporary fix in common.sh

* revert my common.sh changes

* change db init to false

* change db init to true

* update changes to support operator-sdk version 1.5.0

* update changes to support operator-sdk version 1.5.0

* enlarge the timeout from 500 to 800

* increase timeout to 1000

* revert timeout to 500s

* add pin and resources support

* add mssql 2019 image and creator pod

* revert it back to legacy mssql test

* add es custom fields support

* fix image example name

* update changes to support operator-sdk version 1.5.0

* add latest changes

* update changes to support operator-sdk version 1.5.0

* fix operator-sdk version 1.5.0

* add os version

* fixes after changes

* add es_os_version

* update changes to support operator-sdk version 1.5.0

* fix hammerdb doc - database per the CR file

* fix hammerdb doc

* fix hammerdb doc

* add es_kind

* add es_kind to cr and fix merge conflict

* remove .idea

* update changes to support operator-sdk version 1.5.0

* adding cerberus validate certs parameter

Signed-off-by: Kedar Vijay Kulkarni <kkulkarni@redhat.com>

* Remove magazine section from CONTRIBUTING.md

Signed-off-by: Kedar Vijay Kulkarni <kkulkarni@redhat.com>

* Add support for kafka as log backend for verification

Signed-off-by: Sai Sindhur Malleni <smalleni@redhat.com>

* Expand README

Signed-off-by: Kedar Vijay Kulkarni <kkulkarni@redhat.com>

* Quiesce logging in pod for log generator workload

Signed-off-by: Sai Sindhur Malleni <smalleni@redhat.com>

* removed line breaks for trex tasks only

* mounting module path for mlnx

* updated doc for mlnx sriov policy

* Update installation.md

* Make sink verification optional for kafka

Signed-off-by: Sai Sindhur Malleni <smalleni@redhat.com>

* Auto osd cache drop (#570)

only do ceph osd cache dropping if user requests it
default to openshift for benchmark-operator
add option to drop Ceph OSD cache to CR
document Ceph OSD cache dropping
user must start cache dropper and ceph toolbox pod
test both OSD cache dropping and kernel cache dropping at same time
only if openshift-storage namespace is defined

* replace "preview" with "working" in hammerdb doc

* update changes to support operator-sdk version 1.5.0

* replace "preview" with "working" in hammerdb doc

* remove stressng fixes

Co-authored-by: Kedar Vijay Kulkarni <kkulkarni@redhat.com>
Co-authored-by: Sai Sindhur Malleni <smalleni@redhat.com>
Co-authored-by: Murali Krishnasamy <mukrishn@redhat.com>
Co-authored-by: Ayesha Vijay Kumar <84931574+Ayesha279@users.noreply.github.com>
Co-authored-by: Ben England <bengland@redhat.com>
Labels
ok to test Kick off our CI framework
3 participants