Uperf benchmark fails: Unable to run uperf as benchmark #646

Closed
MuhammadMunir12 opened this issue Aug 18, 2021 · 34 comments

@MuhammadMunir12

Hi,

I am trying to run uperf as a benchmark using the following two CRs. The first one creates the server pod only and gets stuck creating the client pod, whereas the second fails to run the benchmark at all.

CR 1

apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: uperf
  namespace: my-ripsaw
spec:
  workload:
    name: uperf
    args:
      hostnetwork: false
      serviceip: false
      pin: true
      pin_server: "r192bc1.oss.labs"
      pin_client: "r192bc1.oss.labs"
      samples: 1
      pair: 1
      test_types:
        - rr
      protos:
        - tcp
      sizes:
        - 1024
      nthrs:
        - 1
      runtime: 60

CR2

apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: uperf-benchmark
  namespace: my-ripsaw
spec:
  workload:
    name: uperf
    args:
      client_resources:
        requests:
          cpu: 500m
          memory: 500Mi
        limits:
          cpu: 500m
          memory: 500Mi
      server_resources:
        requests:
          cpu: 500m
          memory: 500Mi
        limits:
          cpu: 500m
          memory: 500Mi
      serviceip: false
      runtime_class: false
      hostnetwork: false
      networkpolicy: false
      pin: true
      kind: pod
      pin_server: "r192bc1.oss.labs"
      pin_client: "r192bc1.oss.labs"
      pair: 1
      multus:
        enabled: false
      samples: 1
      test_types:
        - stream
      protos:
        - tcp
      sizes:
        - 16384
      nthrs:
        - 1
      runtime: 30
      colocate: false
      density_range: [low, high]
      node_range: [low, high]
      step_size: addN, log2

I need some clarity on running it as a benchmark. Help from the community would be highly appreciated.
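
For reference, I am creating the CR and then watching the benchmark and pods like this (uperf-cr.yaml is simply the file the CR above is saved in):

$ oc create -f uperf-cr.yaml
$ oc get benchmark -n my-ripsaw -w
$ oc get pods -n my-ripsaw -w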

@jtaleric
Member

Hi,

I am trying to run uperf as a benchmark using the following two CRs. The first one creates the server pod only and gets stuck creating the client pod, whereas the second fails to run the benchmark at all.

Can you please provide the log / description of the client pod to help us diagnose the CR1 issue.

CR 1

apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: uperf
  namespace: my-ripsaw
spec:
  workload:
    name: uperf
    args:
      hostnetwork: false
      serviceip: false
      pin: true
      pin_server: "r192bc1.oss.labs"
      pin_client: "r192bc1.oss.labs"
      samples: 1
      pair: 1
      test_types:
        - rr
      protos:
        - tcp
      sizes:
        - 1024
      nthrs:
        - 1
      runtime: 60

CR2

apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: uperf-benchmark
  namespace: my-ripsaw
spec:
  workload:
    name: uperf
    args:
      client_resources:
        requests:
          cpu: 500m
          memory: 500Mi
        limits:
          cpu: 500m
          memory: 500Mi
      server_resources:
        requests:
          cpu: 500m
          memory: 500Mi
        limits:
          cpu: 500m
          memory: 500Mi
      serviceip: false
      runtime_class: false
      hostnetwork: false
      networkpolicy: false
      pin: true
      kind: pod
      pin_server: "r192bc1.oss.labs"
      pin_client: "r192bc1.oss.labs"
      pair: 1
      multus:
        enabled: false
      samples: 1
      test_types:
        - stream
      protos:
        - tcp
      sizes:
        - 16384
      nthrs:
        - 1
      runtime: 30
      colocate: false
      density_range: [low, high]
      node_range: [low, high]
      step_size: addN, log2

I need some clarity on running it as a benchmark. Help from the community would be highly appreciated.

CR2: what are you attempting to do here?
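
If the client pod shows up at all, something along these lines should capture what we need (generic oc commands; substitute the actual client pod name for the placeholder):

$ oc -n my-ripsaw get pods
$ oc -n my-ripsaw describe pod <uperf-client-pod>
$ oc -n my-ripsaw get events --sort-by=.lastTimestamp | tail -20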

@MuhammadMunir12
Author

@jtaleric I am using these CRs as given in the benchmark operator's workload docs. The main difference between the two is the resource definition for the server and client pods. CR2 is the actual CR given in the docs at https://github.com/cloud-bulldozer/benchmark-operator/blob/master/docs/uperf.md.

For CR1, the server pod status is Running, but the client pod never shows up in "oc get pods".
The output of "oc get benchmark" shows the state "Starting Clients", and it stays stuck there.

For CR2, the uperf benchmark fails to start any pod (server or client).

Ideally, CR2 should work as given in the link above.
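
As a sanity check on the pinning (assuming the operator pins via the kubernetes.io/hostname node label), I can verify that the pinned name matches a node exactly:

$ oc get nodes -L kubernetes.io/hostname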

@jtaleric
Member

Well, you have pin set to true, so effectively CR1 and CR2 are the same, minus the message size?

Can you please provide more detailed output from the operator to help diagnose this? If the client never came up, that tells me there is a configuration issue.

@MuhammadMunir12
Author

MuhammadMunir12 commented Aug 20, 2021

@jtaleric The output stats are given below. These were obtained with the CR1 file.

$ oc get po
NAME                                             READY   STATUS    RESTARTS   AGE
benchmark-operator-74ddd858c-rprkm               2/2     Running   0          6m1s
uperf-server-r192bc1.oss.labs-0-8712d0a5-2wpc6   1/1     Running   0          43s

Benchmark status

 $ oc get benchmark

NAME    TYPE    STATE              METADATA STATE   CERBERUS        SYSTEM METRICS   UUID                                   AGE
uperf   uperf   Starting Clients   not collected    not connected   Not collected    8712d0a5-db8a-56d9-941f-2199f54ae586   86s

Operator logs

$ oc describe benchmarks.ripsaw.cloudbulldozer.io
Name:         uperf
Namespace:    my-ripsaw
Labels:       <none>
Annotations:  <none>
API Version:  ripsaw.cloudbulldozer.io/v1alpha1
Kind:         Benchmark
Metadata:
  Creation Timestamp:  2021-08-20T07:47:44Z
  Generation:          1
  Managed Fields:
    API Version:  ripsaw.cloudbulldozer.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:metadata:
          .:
          f:collection:
          f:force:
          f:image:
          f:privileged:
          f:serviceaccount:
          f:ssl:
          f:stockpileSkipTags:
          f:stockpileTags:
          f:targeted:
        f:workload:
          .:
          f:args:
          f:name:
    Manager:      kubectl-create
    Operation:    Update
    Time:         2021-08-20T07:47:44Z
    API Version:  ripsaw.cloudbulldozer.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:message:
        f:system_metrics:
    Manager:      ansible-operator
    Operation:    Update
    Time:         2021-08-20T07:47:45Z
    API Version:  ripsaw.cloudbulldozer.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        f:cerberus:
        f:complete:
        f:metadata:
        f:node_hi_idx:
        f:node_idx:
        f:node_low_idx:
        f:pod_hi_idx:
        f:pod_idx:
        f:pod_low_idx:
        f:state:
        f:suuid:
        f:uuid:
    Manager:         OpenAPI-Generator
    Operation:       Update
    Time:            2021-08-20T07:48:16Z
  Resource Version:  14310450
  Self Link:         /apis/ripsaw.cloudbulldozer.io/v1alpha1/namespaces/my-ripsaw/benchmarks/uperf
  UID:               94918dd5-5594-4a9e-b69c-3ebdd5e60df7
Spec:
  Metadata:
    Collection:      false
    Force:           false
    Image:           quay.io/cloud-bulldozer/backpack:latest
    Privileged:      false
    Serviceaccount:  default
    Ssl:             false
    Stockpile Skip Tags:
    Stockpile Tags:
      common
      k8s
      openshift
    Targeted:  true
  Workload:
    Args:
      Hostnetwork:  false
      Nthrs:
        1
      Pair:        1
      Pin:         true
      pin_client:  r192bc1.oss.labs
      pin_server:  r192bc1.oss.labs
      Protos:
        tcp
      Runtime:    60
      Samples:    1
      Serviceip:  false
      Sizes:
        1024
      test_types:
        rr
    Name:  uperf
Status:
  Cerberus:        not connected
  Complete:        false
  Message:         None
  Metadata:        not collected
  node_hi_idx:     0
  node_idx:        0
  node_low_idx:    0
  pod_hi_idx:      0
  pod_idx:         0
  pod_low_idx:     0
  State:           Starting Clients
  Suuid:           8712d0a5
  system_metrics:  Not collected
  Uuid:            8712d0a5-db8a-56d9-941f-2199f54ae586
Events:            <none>

Kindly let me know if you need more details, and I'll grab them for you.

@jtaleric
Member

jtaleric commented Aug 20, 2021

Describing isn't the log. Please capture the ansible log.
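
Container names vary across benchmark-operator versions, so list the containers in the operator pod first, then pull the log from the one running ansible, e.g.:

$ oc -n my-ripsaw get pod benchmark-operator-74ddd858c-rprkm -o jsonpath='{.spec.containers[*].name}'
$ oc -n my-ripsaw logs benchmark-operator-74ddd858c-rprkm -c <container-name> --tail=500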

@HughNhan
Collaborator

HughNhan commented Aug 23, 2021

For CR2, you need a valid "runtime_class". If you don't have one, just leave this variable out of CR2; otherwise the pod will not start.

For CR1, can you capture "oc logs [your-operator-pod] -c manager"?
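
To see whether the cluster has any RuntimeClass to point at (standard oc/kubectl, not specific to this operator):

$ oc get runtimeclass

If that list is empty, drop runtime_class from the CR entirely.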

@MuhammadMunir12
Author

@HughNhan & @jtaleric
I've removed the runtime_class from CR2, and now CR2 and CR1 behave the same: the server pod comes up in a Running state, whereas the client pods never get scheduled. The benchmark output shows the state "Starting Clients".

Benchmark status

$ oc get benchmark
NAME              TYPE    STATE              METADATA STATE   CERBERUS        SYSTEM METRICS   UUID                                   AGE
uperf-benchmark   uperf   Starting Clients   not collected    not connected   Not collected    400e4278-0d1d-5840-8486-a8cbd2ba5ee4   8m31s

Pods status

$ oc get po -o wide 
NAME                                             READY   STATUS    RESTARTS   AGE     IP             NODE               NOMINATED NODE   READINESS GATES
benchmark-operator-74ddd858c-kvsx2               2/2     Running   0          163m    10.129.3.25    r192bmw.oss.labs   <none>           <none>
uperf-server-r192bc1.oss.labs-0-400e4278-csm7r   1/1     Running   0          8m58s   10.128.3.231   r192bc1.oss.labs   <none>           <none>

Sriov Operator pods list

 $ oc get pods -n openshift-sriov-network-operator
NAME                                      READY   STATUS    RESTARTS   AGE
network-resources-injector-fcknd          1/1     Running   0          12d
network-resources-injector-sfm9r          1/1     Running   0          12d
network-resources-injector-vfczv          1/1     Running   0          12d
operator-webhook-bzv2f                    1/1     Running   0          12d
operator-webhook-hdkqb                    1/1     Running   0          12d
operator-webhook-m47vm                    1/1     Running   0          12d
sriov-cni-5hgd4                           2/2     Running   0          159m
sriov-cni-5xslb                           2/2     Running   0          159m
sriov-device-plugin-mr5vg                 1/1     Running   0          158m
sriov-device-plugin-nwmqs                 1/1     Running   0          158m
sriov-network-config-daemon-4mm5l         1/1     Running   0          12d
sriov-network-config-daemon-6lf6c         1/1     Running   0          12d
sriov-network-config-daemon-bq5lh         1/1     Running   0          12d
sriov-network-operator-6bf9ccff5c-tcn5h   1/1     Running   0          12d

Gathering logs from the operator's pod (sriov-network-operator-6bf9ccff5c-tcn5h).

$ oc logs sriov-network-operator-6bf9ccff5c-tcn5h -n openshift-sriov-network-operator -c manager
error: container manager is not valid for pod sriov-network-operator-6bf9ccff5c-tcn5h

Without the "-c manager" argument, the logs are obtained as follows.

{"level":"info","ts":1629638038.9743352,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"webhook-service-ca"}
{"level":"info","ts":1629638038.974446,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca"}
{"level":"info","ts":1629638038.9744732,"logger":"controller_caconfig","msg":"Couldn't find","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca","validate webhook config:":"network-resources-injector-config"}
{"level":"info","ts":1629638098.974732,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"webhook-service-ca"}
{"level":"info","ts":1629638098.9749017,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca"}
{"level":"info","ts":1629638098.974926,"logger":"controller_caconfig","msg":"Couldn't find","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca","validate webhook config:":"network-resources-injector-config"}
{"level":"info","ts":1629638158.975152,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"webhook-service-ca"}
{"level":"info","ts":1629638158.9752526,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca"}
{"level":"info","ts":1629638158.9752827,"logger":"controller_caconfig","msg":"Couldn't find","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca","validate webhook config:":"network-resources-injector-config"}
{"level":"info","ts":1629638218.975333,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"webhook-service-ca"}
{"level":"info","ts":1629638218.975432,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca"}
{"level":"info","ts":1629638218.9754553,"logger":"controller_caconfig","msg":"Couldn't find","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca","validate webhook config:":"network-resources-injector-config"}
{"level":"info","ts":1629638267.6277177,"logger":"controller_sriovnetworknodepolicy","msg":"Reconciling SriovNetworkNodePolicy","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"default"}
{"level":"info","ts":1629638267.6278934,"logger":"controller_sriovnetworknodepolicy.syncDevicePluginConfigMap","msg":"Start to sync device plugin ConfigMap"}
{"level":"info","ts":1629638267.6279035,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629638267.627908,"logger":"sriovnetwork","msg":"Selected():","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629638267.627924,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Add resource","Resource":{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null},"Resource list":[{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null}]}
{"level":"info","ts":1629638267.6279492,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629638267.6279528,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629638267.627955,"logger":"sriovnetwork","msg":"Selected():","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629638267.6279585,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Add resource","Resource":{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null},"Resource list":[{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null}]}
{"level":"info","ts":1629638267.627971,"logger":"controller_sriovnetworknodepolicy.syncDevicePluginConfigMap","msg":"ConfigMap already exists, updating"}
{"level":"info","ts":1629638267.631596,"logger":"controller_sriovnetworknodepolicy.syncPluginDaemonObjs","msg":"Start to sync sriov daemons objects"}
{"level":"info","ts":1629638267.6316183,"logger":"controller_sriovnetworknodepolicy.renderDsForCR","msg":"Start to render objects"}
{"level":"info","ts":1629638267.632879,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"ServiceAccount"}
2021/08/22 13:17:47 reconciling (/v1, Kind=ServiceAccount) openshift-sriov-network-operator/sriov-cni
{"level":"info","ts":1629638267.6343808,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"ServiceAccount"}
2021/08/22 13:17:47 reconciling (/v1, Kind=ServiceAccount) openshift-sriov-network-operator/sriov-device-plugin
{"level":"info","ts":1629638267.636125,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"Role"}
2021/08/22 13:17:47 reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-sriov-network-operator/sriov-plugin
{"level":"info","ts":1629638267.6380086,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"RoleBinding"}
2021/08/22 13:17:47 reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-sriov-network-operator/sriov-cni
{"level":"info","ts":1629638267.6393545,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"RoleBinding"}
2021/08/22 13:17:47 reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-sriov-network-operator/sriov-device-plugin
{"level":"info","ts":1629638267.6406083,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"DaemonSet"}
{"level":"info","ts":1629638267.6409695,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"Start to sync DaemonSet","Namespace":"openshift-sriov-network-operator","Name":"sriov-cni"}
{"level":"info","ts":1629638267.640996,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"DaemonSet already exists, updating"}
{"level":"info","ts":1629638267.644984,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"DaemonSet"}
{"level":"info","ts":1629638267.6453686,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"Start to sync DaemonSet","Namespace":"openshift-sriov-network-operator","Name":"sriov-device-plugin"}
{"level":"info","ts":1629638267.6453967,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"DaemonSet already exists, updating"}
{"level":"info","ts":1629638267.649014,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Start to sync all SriovNetworkNodeState custom resource"}
{"level":"info","ts":1629638267.6490457,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bmw.oss.labs"}
{"level":"info","ts":1629638267.6490588,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJtdy5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629638267.6490662,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bmw.oss.labs","cksum":"14308489"}
{"level":"info","ts":1629638267.6490788,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629638267.6490839,"logger":"sriovnetwork","msg":"Selected():","node":"r192bmw.oss.labs"}
apply policy intel-dpdk-node-policy-for-testpmd for node r192bmw.oss.labs
{"level":"info","ts":1629638267.649091,"logger":"sriovnetwork","msg":"Update interface","name:":"ens3f1"}
{"level":"info","ts":1629638267.6538987,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bc1.oss.labs"}
{"level":"info","ts":1629638267.6539178,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJjMS5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629638267.653923,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bc1.oss.labs","cksum":"14308489"}
{"level":"info","ts":1629638267.6539338,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629638267.658228,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bc2.oss.labs"}
{"level":"info","ts":1629638267.6582558,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJjMi5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629638267.6582627,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bc2.oss.labs","cksum":"14308489"}
{"level":"info","ts":1629638267.6582747,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629638267.658278,"logger":"sriovnetwork","msg":"Selected():","node":"r192bc2.oss.labs"}
apply policy intel-dpdk-node-policy-for-testpmd for node r192bc2.oss.labs
{"level":"info","ts":1629638267.6582859,"logger":"sriovnetwork","msg":"Update interface","name:":"ens3f1"}
{"level":"info","ts":1629638267.6629725,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Remove SriovNetworkNodeState custom resource for unselected node"}
{"level":"info","ts":1629638267.6630354,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc1.oss.labs","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629638267.6630418,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc1.oss.labs","node":"r192bc1.oss.labs"}
{"level":"info","ts":1629638267.6630447,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc2.oss.labs","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629638267.6630468,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc2.oss.labs","node":"r192bc1.oss.labs"}
{"level":"info","ts":1629638267.6630492,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc2.oss.labs","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629638267.6630516,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bmw.oss.labs","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629638267.6630716,"logger":"controller_sriovnetworknodepolicy","msg":"Reconciling SriovNetworkNodePolicy","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"default"}
{"level":"info","ts":1629638267.6631525,"logger":"controller_sriovnetworknodepolicy.syncDevicePluginConfigMap","msg":"Start to sync device plugin ConfigMap"}
{"level":"info","ts":1629638267.6631575,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629638267.663162,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629638267.6631646,"logger":"sriovnetwork","msg":"Selected():","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629638267.6631706,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Add resource","Resource":{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null},"Resource list":[{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null}]}
{"level":"info","ts":1629638267.6631827,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629638267.6631851,"logger":"sriovnetwork","msg":"Selected():","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629638267.6631885,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Add resource","Resource":{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null},"Resource list":[{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null}]}
{"level":"info","ts":1629638267.6631975,"logger":"controller_sriovnetworknodepolicy.syncDevicePluginConfigMap","msg":"ConfigMap already exists, updating"}
{"level":"info","ts":1629638267.6653793,"logger":"controller_sriovnetworknodepolicy.syncPluginDaemonObjs","msg":"Start to sync sriov daemons objects"}
{"level":"info","ts":1629638267.6653929,"logger":"controller_sriovnetworknodepolicy.renderDsForCR","msg":"Start to render objects"}
{"level":"info","ts":1629638267.6664975,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"ServiceAccount"}
2021/08/22 13:17:47 reconciling (/v1, Kind=ServiceAccount) openshift-sriov-network-operator/sriov-cni
{"level":"info","ts":1629638267.6681025,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"ServiceAccount"}
2021/08/22 13:17:47 reconciling (/v1, Kind=ServiceAccount) openshift-sriov-network-operator/sriov-device-plugin
{"level":"info","ts":1629638267.6699662,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"Role"}
2021/08/22 13:17:47 reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-sriov-network-operator/sriov-plugin
{"level":"info","ts":1629638267.6714628,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"RoleBinding"}
2021/08/22 13:17:47 reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-sriov-network-operator/sriov-cni
{"level":"info","ts":1629638267.6727664,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"RoleBinding"}
2021/08/22 13:17:47 reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-sriov-network-operator/sriov-device-plugin
{"level":"info","ts":1629638267.6742744,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"DaemonSet"}
{"level":"info","ts":1629638267.6745884,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"Start to sync DaemonSet","Namespace":"openshift-sriov-network-operator","Name":"sriov-cni"}
{"level":"info","ts":1629638267.6746097,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"DaemonSet already exists, updating"}
{"level":"info","ts":1629638267.677522,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"DaemonSet"}
{"level":"info","ts":1629638267.6778898,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"Start to sync DaemonSet","Namespace":"openshift-sriov-network-operator","Name":"sriov-device-plugin"}
{"level":"info","ts":1629638267.677911,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"DaemonSet already exists, updating"}
{"level":"info","ts":1629638267.6807034,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Start to sync all SriovNetworkNodeState custom resource"}
{"level":"info","ts":1629638267.6807172,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bc1.oss.labs"}
{"level":"info","ts":1629638267.6807246,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJjMS5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629638267.680729,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bc1.oss.labs","cksum":"14308489"}
{"level":"info","ts":1629638267.6807365,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
apply policy intel-dpdk-node-policy-for-testpmd for node r192bc2.oss.labs
{"level":"info","ts":1629638267.6848483,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bc2.oss.labs"}
{"level":"info","ts":1629638267.6848779,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJjMi5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629638267.684883,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bc2.oss.labs","cksum":"14308489"}
{"level":"info","ts":1629638267.6848936,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629638267.684897,"logger":"sriovnetwork","msg":"Selected():","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629638267.684902,"logger":"sriovnetwork","msg":"Update interface","name:":"ens3f1"}
apply policy intel-dpdk-node-policy-for-testpmd for node r192bmw.oss.labs
{"level":"info","ts":1629638267.6901236,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bmw.oss.labs"}
{"level":"info","ts":1629638267.6901412,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJtdy5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629638267.690147,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bmw.oss.labs","cksum":"14308489"}
{"level":"info","ts":1629638267.6901574,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629638267.6901608,"logger":"sriovnetwork","msg":"Selected():","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629638267.690166,"logger":"sriovnetwork","msg":"Update interface","name:":"ens3f1"}
{"level":"info","ts":1629638267.6948147,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Remove SriovNetworkNodeState custom resource for unselected node"}
{"level":"info","ts":1629638267.6948452,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc1.oss.labs","node":"r192bc1.oss.labs"}
{"level":"info","ts":1629638267.6948497,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc2.oss.labs","node":"r192bc1.oss.labs"}
{"level":"info","ts":1629638267.694852,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc2.oss.labs","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629638267.6948545,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bmw.oss.labs","node":"r192bc1.oss.labs"}
{"level":"info","ts":1629638267.6948566,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bmw.oss.labs","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629638267.6948688,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bmw.oss.labs","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629638278.9757576,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"webhook-service-ca"}
{"level":"info","ts":1629638278.9759607,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca"}
{"level":"info","ts":1629638278.9760275,"logger":"controller_caconfig","msg":"Couldn't find","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca","validate webhook config:":"network-resources-injector-config"}

I had to exclude part of the logs because they were too lengthy to add here. The last part of the logs is:

{"level":"info","ts":1629798950.88408,"logger":"controller_sriovnetworknodepolicy","msg":"Reconciling SriovNetworkNodePolicy","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"default"}
{"level":"info","ts":1629798950.8842223,"logger":"controller_sriovnetworknodepolicy.syncDevicePluginConfigMap","msg":"Start to sync device plugin ConfigMap"}
{"level":"info","ts":1629798950.8842301,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629798950.8842344,"logger":"sriovnetwork","msg":"Selected():","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629798950.884249,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Add resource","Resource":{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null},"Resource list":[{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null}]}
{"level":"info","ts":1629798950.8842728,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629798950.8842762,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629798950.8842785,"logger":"sriovnetwork","msg":"Selected():","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629798950.8842828,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Add resource","Resource":{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null},"Resource list":[{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null}]}
{"level":"info","ts":1629798950.8842974,"logger":"controller_sriovnetworknodepolicy.syncDevicePluginConfigMap","msg":"ConfigMap already exists, updating"}
{"level":"info","ts":1629798950.88806,"logger":"controller_sriovnetworknodepolicy.syncPluginDaemonObjs","msg":"Start to sync sriov daemons objects"}
{"level":"info","ts":1629798950.8880808,"logger":"controller_sriovnetworknodepolicy.renderDsForCR","msg":"Start to render objects"}
{"level":"info","ts":1629798950.8892846,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"ServiceAccount"}
2021/08/24 09:55:50 reconciling (/v1, Kind=ServiceAccount) openshift-sriov-network-operator/sriov-cni
{"level":"info","ts":1629798950.891754,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"ServiceAccount"}
2021/08/24 09:55:50 reconciling (/v1, Kind=ServiceAccount) openshift-sriov-network-operator/sriov-device-plugin
{"level":"info","ts":1629798950.8936634,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"Role"}
2021/08/24 09:55:50 reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-sriov-network-operator/sriov-plugin
{"level":"info","ts":1629798950.8952668,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"RoleBinding"}
2021/08/24 09:55:50 reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-sriov-network-operator/sriov-cni
{"level":"info","ts":1629798950.8974936,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"RoleBinding"}
2021/08/24 09:55:50 reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-sriov-network-operator/sriov-device-plugin
{"level":"info","ts":1629798950.8993843,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"DaemonSet"}
{"level":"info","ts":1629798950.8997533,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"Start to sync DaemonSet","Namespace":"openshift-sriov-network-operator","Name":"sriov-cni"}
{"level":"info","ts":1629798950.899778,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"DaemonSet already exists, updating"}
{"level":"info","ts":1629798950.903544,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"DaemonSet"}
{"level":"info","ts":1629798950.9039264,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"Start to sync DaemonSet","Namespace":"openshift-sriov-network-operator","Name":"sriov-device-plugin"}
{"level":"info","ts":1629798950.9039493,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"DaemonSet already exists, updating"}
{"level":"info","ts":1629798950.9074335,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Start to sync all SriovNetworkNodeState custom resource"}
{"level":"info","ts":1629798950.9074469,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bmw.oss.labs"}
{"level":"info","ts":1629798950.9074602,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJtdy5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629798950.9074671,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bmw.oss.labs","cksum":"16317396"}
{"level":"info","ts":1629798950.907478,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629798950.9074829,"logger":"sriovnetwork","msg":"Selected():","node":"r192bmw.oss.labs"}
apply policy intel-dpdk-node-policy-for-testpmd for node r192bmw.oss.labs
{"level":"info","ts":1629798950.9074898,"logger":"sriovnetwork","msg":"Update interface","name:":"ens3f1"}
{"level":"info","ts":1629798950.9126375,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bc1.oss.labs"}
{"level":"info","ts":1629798950.9126544,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJjMS5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629798950.9126627,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bc1.oss.labs","cksum":"16317396"}
{"level":"info","ts":1629798950.9126732,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629798950.916687,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bc2.oss.labs"}
{"level":"info","ts":1629798950.9167097,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJjMi5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629798950.916716,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bc2.oss.labs","cksum":"16317396"}
{"level":"info","ts":1629798950.9167275,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629798950.916732,"logger":"sriovnetwork","msg":"Selected():","node":"r192bc2.oss.labs"}
apply policy intel-dpdk-node-policy-for-testpmd for node r192bc2.oss.labs
{"level":"info","ts":1629798950.916741,"logger":"sriovnetwork","msg":"Update interface","name:":"ens3f1"}
{"level":"info","ts":1629798950.921178,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Remove SriovNetworkNodeState custom resource for unselected node"}
{"level":"info","ts":1629798950.9212036,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc1.oss.labs","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629798950.921209,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc1.oss.labs","node":"r192bc1.oss.labs"}
{"level":"info","ts":1629798950.9212117,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc2.oss.labs","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629798950.921214,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc2.oss.labs","node":"r192bc1.oss.labs"}
{"level":"info","ts":1629798950.9212167,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc2.oss.labs","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629798950.9212193,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bmw.oss.labs","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629798950.9212363,"logger":"controller_sriovnetworknodepolicy","msg":"Reconciling SriovNetworkNodePolicy","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"default"}
{"level":"info","ts":1629798950.9213061,"logger":"controller_sriovnetworknodepolicy.syncDevicePluginConfigMap","msg":"Start to sync device plugin ConfigMap"}
{"level":"info","ts":1629798950.9213104,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629798950.921315,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629798950.921318,"logger":"sriovnetwork","msg":"Selected():","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629798950.9213235,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Add resource","Resource":{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null},"Resource list":[{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null}]}
{"level":"info","ts":1629798950.9213362,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629798950.9213383,"logger":"sriovnetwork","msg":"Selected():","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629798950.9213417,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Add resource","Resource":{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null},"Resource list":[{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null}]}
{"level":"info","ts":1629798950.92135,"logger":"controller_sriovnetworknodepolicy.syncDevicePluginConfigMap","msg":"ConfigMap already exists, updating"}
{"level":"info","ts":1629798950.9235034,"logger":"controller_sriovnetworknodepolicy.syncPluginDaemonObjs","msg":"Start to sync sriov daemons objects"}
{"level":"info","ts":1629798950.923521,"logger":"controller_sriovnetworknodepolicy.renderDsForCR","msg":"Start to render objects"}
{"level":"info","ts":1629798950.9247584,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"ServiceAccount"}
2021/08/24 09:55:50 reconciling (/v1, Kind=ServiceAccount) openshift-sriov-network-operator/sriov-cni
{"level":"info","ts":1629798950.926482,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"ServiceAccount"}
2021/08/24 09:55:50 reconciling (/v1, Kind=ServiceAccount) openshift-sriov-network-operator/sriov-device-plugin
{"level":"info","ts":1629798950.9279516,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"Role"}
2021/08/24 09:55:50 reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-sriov-network-operator/sriov-plugin
{"level":"info","ts":1629798950.9292052,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"RoleBinding"}
2021/08/24 09:55:50 reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-sriov-network-operator/sriov-cni
{"level":"info","ts":1629798950.930505,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"RoleBinding"}
2021/08/24 09:55:50 reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-sriov-network-operator/sriov-device-plugin
{"level":"info","ts":1629798950.9318504,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"DaemonSet"}
{"level":"info","ts":1629798950.9321682,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"Start to sync DaemonSet","Namespace":"openshift-sriov-network-operator","Name":"sriov-cni"}
{"level":"info","ts":1629798950.9321914,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"DaemonSet already exists, updating"}
{"level":"info","ts":1629798950.9348562,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"DaemonSet"}
{"level":"info","ts":1629798950.935224,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"Start to sync DaemonSet","Namespace":"openshift-sriov-network-operator","Name":"sriov-device-plugin"}
{"level":"info","ts":1629798950.9352474,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"DaemonSet already exists, updating"}
{"level":"info","ts":1629798950.9379547,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Start to sync all SriovNetworkNodeState custom resource"}
{"level":"info","ts":1629798950.9379709,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bc1.oss.labs"}
{"level":"info","ts":1629798950.9379797,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJjMS5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629798950.9379854,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bc1.oss.labs","cksum":"16317396"}
{"level":"info","ts":1629798950.9379952,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629798950.941772,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bc2.oss.labs"}
{"level":"info","ts":1629798950.9417868,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJjMi5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629798950.9417922,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bc2.oss.labs","cksum":"16317396"}
{"level":"info","ts":1629798950.9418023,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629798950.9418085,"logger":"sriovnetwork","msg":"Selected():","node":"r192bc2.oss.labs"}
apply policy intel-dpdk-node-policy-for-testpmd for node r192bc2.oss.labs
{"level":"info","ts":1629798950.9418235,"logger":"sriovnetwork","msg":"Update interface","name:":"ens3f1"}
{"level":"info","ts":1629798950.9470184,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bmw.oss.labs"}
{"level":"info","ts":1629798950.9470346,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJtdy5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629798950.9470391,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bmw.oss.labs","cksum":"16317396"}
apply policy intel-dpdk-node-policy-for-testpmd for node r192bmw.oss.labs
{"level":"info","ts":1629798950.9470484,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629798950.9470518,"logger":"sriovnetwork","msg":"Selected():","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629798950.9470565,"logger":"sriovnetwork","msg":"Update interface","name:":"ens3f1"}
{"level":"info","ts":1629798950.9514735,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Remove SriovNetworkNodeState custom resource for unselected node"}
{"level":"info","ts":1629798950.9514992,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc2.oss.labs","node":"r192bc1.oss.labs"}
{"level":"info","ts":1629798950.9515038,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc2.oss.labs","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629798950.9515064,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bmw.oss.labs","node":"r192bc1.oss.labs"}
{"level":"info","ts":1629798950.9515088,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bmw.oss.labs","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629798950.9515111,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bmw.oss.labs","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629798950.9515135,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc1.oss.labs","node":"r192bc1.oss.labs"}
{"level":"info","ts":1629798959.908304,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"webhook-service-ca"}
{"level":"info","ts":1629798959.9083822,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca"}
{"level":"info","ts":1629798959.9084094,"logger":"controller_caconfig","msg":"Couldn't find","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca","validate webhook config:":"network-resources-injector-config"}
{"level":"info","ts":1629799019.9085603,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"webhook-service-ca"}
{"level":"info","ts":1629799019.9086528,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca"}
{"level":"info","ts":1629799019.908686,"logger":"controller_caconfig","msg":"Couldn't find","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca","validate webhook config:":"network-resources-injector-config"}
{"level":"info","ts":1629799079.9089248,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"webhook-service-ca"}
{"level":"info","ts":1629799079.9090354,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca"}
{"level":"info","ts":1629799079.909063,"logger":"controller_caconfig","msg":"Couldn't find","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca","validate webhook config:":"network-resources-injector-config"}
{"level":"info","ts":1629799116.924967,"logger":"controller_sriovoperatorconfig","msg":"Reconciling SriovOperatorConfig","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"default"}
{"level":"info","ts":1629799116.925008,"logger":"controller_sriovoperatorconfig.syncWebhookObjs","msg":"Start to sync webhook objects"}
{"level":"info","ts":1629799116.9263985,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
2021/08/24 09:58:36 reconciling (/v1, Kind=Service) openshift-sriov-network-operator/network-resources-injector-service
{"level":"info","ts":1629799116.92932,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
2021/08/24 09:58:36 reconciling (/v1, Kind=ServiceAccount) openshift-sriov-network-operator/network-resources-injector-sa
{"level":"info","ts":1629799116.9313095,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
2021/08/24 09:58:36 reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /network-resources-injector
{"level":"info","ts":1629799116.9331026,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
2021/08/24 09:58:36 reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /network-resources-injector-role-binding
{"level":"info","ts":1629799116.9343505,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
{"level":"info","ts":1629799116.9344573,"logger":"controller_sriovoperatorconfig.syncMutatingWebhook","msg":"Start to sync mutating webhook","Name":"network-resources-injector-config","Namespace":"openshift-sriov-network-operator"}
{"level":"info","ts":1629799116.9344742,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
{"level":"info","ts":1629799116.93451,"logger":"controller_sriovoperatorconfig.syncWebhookConfigMap","msg":"Start to sync config map","Name":"injector-service-ca","Namespace":"openshift-sriov-network-operator"}
{"level":"info","ts":1629799116.9345217,"logger":"controller_sriovoperatorconfig.syncWebhookConfigMap","msg":"Webhook ConfigMap already exists, updating"}
{"level":"info","ts":1629799116.9369898,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
2021/08/24 09:58:36 reconciling (apps/v1, Kind=DaemonSet) openshift-sriov-network-operator/network-resources-injector
{"level":"info","ts":1629799116.9400923,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
2021/08/24 09:58:36 reconciling (/v1, Kind=Service) openshift-sriov-network-operator/operator-webhook-service
{"level":"info","ts":1629799116.9414656,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
2021/08/24 09:58:36 reconciling (/v1, Kind=ServiceAccount) openshift-sriov-network-operator/operator-webhook-sa
{"level":"info","ts":1629799116.9429166,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
2021/08/24 09:58:36 reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRole) /operator-webhook
{"level":"info","ts":1629799116.9441836,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
2021/08/24 09:58:36 reconciling (rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding) /operator-webhook-role-binding
{"level":"info","ts":1629799116.9457011,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
{"level":"info","ts":1629799116.9458077,"logger":"controller_sriovoperatorconfig.syncMutatingWebhook","msg":"Start to sync mutating webhook","Name":"operator-webhook-config","Namespace":"openshift-sriov-network-operator"}
{"level":"info","ts":1629799116.945823,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
{"level":"info","ts":1629799116.9458978,"logger":"controller_sriovoperatorconfig.syncValidatingWebhook","msg":"Start to sync validating webhook","Name":"operator-webhook-config","Namespace":"openshift-sriov-network-operator"}
{"level":"info","ts":1629799116.945911,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
{"level":"info","ts":1629799116.9459455,"logger":"controller_sriovoperatorconfig.syncWebhookConfigMap","msg":"Start to sync config map","Name":"webhook-service-ca","Namespace":"openshift-sriov-network-operator"}
{"level":"info","ts":1629799116.9459553,"logger":"controller_sriovoperatorconfig.syncWebhookConfigMap","msg":"Webhook ConfigMap already exists, updating"}
{"level":"info","ts":1629799116.9487147,"logger":"controller_sriovoperatorconfig.syncWebhookObject","msg":"Start to sync Objects"}
2021/08/24 09:58:36 reconciling (apps/v1, Kind=DaemonSet) openshift-sriov-network-operator/operator-webhook
{"level":"info","ts":1629799116.951043,"logger":"controller_sriovoperatorconfig.syncConfigDaemonset","msg":"Start to sync config daemonset"}
2021/08/24 09:58:36 reconciling (apps/v1, Kind=DaemonSet) openshift-sriov-network-operator/sriov-network-config-daemon
{"level":"info","ts":1629799139.9091551,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"webhook-service-ca"}
{"level":"info","ts":1629799139.909252,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca"}
{"level":"info","ts":1629799139.909275,"logger":"controller_caconfig","msg":"Couldn't find","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca","validate webhook config:":"network-resources-injector-config"}
{"level":"info","ts":1629799145.4402616,"logger":"controller_sriovnetworknodepolicy","msg":"Reconciling SriovNetworkNodePolicy","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"intel-dpdk-node-policy-for-testpmd"}
{"level":"info","ts":1629799145.4404497,"logger":"controller_sriovnetworknodepolicy.syncDevicePluginConfigMap","msg":"Start to sync device plugin ConfigMap"}
{"level":"info","ts":1629799145.4404583,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629799145.440462,"logger":"sriovnetwork","msg":"Selected():","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629799145.440477,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Add resource","Resource":{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null},"Resource list":[{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null}]}
{"level":"info","ts":1629799145.440503,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629799145.4405055,"logger":"sriovnetwork","msg":"Selected():","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629799145.440509,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Add resource","Resource":{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null},"Resource list":[{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null}]}
{"level":"info","ts":1629799145.4405162,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629799145.440526,"logger":"controller_sriovnetworknodepolicy.syncDevicePluginConfigMap","msg":"ConfigMap already exists, updating"}
{"level":"info","ts":1629799145.443519,"logger":"controller_sriovnetworknodepolicy.syncPluginDaemonObjs","msg":"Start to sync sriov daemons objects"}
{"level":"info","ts":1629799145.4435358,"logger":"controller_sriovnetworknodepolicy.renderDsForCR","msg":"Start to render objects"}
{"level":"info","ts":1629799145.4447653,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"ServiceAccount"}
2021/08/24 09:59:05 reconciling (/v1, Kind=ServiceAccount) openshift-sriov-network-operator/sriov-cni
{"level":"info","ts":1629799145.4468398,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"ServiceAccount"}
2021/08/24 09:59:05 reconciling (/v1, Kind=ServiceAccount) openshift-sriov-network-operator/sriov-device-plugin
{"level":"info","ts":1629799145.4485657,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"Role"}
2021/08/24 09:59:05 reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-sriov-network-operator/sriov-plugin
{"level":"info","ts":1629799145.450834,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"RoleBinding"}
2021/08/24 09:59:05 reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-sriov-network-operator/sriov-cni
{"level":"info","ts":1629799145.4527006,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"RoleBinding"}
2021/08/24 09:59:05 reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-sriov-network-operator/sriov-device-plugin
{"level":"info","ts":1629799145.4546866,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"DaemonSet"}
{"level":"info","ts":1629799145.4550476,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"Start to sync DaemonSet","Namespace":"openshift-sriov-network-operator","Name":"sriov-cni"}
{"level":"info","ts":1629799145.4550774,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"DaemonSet already exists, updating"}
{"level":"info","ts":1629799145.4586332,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"DaemonSet"}
{"level":"info","ts":1629799145.4590194,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"Start to sync DaemonSet","Namespace":"openshift-sriov-network-operator","Name":"sriov-device-plugin"}
{"level":"info","ts":1629799145.459044,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"DaemonSet already exists, updating"}
{"level":"info","ts":1629799145.4622056,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Start to sync all SriovNetworkNodeState custom resource"}
{"level":"info","ts":1629799145.4622204,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bc2.oss.labs"}
{"level":"info","ts":1629799145.4622319,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJjMi5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629799145.4622376,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bc2.oss.labs","cksum":"16317396"}
{"level":"info","ts":1629799145.4622486,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629799145.4622517,"logger":"sriovnetwork","msg":"Selected():","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629799145.462259,"logger":"sriovnetwork","msg":"Update interface","name:":"ens3f1"}
apply policy intel-dpdk-node-policy-for-testpmd for node r192bc2.oss.labs
apply policy intel-dpdk-node-policy-for-testpmd for node r192bmw.oss.labs
{"level":"info","ts":1629799145.469403,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bmw.oss.labs"}
{"level":"info","ts":1629799145.4694223,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJtdy5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629799145.4694283,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bmw.oss.labs","cksum":"16317396"}
{"level":"info","ts":1629799145.4694428,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629799145.4694607,"logger":"sriovnetwork","msg":"Selected():","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629799145.4694674,"logger":"sriovnetwork","msg":"Update interface","name:":"ens3f1"}
{"level":"info","ts":1629799145.474742,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bc1.oss.labs"}
{"level":"info","ts":1629799145.4747603,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJjMS5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629799145.4747658,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bc1.oss.labs","cksum":"16317396"}
{"level":"info","ts":1629799145.4747758,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629799145.4797359,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Remove SriovNetworkNodeState custom resource for unselected node"}
{"level":"info","ts":1629799145.4797735,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc1.oss.labs","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629799145.4797783,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc1.oss.labs","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629799145.4797807,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc1.oss.labs","node":"r192bc1.oss.labs"}
{"level":"info","ts":1629799145.479783,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc2.oss.labs","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629799145.4797854,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bmw.oss.labs","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629799145.4797878,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bmw.oss.labs","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629799145.4798067,"logger":"controller_sriovnetworknodepolicy","msg":"Reconciling SriovNetworkNodePolicy","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"default"}
{"level":"info","ts":1629799145.47991,"logger":"controller_sriovnetworknodepolicy.syncDevicePluginConfigMap","msg":"Start to sync device plugin ConfigMap"}
{"level":"info","ts":1629799145.4799156,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629799145.4799185,"logger":"sriovnetwork","msg":"Selected():","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629799145.4799263,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Add resource","Resource":{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null},"Resource list":[{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null}]}
{"level":"info","ts":1629799145.4799445,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629799145.4799473,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Start to render device plugin config data"}
{"level":"info","ts":1629799145.4799492,"logger":"sriovnetwork","msg":"Selected():","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629799145.4799528,"logger":"controller_sriovnetworknodepolicy.renderDevicePluginConfigData","msg":"Add resource","Resource":{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null},"Resource list":[{"resourceName":"intelnics","selectors":{"pfNames":["ens3f1"],"IsRdma":true},"SelectorObj":null}]}
{"level":"info","ts":1629799145.4799619,"logger":"controller_sriovnetworknodepolicy.syncDevicePluginConfigMap","msg":"ConfigMap already exists, updating"}
{"level":"info","ts":1629799145.4823148,"logger":"controller_sriovnetworknodepolicy.syncPluginDaemonObjs","msg":"Start to sync sriov daemons objects"}
{"level":"info","ts":1629799145.4823294,"logger":"controller_sriovnetworknodepolicy.renderDsForCR","msg":"Start to render objects"}
{"level":"info","ts":1629799145.4835346,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"ServiceAccount"}
2021/08/24 09:59:05 reconciling (/v1, Kind=ServiceAccount) openshift-sriov-network-operator/sriov-cni
{"level":"info","ts":1629799145.4857688,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"ServiceAccount"}
2021/08/24 09:59:05 reconciling (/v1, Kind=ServiceAccount) openshift-sriov-network-operator/sriov-device-plugin
{"level":"info","ts":1629799145.4878898,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"Role"}
2021/08/24 09:59:05 reconciling (rbac.authorization.k8s.io/v1, Kind=Role) openshift-sriov-network-operator/sriov-plugin
{"level":"info","ts":1629799145.4897034,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"RoleBinding"}
2021/08/24 09:59:05 reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-sriov-network-operator/sriov-cni
{"level":"info","ts":1629799145.4912925,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"RoleBinding"}
2021/08/24 09:59:05 reconciling (rbac.authorization.k8s.io/v1, Kind=RoleBinding) openshift-sriov-network-operator/sriov-device-plugin
{"level":"info","ts":1629799145.49281,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"DaemonSet"}
{"level":"info","ts":1629799145.4931638,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"Start to sync DaemonSet","Namespace":"openshift-sriov-network-operator","Name":"sriov-cni"}
{"level":"info","ts":1629799145.493189,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"DaemonSet already exists, updating"}
{"level":"info","ts":1629799145.496779,"logger":"controller_sriovnetworknodepolicy.syncDsObject","msg":"Start to sync Objects","Kind":"DaemonSet"}
{"level":"info","ts":1629799145.4971924,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"Start to sync DaemonSet","Namespace":"openshift-sriov-network-operator","Name":"sriov-device-plugin"}
{"level":"info","ts":1629799145.4972224,"logger":"controller_sriovnetworknodepolicy.syncDaemonSet","msg":"DaemonSet already exists, updating"}
apply policy intel-dpdk-node-policy-for-testpmd for node r192bmw.oss.labs
{"level":"info","ts":1629799145.5003223,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Start to sync all SriovNetworkNodeState custom resource"}
{"level":"info","ts":1629799145.50034,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bmw.oss.labs"}
{"level":"info","ts":1629799145.5003495,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJtdy5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629799145.5003538,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bmw.oss.labs","cksum":"16317396"}
{"level":"info","ts":1629799145.5003653,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629799145.5003686,"logger":"sriovnetwork","msg":"Selected():","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629799145.500375,"logger":"sriovnetwork","msg":"Update interface","name:":"ens3f1"}
{"level":"info","ts":1629799145.5049925,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bc1.oss.labs"}
{"level":"info","ts":1629799145.50501,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJjMS5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629799145.5050154,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bc1.oss.labs","cksum":"16317396"}
{"level":"info","ts":1629799145.5050266,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
apply policy intel-dpdk-node-policy-for-testpmd for node r192bc2.oss.labs
{"level":"info","ts":1629799145.5090709,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Sync SriovNetworkNodeState CR","name":"r192bc2.oss.labs"}
{"level":"info","ts":1629799145.5090876,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"SriovNetworkNodeState CR","content":"eyJtZXRhZGF0YSI6eyJuYW1lIjoicjE5MmJjMi5vc3MubGFicyIsIm5hbWVzcGFjZSI6Im9wZW5zaGlmdC1zcmlvdi1uZXR3b3JrLW9wZXJhdG9yIiwiY3JlYXRpb25UaW1lc3RhbXAiOm51bGx9LCJzcGVjIjp7fSwic3RhdHVzIjp7fX0="}
{"level":"info","ts":1629799145.5090928,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"Start to sync SriovNetworkNodeState","Name":"r192bc2.oss.labs","cksum":"16317396"}
{"level":"info","ts":1629799145.5091054,"logger":"controller_sriovnetworknodepolicy.syncSriovNetworkNodeState","msg":"SriovNetworkNodeState already exists, updating"}
{"level":"info","ts":1629799145.5091088,"logger":"sriovnetwork","msg":"Selected():","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629799145.5091143,"logger":"sriovnetwork","msg":"Update interface","name:":"ens3f1"}
{"level":"info","ts":1629799145.5137346,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"Remove SriovNetworkNodeState custom resource for unselected node"}
{"level":"info","ts":1629799145.5137675,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc1.oss.labs","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629799145.513772,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc1.oss.labs","node":"r192bc1.oss.labs"}
{"level":"info","ts":1629799145.5137746,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc2.oss.labs","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629799145.5137768,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc2.oss.labs","node":"r192bc1.oss.labs"}
{"level":"info","ts":1629799145.5137794,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bc2.oss.labs","node":"r192bc2.oss.labs"}
{"level":"info","ts":1629799145.513782,"logger":"controller_sriovnetworknodepolicy.syncAllSriovNetworkNodeStates","msg":"validate","SriovNetworkNodeState":"r192bmw.oss.labs","node":"r192bmw.oss.labs"}
{"level":"info","ts":1629799199.9094217,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"webhook-service-ca"}
{"level":"info","ts":1629799199.9095073,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca"}
{"level":"info","ts":1629799199.9095345,"logger":"controller_caconfig","msg":"Couldn't find","Request.Namespace":"openshift-sriov-network-operator","Request.Name":"injector-service-ca","validate webhook config:":"network-resources-injector-config"}

Also, @jtaleric, I couldn't understand your concern about capturing the Ansible logs. Could you please elaborate? Thanks

@HughNhan
Collaborator

@MuhammadMunir12, find your benchmark-operator pod, i.e. "benchmark-controller-manager-xxxxx-xxxx", then capture its log as I mentioned above: "oc logs [your-operator-pod] -c manager".
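A sketch of both commands, assuming the operator runs in the benchmark-operator namespace (adjust -n to wherever yours is deployed; the pod name is illustrative):

$ # list pods to find the controller pod name
$ oc get pods -n benchmark-operator
$ # stream the manager container's logs
$ oc logs benchmark-controller-manager-xxxxx-xxxx -c manager -n benchmark-operator -f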

@MuhammadMunir12
Author

@HughNhan Here's the output and it does not show any issues.

$ oc logs benchmark-controller-manager-fbc89d7d9-56kfj -c manager
{"level":"info","ts":1629374505.2774656,"logger":"cmd","msg":"Version","Go Version":"go1.16.6","GOOS":"linux","GOARCH":"amd64","ansible-operator":"v1.9.0","commit":"205e0a0c2df0715d133fbe2741db382c9c75a341"}
{"level":"info","ts":1629374505.2781386,"logger":"cmd","msg":"Environment variable OPERATOR_NAME has been deprecated, use --leader-election-id instead."}
{"level":"info","ts":1629374505.2781458,"logger":"cmd","msg":"Ignoring OPERATOR_NAME environment variable since --leader-election-id is set"}
{"level":"info","ts":1629374505.2781591,"logger":"cmd","msg":"Watching single namespace.","Namespace":"benchmark-operator"}
I0819 12:01:46.329475       7 request.go:655] Throttling request took 1.044510858s, request: GET:https://172.30.0.1:443/apis/ripsaw.cloudbulldozer.io/v1alpha1?timeout=32s
{"level":"info","ts":1629374512.0829346,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1629374512.0835712,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"benchmark-operator","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false}
{"level":"info","ts":1629374512.0835853,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"ripsaw.cloudbulldozer.io","Options.Version":"v1alpha1","Options.Kind":"Benchmark"}
{"level":"info","ts":1629374512.0847297,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"}
I0819 12:01:52.084958       7 leaderelection.go:243] attempting to acquire leader lease benchmark-operator/ripsaw...
{"level":"info","ts":1629374512.0849795,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0819 12:02:09.497906       7 leaderelection.go:253] successfully acquired lease benchmark-operator/ripsaw
{"level":"info","ts":1629374529.4986346,"logger":"controller-runtime.manager.controller.benchmark-controller","msg":"Starting EventSource","source":"kind source: ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark"}
{"level":"info","ts":1629374529.5996294,"logger":"controller-runtime.manager.controller.benchmark-controller","msg":"Starting Controller"}
{"level":"info","ts":1629374529.5997007,"logger":"controller-runtime.manager.controller.benchmark-controller","msg":"Starting workers","worker count":1}

@HughNhan
Collaborator

HughNhan commented Aug 24, 2021

@MuhammadMunir12 - the logs above were from the beginning, i.e. from deploying the operator. Can you capture the logs while applying your CR? Having said that, I see your earlier logs were:
NAME READY STATUS RESTARTS AGE
benchmark-operator-74ddd858c-rprkm 2/2 Running 0 6m1s
uperf-server-r192bc1.oss.labs-0-8712d0a5-2wpc6 1/1 Running 0 43s
This is from the older Operator.

Then you showed:
"oc logs benchmark-controller-manager-fbc89d7d9-56kfj -c manager"
This is the current Operator (I learned about the Operator changes as I started to look into your issue).
Let's continue with the new Operator: apply the CR and capture the logs.
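For example, something like this (a sketch; the CR filename is a placeholder):

$ oc apply -f uperf-cr.yaml
$ oc logs -f deployment/benchmark-controller-manager -c manager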

@MuhammadMunir12
Author

MuhammadMunir12 commented Aug 25, 2021

@HughNhan What I understand is: I have to delete the existing operator (benchmark-operator), redeploy it, and then run my CR for uperf as a benchmark. Right?
My current operator is installed using helm charts, as shown below:

$ helm list
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/core/openshift/auth/kubeconfig
NAME                    NAMESPACE       REVISION        UPDATED                                         STATUS          CHART                           APP VERSION
benchmark-operator      my-ripsaw       1               2021-08-06 21:13:50.305495078 +0200 +0200       deployed        benchmark-operator-0.1.0        1.16.0
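Presumably I would first uninstall the helm release before redeploying; a sketch using the release name and namespace from the listing above:

$ helm uninstall benchmark-operator -n my-ripsaw
$ oc delete namespace my-ripsaw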

@HughNhan
Collaborator

@MuhammadMunir12 - Let us start fresh.

  1. oc delete namespace my-ripsaw benchmark-operator
  2. Clone the repo: https://github.com/cloud-bulldozer/benchmark-operator.git
  3. 'cd benchmark-operator' and 'make deploy'
  4. Deploy your CR while capturing the logs with "oc logs benchmark-controller-manager-xxxx -c manager" (the steps are combined into a script below)
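For reference, the same steps combined (a sketch; the CR filename is a placeholder and this assumes oc and make are configured for your cluster):

$ oc delete namespace my-ripsaw benchmark-operator
$ git clone https://github.com/cloud-bulldozer/benchmark-operator.git
$ cd benchmark-operator
$ make deploy
$ oc apply -f your-uperf-cr.yaml -n benchmark-operator
$ oc logs -f deployment/benchmark-controller-manager -c manager -n benchmark-operator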

@MuhammadMunir12
Author

@HughNhan Hi, I have recreated the environment from scratch.

  1. Deleted the previous namespace, the benchmark, and the cloned repo.
  2. Recreated the namespace and recloned the git repo.

Helm doesn't install into the benchmark-operator namespace with the command from the guide:
$ helm install benchmark-operator . -n benchmark-operator --create-namespace
So I had to create a new namespace, "my-ripsaw", and it works there.

$ oc adm policy -n benchmark-operator add-scc-to-user privileged -z my-ripsaw
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "my-ripsaw"
$ helm install benchmark-operator . -n my-ripsaw --create-namespace
walk.go:74: found symbolic link in path: /home/core/benchmark-operator/charts/benchmark-operator/crds/crd.yaml resolves to /home/core/benchmark-operator/config/crd/bases/ripsaw.cloudbulldozer.io_benchmarks.yaml
NAME: benchmark-operator
LAST DEPLOYED: Thu Aug 26 07:17:32 2021
NAMESPACE: my-ripsaw
STATUS: deployed
REVISION: 1
TEST SUITE: None

$ helm list
NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                           APP VERSION
benchmark-operator      my-ripsaw       1               2021-08-26 07:17:32.785840401 -0500 CDT deployed        benchmark-operator-0.1.0        1.16.0

$ oc get pods -o wide -n benchmark-operator
NAME                                            READY   STATUS    RESTARTS   AGE     IP            NODE               NOMINATED NODE   READINESS GATES
benchmark-controller-manager-59cdbb75f7-h6f4t   2/2     Running   0          7m49s   10.131.0.43   r192bc2.oss.labs   <none>           <none>
$ oc get po
NAME                                             READY   STATUS    RESTARTS   AGE
benchmark-operator-77fbb45457-5txvp              2/2     Running   0          19m
uperf-server-r192bc1.oss.labs-0-d105c1ec-vrwwx   1/1     Running   0          2m24s

$ oc get benchmark
NAME    TYPE    STATE              METADATA STATE   SYSTEM METRICS   UUID                                   AGE
uperf   uperf   Starting Clients   not collected    Not collected    d105c1ec-cafd-526d-b7a1-28f1f69e646c   104s

I am still facing the same issue as mentioned above.

The logs are given below:
"oc logs benchmark-controller-manager-59cdbb75f7-h6f4t -c manager -n benchmark-operator"

{"level":"info","ts":1629980295.9051187,"logger":"cmd","msg":"Version","Go Version":"go1.16.6","GOOS":"linux","GOARCH":"amd64","ansible-operator":"v1.9.0","commit":"205e0a0c2df0715d133fbe2741db382c9c75a341"}
{"level":"info","ts":1629980295.9055843,"logger":"cmd","msg":"Environment variable OPERATOR_NAME has been deprecated, use --leader-election-id instead."}
{"level":"info","ts":1629980295.9055965,"logger":"cmd","msg":"Ignoring OPERATOR_NAME environment variable since --leader-election-id is set"}
{"level":"info","ts":1629980295.9056163,"logger":"cmd","msg":"Watching single namespace.","Namespace":"benchmark-operator"}
I0826 12:18:16.956708       7 request.go:655] Throttling request took 1.043424115s, request: GET:https://172.30.0.1:443/apis/k8s.cni.cncf.io/v1?timeout=32s
{"level":"info","ts":1629980298.5639784,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1629980298.564859,"logger":"cmd","msg":"Environment variable not set; using default value","Namespace":"benchmark-operator","envVar":"ANSIBLE_DEBUG_LOGS","ANSIBLE_DEBUG_LOGS":false}
{"level":"info","ts":1629980298.5648856,"logger":"ansible-controller","msg":"Watching resource","Options.Group":"ripsaw.cloudbulldozer.io","Options.Version":"v1alpha1","Options.Kind":"Benchmark"}
{"level":"info","ts":1629980298.5661426,"logger":"proxy","msg":"Starting to serve","Address":"127.0.0.1:8888"}
I0826 12:18:18.566324       7 leaderelection.go:243] attempting to acquire leader lease benchmark-operator/ripsaw...
{"level":"info","ts":1629980298.5663693,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I0826 12:18:49.115011       7 leaderelection.go:253] successfully acquired lease benchmark-operator/ripsaw
{"level":"info","ts":1629980329.1152506,"logger":"controller-runtime.manager.controller.benchmark-controller","msg":"Starting EventSource","source":"kind source: ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark"}
{"level":"info","ts":1629980329.215665,"logger":"controller-runtime.manager.controller.benchmark-controller","msg":"Starting Controller"}
{"level":"info","ts":1629980329.2157204,"logger":"controller-runtime.manager.controller.benchmark-controller","msg":"Starting workers","worker count":1}

Looking forward to your response.

@HughNhan
Collaborator

@MuhammadMunir12 - currently there might be issue(s) with the helm method. Not sure if you are seeing the same, but please do steps 1-4 as recommended above, especially step 3, "make deploy" (no helm).

@MuhammadMunir12
Author

@HughNhan Helm does not work with the command given in the operator's guide, "helm install benchmark-operator . -n benchmark-operator --create-namespace"; I have to create another namespace and then it works. Besides this, I didn't see any issue with the helm charts.

Deleted namespace my-ripsaw

$ oc delete namespaces my-ripsaw
namespace "my-ripsaw" deleted

I have cloned the operator from the repo.
(https://github.com/cloud-bulldozer/benchmark-operator.git)

Then run "make deploy" in the benchmark-operator directory.

$ make deploy
cd config/manager && /home/core/benchmark-operator/bin/kustomize edit set image controller=quay.io/cloud-bulldozer/benchmark-operator:latest
/home/core/benchmark-operator/bin/kustomize build config/default | kubectl apply -f -
namespace/benchmark-operator unchanged
customresourcedefinition.apiextensions.k8s.io/benchmarks.ripsaw.cloudbulldozer.io unchanged
serviceaccount/benchmark-operator unchanged
podsecuritypolicy.policy/benchmark-privileged unchanged
role.rbac.authorization.k8s.io/benchmark-leader-election-role unchanged
clusterrole.rbac.authorization.k8s.io/benchmark-manager-role unchanged
clusterrole.rbac.authorization.k8s.io/benchmark-metrics-reader unchanged
clusterrole.rbac.authorization.k8s.io/benchmark-proxy-role unchanged
rolebinding.rbac.authorization.k8s.io/benchmark-leader-election-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/benchmark-manager-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/benchmark-manager-self-provisioner unchanged
clusterrolebinding.rbac.authorization.k8s.io/benchmark-proxy-rolebinding unchanged
configmap/benchmark-manager-config unchanged
service/benchmark-controller-manager-metrics-service unchanged
deployment.apps/benchmark-controller-manager configured

Deployed Uperf CR

It is not working.

$ oc get benchmark
NAME    TYPE    STATE   METADATA STATE   SYSTEM METRICS   UUID   AGE
uperf   uperf                            Not collected           109s
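For more detail than this table gives, the resource itself and recent events can be inspected (a sketch, run in the CR's namespace):

$ oc describe benchmark uperf
$ oc get events --sort-by=.metadata.creationTimestamp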

So, what I understand is that we have to run it via helm charts.

@HughNhan
Collaborator

HughNhan commented Aug 30, 2021

MuhammadMunir12 - The namespace to use after "make deploy" is "benchmark-operator"; make sure your CR uses it (a minimal sketch follows). Also include the logs like you did before ("oc logs benchmark-controller-manager-xxxx -c manager") if there is ANY issue; we cannot debug with just "It is not working".
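A minimal sketch of the relevant CR fields (only the namespace changes; the rest of your spec stays as-is):

apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: uperf
  namespace: benchmark-operator
spec:
  workload:
    name: uperf
    # args unchanged from your CR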

@MuhammadMunir12
Author

@HughNhan I have updated the namespace; earlier I ran it under the openshift-sriov-network-operator namespace.

The benchmark is running now under: benchmark-operator namespace.

Still, the client pods are stuck in the starting state and only the server pod is running.

Logs from "oc logs benchmark-controller-manager-59cdbb75f7-4vtwk -c manager -n benchmark-operator "
show that an ansible playbook that benchmark runs is failing.

The logs end at:

----- Ansible Task Status Event StdOut (ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark, uperf/benchmark-operator) -----


PLAY RECAP *********************************************************************
localhost                  : ok=10   changed=0    unreachable=0    failed=1    skipped=23   rescued=1    ignored=0


----------
{"level":"error","ts":1630388206.7538419,"logger":"controller-runtime.manager.controller.benchmark-controller","msg":"Reconciler error","name":"uperf","namespace":"benchmark-operator","error":"event runner on failed","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.2.0/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.8.3/pkg/internal/controller/controller.go:302\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.8.3/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.8.3/pkg/internal/controller/controller.go:216\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/go/pkg/mod/k8s.io/apimachinery@v0.20.2/pkg/util/wait/wait.go:99"}

Whereas the detailed logs are:

--------------------------- Ansible Task StdOut -------------------------------

TASK [Get current state] *******************************************************
task path: /opt/ansible/playbooks/benchmark.yml:7

-------------------------------------------------------------------------------
{"level":"info","ts":1630388046.5482695,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"Get current state"}
{"level":"info","ts":1630388047.3077278,"logger":"proxy","msg":"Read object from cache","resource":{"IsResourceRequest":true,"Path":"/apis/ripsaw.cloudbulldozer.io/v1alpha1/namespaces/benchmark-operator/benchmarks/uperf","Verb":"get","APIPrefix":"apis","APIGroup":"ripsaw.cloudbulldozer.io","APIVersion":"v1alpha1","Namespace":"benchmark-operator","Resource":"benchmarks","Subresource":"","Name":"uperf","Parts":["benchmarks","uperf"]}}

--------------------------- Ansible Task StdOut -------------------------------

TASK [include_role : ceph_osd_cache_drop] **************************************
task path: /opt/ansible/playbooks/benchmark.yml:16

-------------------------------------------------------------------------------
{"level":"info","ts":1630388047.4003005,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"include_role : ceph_osd_cache_drop"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [include_role : kernel_cache_drop] ****************************************
task path: /opt/ansible/playbooks/benchmark.yml:20

-------------------------------------------------------------------------------
{"level":"info","ts":1630388047.4654365,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"include_role : kernel_cache_drop"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [Capture operator information] ********************************************
task path: /opt/ansible/playbooks/benchmark.yml:24

-------------------------------------------------------------------------------
{"level":"info","ts":1630388047.5283823,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"Capture operator information"}
{"level":"info","ts":1630388048.3908024,"logger":"proxy","msg":"Read object from cache","resource":{"IsResourceRequest":true,"Path":"/api/v1/namespaces/benchmark-operator/pods","Verb":"list","APIPrefix":"api","APIGroup":"","APIVersion":"v1","Namespace":"benchmark-operator","Resource":"pods","Subresource":"","Name":"","Parts":["pods"]}}

--------------------------- Ansible Task StdOut -------------------------------

TASK [include_role : uuid] *****************************************************
task path: /opt/ansible/playbooks/benchmark.yml:36

-------------------------------------------------------------------------------
{"level":"info","ts":1630388048.499797,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"include_role : uuid"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [Setting the uuid for the benchmark] **************************************
task path: /opt/ansible/playbooks/benchmark.yml:39

-------------------------------------------------------------------------------
{"level":"info","ts":1630388048.5672677,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"Setting the uuid for the benchmark"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [include_role : backpack] *************************************************
task path: /opt/ansible/playbooks/benchmark.yml:74

-------------------------------------------------------------------------------
{"level":"info","ts":1630388048.7116446,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"include_role : backpack"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [operator_sdk.util.k8s_status] ********************************************
task path: /opt/ansible/playbooks/benchmark.yml:78

-------------------------------------------------------------------------------
{"level":"info","ts":1630388048.7845578,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"operator_sdk.util.k8s_status"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [include_role : common] ***************************************************
task path: /opt/ansible/playbooks/benchmark.yml:91

-------------------------------------------------------------------------------
{"level":"info","ts":1630388048.8569582,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"include_role : common"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [common : Get Network Policy] *********************************************
task path: /opt/ansible/roles/common/tasks/main.yml:3

-------------------------------------------------------------------------------
{"level":"info","ts":1630388048.9362886,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"common : Get Network Policy"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [common : Create Network policy if enabled] *******************************
task path: /opt/ansible/roles/common/tasks/main.yml:11

-------------------------------------------------------------------------------
{"level":"info","ts":1630388049.0113218,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"common : Create Network policy if enabled"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [Set Building state] ******************************************************
task path: /opt/ansible/playbooks/benchmark.yml:94

-------------------------------------------------------------------------------
{"level":"info","ts":1630388049.088304,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"Set Building state"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [include_role : {{ workload.name }}] **************************************
task path: /opt/ansible/playbooks/benchmark.yml:102

-------------------------------------------------------------------------------
{"level":"info","ts":1630388049.1607623,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"include_role : {{ workload.name }}"}
{"level":"info","ts":1630388049.303652,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:3

-------------------------------------------------------------------------------

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : List Nodes Labeled as Workers] ***********************************
task path: /opt/ansible/roles/uperf/tasks/setup.yml:7

-------------------------------------------------------------------------------
{"level":"info","ts":1630388049.345255,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : List Nodes Labeled as Workers"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : List Nodes Labeled with {{ workload_args.exclude_label }}] *******
task path: /opt/ansible/roles/uperf/tasks/setup.yml:20

-------------------------------------------------------------------------------
{"level":"info","ts":1630388049.4053884,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : List Nodes Labeled with {{ workload_args.exclude_label }}"}

--------------------------- Ansible Task StdOut -------------------------------

 TASK [Add "Pin" server and client node to worker list.] ********************************
ok: [localhost] => (item=r192bc1.oss.labs) => {
    "ansible_facts": {
        "worker_node_list": [
            "r192bc1.oss.labs"
        ]
    },
    "ansible_loop_var": "item",
    "changed": false,
    "item": "r192bc1.oss.labs"
}

-------------------------------------------------------------------------------

--------------------------- Ansible Task StdOut -------------------------------

 TASK [Add "Pin" server and client node to worker list.] ********************************
ok: [localhost] => (item=r192bc1.oss.labs) => {
    "ansible_facts": {
        "worker_node_list": [
            "r192bc1.oss.labs",
            "r192bc1.oss.labs"
        ]
    },
    "ansible_loop_var": "item",
    "changed": false,
    "item": "r192bc1.oss.labs"
}

-------------------------------------------------------------------------------

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : Capture ServiceIP] ***********************************************
task path: /opt/ansible/roles/uperf/tasks/setup.yml:122

-------------------------------------------------------------------------------
{"level":"info","ts":1630388049.8033717,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : Capture ServiceIP"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:5

-------------------------------------------------------------------------------
{"level":"info","ts":1630388049.8290696,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:10

-------------------------------------------------------------------------------
{"level":"info","ts":1630388049.8551884,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : Get server pods] *************************************************
task path: /opt/ansible/roles/uperf/tasks/wait_server_ready.yml:5

-------------------------------------------------------------------------------
{"level":"info","ts":1630388049.8935955,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : Get server pods"}
{"level":"info","ts":1630388050.5700636,"logger":"proxy","msg":"Read object from cache","resource":{"IsResourceRequest":true,"Path":"/api/v1/namespaces/benchmark-operator/pods","Verb":"list","APIPrefix":"api","APIGroup":"","APIVersion":"v1","Namespace":"benchmark-operator","Resource":"pods","Subresource":"","Name":"","Parts":["pods"]}}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : Update resource state] *******************************************
task path: /opt/ansible/roles/uperf/tasks/wait_server_ready.yml:14

-------------------------------------------------------------------------------
{"level":"info","ts":1630388050.6866033,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : Update resource state"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : Wait for vms to be running....] **********************************
task path: /opt/ansible/roles/uperf/tasks/wait_server_ready.yml:29

-------------------------------------------------------------------------------
{"level":"info","ts":1630388050.7182832,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : Wait for vms to be running...."}
{"level":"info","ts":1630388050.7501733,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : Update resource state"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : Update resource state] *******************************************
task path: /opt/ansible/roles/uperf/tasks/wait_server_ready.yml:38

-------------------------------------------------------------------------------
{"level":"info","ts":1630388050.782344,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"blocking client from running uperf"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [blocking client from running uperf] **************************************
task path: /opt/ansible/roles/uperf/tasks/wait_server_ready.yml:48

-------------------------------------------------------------------------------

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:13

-------------------------------------------------------------------------------
{"level":"info","ts":1630388050.814401,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}
{"level":"info","ts":1630388050.8436625,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:16

-------------------------------------------------------------------------------

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:22

-------------------------------------------------------------------------------
{"level":"info","ts":1630388050.8717268,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}
{"level":"info","ts":1630388050.8995357,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:25

-------------------------------------------------------------------------------
{"level":"info","ts":1630388050.9276745,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:29

-------------------------------------------------------------------------------

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:36

-------------------------------------------------------------------------------
{"level":"info","ts":1630388050.955355,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}
{"level":"info","ts":1630388050.983659,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:39

-------------------------------------------------------------------------------

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:50

-------------------------------------------------------------------------------
{"level":"info","ts":1630388051.0235195,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:53

-------------------------------------------------------------------------------
{"level":"info","ts":1630388051.053301,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:56

-------------------------------------------------------------------------------
{"level":"info","ts":1630388051.0797327,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}
{"level":"info","ts":1630388051.1053662,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:59

-------------------------------------------------------------------------------

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:62

-------------------------------------------------------------------------------
{"level":"info","ts":1630388051.1313999,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}
{"level":"info","ts":1630388051.1568048,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"uperf : include_tasks"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:65

-------------------------------------------------------------------------------
{"level":"info","ts":1630388051.1825569,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"7827257152212568400","EventData.Name":"include_role : system-metrics"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [include_role : system-metrics] *******************************************
task path: /opt/ansible/playbooks/benchmark.yml:111

-------------------------------------------------------------------------------
{"level":"info","ts":1630388051.4055846,"logger":"runner","msg":"Ansible-runner exited successfully","job":"7827257152212568400","name":"uperf","namespace":"benchmark-operator"}

----- Ansible Task Status Event StdOut (ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark, uperf/benchmark-operator) -----


PLAY RECAP *********************************************************************
localhost                  : ok=9    changed=0    unreachable=0    failed=0    skipped=40   rescued=0    ignored=0


----------


----------
{"level":"info","ts":1630388059.91983,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"Get current state"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [Get current state] *******************************************************
task path: /opt/ansible/playbooks/benchmark.yml:7

-------------------------------------------------------------------------------
{"level":"info","ts":1630388060.67005,"logger":"proxy","msg":"Read object from cache","resource":{"IsResourceRequest":true,"Path":"/apis/ripsaw.cloudbulldozer.io/v1alpha1/namespaces/benchmark-operator/benchmarks/uperf","Verb":"get","APIPrefix":"apis","APIGroup":"ripsaw.cloudbulldozer.io","APIVersion":"v1alpha1","Namespace":"benchmark-operator","Resource":"benchmarks","Subresource":"","Name":"uperf","Parts":["benchmarks","uperf"]}}
{"level":"info","ts":1630388060.7629437,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"include_role : ceph_osd_cache_drop"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [include_role : ceph_osd_cache_drop] **************************************
task path: /opt/ansible/playbooks/benchmark.yml:16

-------------------------------------------------------------------------------
{"level":"info","ts":1630388060.8280022,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"include_role : kernel_cache_drop"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [include_role : kernel_cache_drop] ****************************************
task path: /opt/ansible/playbooks/benchmark.yml:20

-------------------------------------------------------------------------------
{"level":"info","ts":1630388060.89464,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"Capture operator information"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [Capture operator information] ********************************************
task path: /opt/ansible/playbooks/benchmark.yml:24

-------------------------------------------------------------------------------
{"level":"info","ts":1630388061.7469473,"logger":"proxy","msg":"Read object from cache","resource":{"IsResourceRequest":true,"Path":"/api/v1/namespaces/benchmark-operator/pods","Verb":"list","APIPrefix":"api","APIGroup":"","APIVersion":"v1","Namespace":"benchmark-operator","Resource":"pods","Subresource":"","Name":"","Parts":["pods"]}}
{"level":"info","ts":1630388061.8532262,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"include_role : uuid"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [include_role : uuid] *****************************************************
task path: /opt/ansible/playbooks/benchmark.yml:36

-------------------------------------------------------------------------------
{"level":"info","ts":1630388061.9207306,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"Setting the uuid for the benchmark"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [Setting the uuid for the benchmark] **************************************
task path: /opt/ansible/playbooks/benchmark.yml:39

-------------------------------------------------------------------------------
{"level":"info","ts":1630388062.0656447,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"include_role : backpack"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [include_role : backpack] *************************************************
task path: /opt/ansible/playbooks/benchmark.yml:74

-------------------------------------------------------------------------------
{"level":"info","ts":1630388062.1384456,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"operator_sdk.util.k8s_status"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [operator_sdk.util.k8s_status] ********************************************
task path: /opt/ansible/playbooks/benchmark.yml:78

-------------------------------------------------------------------------------
{"level":"info","ts":1630388062.2116613,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"include_role : common"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [include_role : common] ***************************************************
task path: /opt/ansible/playbooks/benchmark.yml:91

-------------------------------------------------------------------------------

--------------------------- Ansible Task StdOut -------------------------------

TASK [common : Get Network Policy] *********************************************
task path: /opt/ansible/roles/common/tasks/main.yml:3

-------------------------------------------------------------------------------
{"level":"info","ts":1630388062.2925026,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"common : Get Network Policy"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [common : Create Network policy if enabled] *******************************
task path: /opt/ansible/roles/common/tasks/main.yml:11

-------------------------------------------------------------------------------
{"level":"info","ts":1630388062.368399,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"common : Create Network policy if enabled"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [Set Building state] ******************************************************
task path: /opt/ansible/playbooks/benchmark.yml:94

-------------------------------------------------------------------------------
{"level":"info","ts":1630388062.4442978,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"Set Building state"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [include_role : {{ workload.name }}] **************************************
task path: /opt/ansible/playbooks/benchmark.yml:102

-------------------------------------------------------------------------------
{"level":"info","ts":1630388062.519526,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"include_role : {{ workload.name }}"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:3

-------------------------------------------------------------------------------
{"level":"info","ts":1630388062.6609912,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"uperf : include_tasks"}
{"level":"info","ts":1630388062.702125,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"uperf : List Nodes Labeled as Workers"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : List Nodes Labeled as Workers] ***********************************
task path: /opt/ansible/roles/uperf/tasks/setup.yml:7

-------------------------------------------------------------------------------
{"level":"info","ts":1630388062.762267,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"uperf : List Nodes Labeled with {{ workload_args.exclude_label }}"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : List Nodes Labeled with {{ workload_args.exclude_label }}] *******
task path: /opt/ansible/roles/uperf/tasks/setup.yml:20

-------------------------------------------------------------------------------

--------------------------- Ansible Task StdOut -------------------------------

 TASK [Add "Pin" server and client node to worker list.] ********************************
ok: [localhost] => (item=r192bc1.oss.labs) => {
    "ansible_facts": {
        "worker_node_list": [
            "r192bc1.oss.labs"
        ]
    },
    "ansible_loop_var": "item",
    "changed": false,
    "item": "r192bc1.oss.labs"
}

-------------------------------------------------------------------------------

--------------------------- Ansible Task StdOut -------------------------------

 TASK [Add "Pin" server and client node to worker list.] ********************************
ok: [localhost] => (item=r192bc1.oss.labs) => {
    "ansible_facts": {
        "worker_node_list": [
            "r192bc1.oss.labs",
            "r192bc1.oss.labs"
        ]
    },
    "ansible_loop_var": "item",
    "changed": false,
    "item": "r192bc1.oss.labs"
}

-------------------------------------------------------------------------------
{"level":"info","ts":1630388063.1882157,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"uperf : Capture ServiceIP"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : Capture ServiceIP] ***********************************************
task path: /opt/ansible/roles/uperf/tasks/setup.yml:122

-------------------------------------------------------------------------------

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:5

-------------------------------------------------------------------------------
{"level":"info","ts":1630388063.216841,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"uperf : include_tasks"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:10

-------------------------------------------------------------------------------
{"level":"info","ts":1630388063.2449946,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"uperf : include_tasks"}
{"level":"info","ts":1630388063.2756739,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"uperf : include_tasks"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : include_tasks] ***************************************************
task path: /opt/ansible/roles/uperf/tasks/main.yml:13

-------------------------------------------------------------------------------
{"level":"info","ts":1630388063.3196073,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"uperf : Get pod info"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : Get pod info] ****************************************************
task path: /opt/ansible/roles/uperf/tasks/start_client.yml:3

-------------------------------------------------------------------------------
{"level":"info","ts":1630388063.9924686,"logger":"proxy","msg":"Read object from cache","resource":{"IsResourceRequest":true,"Path":"/api/v1/namespaces/benchmark-operator/pods","Verb":"list","APIPrefix":"api","APIGroup":"","APIVersion":"v1","Namespace":"benchmark-operator","Resource":"pods","Subresource":"","Name":"","Parts":["pods"]}}

--------------------------- Ansible Task StdOut -------------------------------

TASK [Generate uperf xml files] ************************************************
task path: /opt/ansible/roles/uperf/tasks/start_client.yml:12

-------------------------------------------------------------------------------
{"level":"info","ts":1630388064.0889373,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"Generate uperf xml files"}
{"level":"info","ts":1630388065.0761564,"logger":"proxy","msg":"Cache miss: /v1, Kind=ConfigMap, benchmark-operator/uperf-test-6ba69223"}
{"level":"info","ts":1630388065.0792608,"logger":"proxy","msg":"Injecting owner reference"}
{"level":"info","ts":1630388065.0795078,"logger":"proxy","msg":"Watching child resource","kind":"/v1, Kind=ConfigMap","enqueue_kind":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark"}
{"level":"info","ts":1630388065.0795372,"logger":"controller-runtime.manager.controller.benchmark-controller","msg":"Starting EventSource","source":"kind source: /v1, Kind=ConfigMap"}
{"level":"info","ts":1630388065.1866348,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"uperf : Start Client(s) w/o serviceIP"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [uperf : Start Client(s) w/o serviceIP] ***********************************
task path: /opt/ansible/roles/uperf/tasks/start_client.yml:18

-------------------------------------------------------------------------------
{"level":"error","ts":1630388065.2695687,"logger":"logging_event_handler","msg":"","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"runner_on_failed","job":"1960528713598030433","EventData.Task":"Start Client(s) w/o serviceIP","EventData.TaskArgs":"","EventData.FailedTaskPath":"/opt/ansible/roles/uperf/tasks/start_client.yml:18","error":"[playbook task failed]","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.2.0/zapr.go:132\ngitpro.ttaallkk.top/operator-framework/operator-sdk/internal/ansible/events.loggingEventHandler.Handle\n\t/workspace/internal/ansible/events/log_events.go:110"}

--------------------------- Ansible Task StdOut -------------------------------

 TASK [Start Client(s) w/o serviceIP] ********************************
fatal: [localhost]: FAILED! => {
    "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'url'\n\nThe error appears to be in '/opt/ansible/roles/uperf/tasks/start_client.yml': line 18, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n  ### <POD> kind\n  - name: Start Client(s) w/o serviceIP\n    ^ here\n"
}

-------------------------------------------------------------------------------
{"level":"info","ts":1630388065.2727513,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"include_role : benchmark_state"}

--------------------------- Ansible Task StdOut -------------------------------

TASK [include_role : benchmark_state] ******************************************
task path: /opt/ansible/playbooks/benchmark.yml:68

-------------------------------------------------------------------------------

--------------------------- Ansible Task StdOut -------------------------------

TASK [benchmark_state : Failure State] *****************************************
task path: /opt/ansible/roles/benchmark_state/tasks/failure.yml:2

-------------------------------------------------------------------------------
{"level":"info","ts":1630388065.3033872,"logger":"logging_event_handler","msg":"[playbook task start]","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"playbook_on_task_start","job":"1960528713598030433","EventData.Name":"benchmark_state : Failure State"}
{"level":"error","ts":1630388065.3339794,"logger":"logging_event_handler","msg":"","name":"uperf","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"runner_on_failed","job":"1960528713598030433","EventData.Task":"Failure State","EventData.TaskArgs":"","EventData.FailedTaskPath":"/opt/ansible/roles/benchmark_state/tasks/failure.yml:2","error":"[playbook task failed]","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.2.0/zapr.go:132\ngitpro.ttaallkk.top/operator-framework/operator-sdk/internal/ansible/events.loggingEventHandler.Handle\n\t/workspace/internal/ansible/events/log_events.go:110"}

--------------------------- Ansible Task StdOut -------------------------------

 TASK [Failure State] ********************************
fatal: [localhost]: FAILED! => {
    "msg": "An unhandled exception occurred while templating '{'args': {'definition': \"{{ lookup('template', 'workload.yml.j2') | from_yaml }}\"}, 'action': 'k8s', 'async_val': 0, 'async': 0, 'changed_when': [], 'delay': 5, 'delegate_to': None, 'delegate_facts': None, 'failed_when': [], 'loop': None, 'loop_control': None, 'notify': None, 'poll': 15, 'register': None, 'retries': 3, 'until': [], 'loop_with': None, 'name': 'Start Client(s) w/o serviceIP', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {'resource_item': '{{ server_pods.resources }}'}, 'module_defaults': [], 'environment': [], 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': ['benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool', 'resource_kind == \"pod\"', 'benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool', 'resource_kind == \"pod\"', 'workload_args.serviceip|default(False) == False and server_pods.resources|length > 0'], 'tags': [], 'collections': [], 'uuid': '0a580a83-002d-371e-c158-00000000014b', 'finalized': False, 'squashed': False, 'parent': {'delegate_to': None, 'delegate_facts': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {}, 'module_defaults': [], 'environment': [], 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': ['benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool', 'resource_kind == \"pod\"'], 'collections': [], 'tags': [], 'dep_chain': [uperf], 'eor': False, 'role': {'delegate_to': None, 'delegate_facts': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {'workload_args': '{{ workload.args }}'}, 'module_defaults': None, 'environment': None, 'no_log': None, 'run_once': 
None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': [], 'tags': [], 'collections': [], 'uuid': '0a580a83-002d-371e-c158-000000000083', 'finalized': False, 'squashed': False, '_role_name': 'uperf', '_role_path': '/opt/ansible/roles/uperf', '_role_vars': {'cleanup': True, 'worker_node_list': [], 'pod_low_idx': '0', 'pod_hi_idx': '0', 'node_low_idx': '0', 'node_hi_idx': '0', 'node_idx': '0', 'pod_idx': '0', 'all_run_done': False}, '_role_params': {}, '_default_vars': {'resource_kind': \"{{ workload.args.kind | default('pod') }}\", 'uperf': {'proto': 'tcp', 'test_type': 'stream', 'nthr': 1, 'size': 1024, 'runtime': 60}}, '_had_task_run': {'localhost': True}, '_completed': {}, '_metadata': {'allow_duplicates': <ansible.playbook.attribute.FieldAttribute object at 0x7f93fce70dc0>, 'dependencies': <ansible.playbook.attribute.FieldAttribute object at 0x7f93fce70df0>}, '_dependencies': [], '_parents': []}, 'parent': {'static': None, 'args': {'_raw_params': 'start_client.yml'}, 'action': 'include_tasks', 'async_val': 0, 'async': 0, 'changed_when': [], 'delay': 5, 'delegate_to': None, 'delegate_facts': None, 'failed_when': [], 'loop': None, 'loop_control': None, 'notify': None, 'poll': 15, 'register': None, 'retries': 3, 'until': [], 'loop_with': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {}, 'module_defaults': [], 'environment': [], 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': ['benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool', 'resource_kind == \"pod\"', 'benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool', 'resource_kind == \"pod\"', 'benchmark_state.resources[0].status.state == \"Starting Clients\"'], 'tags': [], 'collections': [], 'uuid': '0a580a83-002d-371e-c158-00000000008a', 'finalized': False, 'squashed': False, 'parent': {'delegate_to': None, 'delegate_facts': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {}, 'module_defaults': [], 'environment': [], 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': 
False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': ['benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool', 'resource_kind == \"pod\"'], 'collections': [], 'tags': [], 'dep_chain': [uperf], 'eor': False, 'role': {'delegate_to': None, 'delegate_facts': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {'workload_args': '{{ workload.args }}'}, 'module_defaults': None, 'environment': None, 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': [], 'tags': [], 'collections': [], 'uuid': '0a580a83-002d-371e-c158-000000000083', 'finalized': False, 'squashed': False, '_role_name': 'uperf', '_role_path': '/opt/ansible/roles/uperf', '_role_vars': {'cleanup': True, 'worker_node_list': [], 'pod_low_idx': '0', 'pod_hi_idx': '0', 'node_low_idx': '0', 'node_hi_idx': '0', 'node_idx': '0', 'pod_idx': '0', 'all_run_done': False}, '_role_params': {}, '_default_vars': {'resource_kind': \"{{ workload.args.kind | default('pod') }}\", 'uperf': {'proto': 'tcp', 'test_type': 'stream', 'nthr': 1, 'size': 1024, 'runtime': 60}}, '_had_task_run': {'localhost': True}, '_completed': {}, '_metadata': {'allow_duplicates': <ansible.playbook.attribute.FieldAttribute object at 0x7f93fce70dc0>, 'dependencies': <ansible.playbook.attribute.FieldAttribute object at 0x7f93fce70df0>}, '_dependencies': [], '_parents': []}, 'parent': {'allow_duplicates': True, 'public': False, 'static': None, 'args': {'name': '{{ workload.name }}'}, 'action': 'include_role', 'async_val': 0, 'async': 0, 'changed_when': [], 'delay': 5, 'delegate_to': None, 'delegate_facts': None, 'failed_when': [], 'loop': None, 'loop_control': None, 'notify': None, 'poll': 15, 'register': None, 'retries': 3, 'until': [], 'loop_with': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {'workload_args': '{{ workload.args }}'}, 'module_defaults': [], 'environment': [], 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': ['benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool', 'benchmark_state is defined and benchmark_state.resources[0].status is defined and not 
benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool'], 'tags': [], 'collections': [], 'uuid': '0a580a83-002d-371e-c158-00000000001a', 'finalized': False, 'squashed': False, 'parent': {'delegate_to': None, 'delegate_facts': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {}, 'module_defaults': [], 'environment': [], 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': ['benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool'], 'collections': ['operator_sdk.util', 'ansible.legacy'], 'tags': [], 'dep_chain': None, 'eor': False, 'parent': {'delegate_to': None, 'delegate_facts': None, 'name': 'Run Workload', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {}, 'module_defaults': None, 'environment': None, 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': ['benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")'], 'collections': ['operator_sdk.util', 'ansible.legacy'], 'tags': [], 'dep_chain': None, 'eor': False}, 'parent_type': 'Block'}, 'parent_type': 'Block'}, 'parent_type': 'IncludeRole'}, 'parent_type': 'Block', 'role': {'delegate_to': None, 'delegate_facts': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {'workload_args': '{{ workload.args }}'}, 'module_defaults': None, 'environment': None, 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': [], 'tags': [], 'collections': [], 'uuid': '0a580a83-002d-371e-c158-000000000083', 'finalized': False, 'squashed': False, '_role_name': 'uperf', '_role_path': '/opt/ansible/roles/uperf', '_role_vars': {'cleanup': True, 'worker_node_list': [], 'pod_low_idx': '0', 'pod_hi_idx': '0', 'node_low_idx': '0', 'node_hi_idx': '0', 'node_idx': '0', 'pod_idx': '0', 'all_run_done': False}, '_role_params': {}, '_default_vars': 
{'resource_kind': \"{{ workload.args.kind | default('pod') }}\", 'uperf': {'proto': 'tcp', 'test_type': 'stream', 'nthr': 1, 'size': 1024, 'runtime': 60}}, '_had_task_run': {'localhost': True}, '_completed': {}, '_metadata': {'allow_duplicates': <ansible.playbook.attribute.FieldAttribute object at 0x7f93fce70dc0>, 'dependencies': <ansible.playbook.attribute.FieldAttribute object at 0x7f93fce70df0>}, '_dependencies': [], '_parents': []}}, 'parent_type': 'TaskInclude'}, 'parent_type': 'Block', 'role': {'delegate_to': None, 'delegate_facts': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {'workload_args': '{{ workload.args }}'}, 'module_defaults': None, 'environment': None, 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': [], 'tags': [], 'collections': [], 'uuid': '0a580a83-002d-371e-c158-000000000083', 'finalized': False, 'squashed': False, '_role_name': 'uperf', '_role_path': '/opt/ansible/roles/uperf', '_role_vars': {'cleanup': True, 'worker_node_list': [], 'pod_low_idx': '0', 'pod_hi_idx': '0', 'node_low_idx': '0', 'node_hi_idx': '0', 'node_idx': '0', 'pod_idx': '0', 'all_run_done': False}, '_role_params': {}, '_default_vars': {'resource_kind': \"{{ workload.args.kind | default('pod') }}\", 'uperf': {'proto': 'tcp', 'test_type': 'stream', 'nthr': 1, 'size': 1024, 'runtime': 60}}, '_had_task_run': {'localhost': True}, '_completed': {}, '_metadata': {'allow_duplicates': <ansible.playbook.attribute.FieldAttribute object at 0x7f93fce70dc0>, 'dependencies': <ansible.playbook.attribute.FieldAttribute object at 0x7f93fce70df0>}, '_dependencies': [], '_parents': []}}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while running the lookup plugin 'template'. Error was a <class 'ansible.errors.AnsibleError'>, original message: the template file workload.yml.j2 could not be found for the lookup"
}

-------------------------------------------------------------------------------

@HughNhan
Collaborator

@MuhammadMunir12 - your CR is not right. For now, if you do not have ES, just remove both "elasticsearch" and "url".

TASK [Start Client(s) w/o serviceIP] ********************************
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'url'\n\nThe error appears to be in '/opt/ansible/roles/uperf/tasks/start_client.yml': line 18, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n ### kind\n - name: Start Client(s) w/o serviceIP\n ^ here\n"
}
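
For context: this failure is the workload template dereferencing elasticsearch.url when the CR never defines it. In Jinja2 terms, the pattern looks something like this (a sketch to illustrate the error, not the operator's actual template code):

# failing pattern (sketch): dereferences a key that may not be defined
es_url: "{{ elasticsearch.url }}"

# guarded pattern (sketch): Ansible's default filter catches the undefined attribute
es_url: "{{ elasticsearch.url | default('') }}"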

@MuhammadMunir12
Author

@HughNhan I am not using the elasticsearch or url parameters in my CR. It's the same CR1 as mentioned above:

apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: uperf
  namespace: benchmark-operator
spec:
  workload:
    name: uperf
    args:
      hostnetwork: false
      serviceip: false
      pin: true
      pin_server: "r192bc1.oss.labs"
      pin_client: "r192bc1.oss.labs"
      samples: 1
      pair: 1
      test_types:
        - rr
      protos:
        - tcp
      sizes:
        - 1024
      nthrs:
        - 1
      runtime: 60

After commenting out the hostnetwork and serviceip arguments:

apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: uperf
  namespace: benchmark-operator # my-ripsaw
spec:
  workload:
    name: uperf
    args:
            #hostnetwork: false
            #serviceip: false
      pin: true
      pin_server: "r192bc1.oss.labs"
      pin_client: "r192bc1.oss.labs"
      samples: 1
      pair: 1
      test_types:
        - rr
      protos:
        - tcp
      sizes:
        - 1024
      nthrs:
        - 1
      runtime: 60

The error message appears to be the same:

 TASK [Start Client(s) w/o serviceIP] ********************************
fatal: [localhost]: FAILED! => {
    "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'url'\n\nThe error appears to be in '/opt/ansible/roles/uperf/tasks/start_client.yml': line 18, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n  ### <POD> kind\n  - name: Start Client(s) w/o serviceIP\n    ^ here\n"
}

@HughNhan
Collaborator

HughNhan commented Aug 31, 2021

@MuhammadMunir12 - Let's try this. Add a bogus elasticsearch URL to your CR, under spec (make sure the indentation is right):

  elasticsearch:
    url: www.example.com

This will allow the client to come up, and UPERF will run. Since this ES is not real, there will be an error at the end of the run when UPERF tries to index the results to ES, but you may not see that. I will track down the regression that requires the "elasticsearch" variable to be defined even when the CR does not have it.

@MuhammadMunir12
Author

@HughNhan With the bogus value for ES, it is still failing on the same Ansible task that starts the client pod.
The updated CR is given below:

$ cat uperf.yaml
apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: uperf
  namespace: benchmark-operator # my-ripsaw
spec:
  elasticsearch:
    url: "www.example.com"
  workload:
    name: uperf
    args:
            #hostnetwork: false
            #serviceip: false
      pin: true
      pin_server: "r192bc1.oss.labs"
      pin_client: "r192bc1.oss.labs"
      samples: 1
      pair: 1
      test_types:
        - rr
      protos:
        - tcp
      sizes:
        - 1024
      nthrs:
        - 1
      runtime: 60

@MuhammadMunir12
Author

@HughNhan I have been able to run it. Both the client and server pods are running.
But I am unable to see stats for the traffic being sent or received in the server or client pod logs, unlike iperf3, where we get traffic information from the client pod logs. The client logs are:

$ oc logs uperf-client-10.128.3.242-9fc9031a-szj6b
UPERF-run-context num_node= 1 density= 1 my_node_idx= 0 my_pod_idx= 0
<?xml version="1.0"?>
<profile name="rr-tcp-1024-1024-1">
  <group nthreads="1">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=$h protocol=tcp"/>
    </transaction>
    <transaction duration="60">
      <flowop type=write options="size=1024"/>
      <flowop type=read  options="size=1024"/>
    </transaction>
    <transaction iterations="1">
      <flowop type=disconnect />
    </transaction>
  </group>
</profile>
2021-09-01T07:35:41Z - INFO     - MainProcess - run_snafu: logging level is INFO
2021-09-01T07:35:41Z - INFO     - MainProcess - _load_benchmarks: Successfully imported 1 benchmark modules: uperf
2021-09-01T07:35:41Z - INFO     - MainProcess - _load_benchmarks: Failed to import 0 benchmark modules:
2021-09-01T07:35:41Z - INFO     - MainProcess - run_snafu: Using elasticsearch server with host: www.example.com
2021-09-01T07:35:41Z - INFO     - MainProcess - run_snafu: Using index prefix for ES: ripsaw-uperf
2021-09-01T07:35:41Z - INFO     - MainProcess - run_snafu: Connected to the elasticsearch cluster with info as follows:
2021-09-01T07:36:21Z - WARNING  - MainProcess - run_snafu: Elasticsearch connection caused an exception: ConnectionError(<urllib3.connection.HTTPConnection object at 0x7f1a85188ba8>: Failed to establish a new connection: [Errno 101] Network is unreachable) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x7f1a85188ba8>: Failed to establish a new connection: [Errno 101] Network is unreachable)
2021-09-01T07:36:21Z - INFO     - MainProcess - run_snafu: Not connected to Elasticsearch
2021-09-01T07:36:21Z - INFO     - MainProcess - wrapper_factory: identified uperf as the benchmark wrapper
2021-09-01T07:36:21Z - INFO     - MainProcess - _benchmark: Starting uperf wrapper.
2021-09-01T07:36:21Z - INFO     - MainProcess - _benchmark: Running setup tasks.
2021-09-01T07:36:21Z - INFO     - MainProcess - _benchmark: Collecting results from benchmark.
2021-09-01T07:36:21Z - INFO     - MainProcess - uperf: Collecting 1 sample of Uperf
2021-09-01T07:36:21Z - INFO     - MainProcess - process: Collecting 1 sample of command ['uperf', '-v', '-a', '-R', '-i', '1', '-m', '/tmp/uperf-test/uperf-rr-tcp-1024-1024-1']
2021-09-01T07:37:23Z - INFO     - MainProcess - uperf: Finished collecting sample 0
2021-09-01T07:37:23Z - WARNING  - MainProcess - uperf: The following params will be overwritten due to values found in workload profile name: test_type, protocol, num_threads
2021-09-01T07:37:23Z - INFO     - MainProcess - process: Finished collecting 1 sample for command ['uperf', '-v', '-a', '-R', '-i', '1', '-m', '/tmp/uperf-test/uperf-rr-tcp-1024-1024-1']
2021-09-01T07:37:23Z - INFO     - MainProcess - uperf: Successfully collected 1 sample of Uperf.
2021-09-01T07:37:23Z - INFO     - MainProcess - _benchmark: Cleaning up
2021-09-01T07:37:23Z - INFO     - MainProcess - run_snafu: Duration of execution - 0:01:02, with total size of 14640 bytes

Could you please guide me on how to gather the traffic information? If it needs an ES setup with Grafana, how can I set that up to see how much traffic is being sent and what the throughput is at a given time?
Your guidance will be highly appreciated. Thanks

@HughNhan
Collaborator

HughNhan commented Sep 1, 2021

@MuhammadMunir12 - After you pick up this merge, cloud-bulldozer/benchmark-wrapper#335, you should see UPERF stats in the client logs.

@MuhammadMunir12
Author

@HughNhan So here is what I'll do:

  1. Clone the wrapper repo (https://github.com/cloud-bulldozer/benchmark-wrapper.git)
  2. Run "make deploy" (if that is the right step, it is not working here; see the command sketch below)
  3. Run the CR for Uperf
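
Roughly, the commands I mean (a sketch of the steps above; I am not certain "make deploy" is the right target in the wrapper repo):

# step 1: clone the wrapper repo
git clone https://github.com/cloud-bulldozer/benchmark-wrapper.git
cd benchmark-wrapper
# step 2: deploy (this is the step that is not working for me)
make deploy
# step 3: apply the uperf Benchmark CR
oc create -f uperf.yaml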

Also, could you kindly tell me the main difference between these two repos: cloud-bulldozer/benchmark-operator and cloud-bulldozer/benchmark-wrapper?

@HughNhan
Collaborator

HughNhan commented Sep 2, 2021

@MuhammadMunir12 - Correction: the stats PR is cloud-bulldozer/benchmark-wrapper#332. The wrapper builds the container images; the operator creates the environment and pods. You can wait for a new uperf container image to become available after the PR merges. Or, you can use this private (unsupported) uperf image for now by adding:

  workload:
    args:
      image: quay.io/hnhan/benchmark-operator:uperf-snafu  # <==== add this line
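
In the context of the CR from earlier in this thread, the override sits alongside the other workload args, e.g. (a sketch reusing the values posted above; the image line is the only addition):

apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: uperf
  namespace: benchmark-operator
spec:
  elasticsearch:
    url: "www.example.com"
  workload:
    name: uperf
    args:
      image: quay.io/hnhan/benchmark-operator:uperf-snafu
      pin: true
      pin_server: "r192bc1.oss.labs"
      pin_client: "r192bc1.oss.labs"
      samples: 1
      pair: 1
      test_types:
        - rr
      protos:
        - tcp
      sizes:
        - 1024
      nthrs:
        - 1
      runtime: 60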

@HughNhan
Collaborator

@MuhammadMunir12 - If you have no more issues, please close this issue.

@Sispheor

Sispheor commented Dec 8, 2021

Same here.
I've installed the operator via a subscription:

oc get subscription ripsaw -n openshift-operators -o yaml

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  creationTimestamp: "2021-12-08T16:51:40Z"
  generation: 1
  labels:
    operators.coreos.com/ripsaw.openshift-operators: ""
  name: ripsaw
  namespace: openshift-operators
  resourceVersion: "3208888"
  uid: 0aac4d07-6341-425d-a295-e88cf2df04e4
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: ripsaw
  source: community-operators
  sourceNamespace: openshift-marketplace
  startingCSV: ripsaw.v0.1.0
status:
  catalogHealth:
  - catalogSourceRef:
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      name: certified-operators
      namespace: openshift-marketplace
      resourceVersion: "3183844"
      uid: 61ec3087-26e7-4714-b689-9b48b46b9d41
    healthy: true
    lastUpdated: "2021-12-08T16:51:40Z"
  - catalogSourceRef:
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      name: community-operators
      namespace: openshift-marketplace
      resourceVersion: "3185016"
      uid: b5a6c969-7477-47be-ae4a-eee3a0786e1e
    healthy: true
    lastUpdated: "2021-12-08T16:51:40Z"
  - catalogSourceRef:
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      name: redhat-marketplace
      namespace: openshift-marketplace
      resourceVersion: "3182400"
      uid: 92d562de-70e6-4af4-940e-df50465edef1
    healthy: true
    lastUpdated: "2021-12-08T16:51:40Z"
  - catalogSourceRef:
      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      name: redhat-operators
      namespace: openshift-marketplace
      resourceVersion: "3183842"
      uid: eed704a8-7835-494b-8512-559cda9dd421
    healthy: true
    lastUpdated: "2021-12-08T16:51:40Z"
  conditions:
  - lastTransitionTime: "2021-12-08T16:51:40Z"
    message: all available catalogsources are healthy
    reason: AllCatalogSourcesHealthy
    status: "False"
    type: CatalogSourcesUnhealthy
  - status: "False"
    type: ResolutionFailed
  currentCSV: ripsaw.v0.1.0
  installPlanGeneration: 1
  installPlanRef:
    apiVersion: operators.coreos.com/v1alpha1
    kind: InstallPlan
    name: install-wmsjh
    namespace: openshift-operators
    resourceVersion: "3187971"
    uid: b71fcd60-2d42-4e5f-9200-4d6d2fe40b74
  installedCSV: ripsaw.v0.1.0
  installplan:
    apiVersion: operators.coreos.com/v1alpha1
    kind: InstallPlan
    name: install-wmsjh
    uuid: b71fcd60-2d42-4e5f-9200-4d6d2fe40b74
  lastUpdated: "2021-12-08T17:16:26Z"
  state: AtLatestKnown

Then I created a Benchmark (default config taken from the docs at git version 0.1: https://github.com/cloud-bulldozer/benchmark-operator/blob/5757262604727addea352c7142726aac53840a91/docs/uperf.md)

- name: Deploy Uperf benchmark
  kubernetes.core.k8s:
    kubeconfig: "{{ ocp_ignition_file_path }}/auth/kubeconfig"
    state: present
    definition:
      apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
      kind: Benchmark
      metadata:
        name: uperf-benchmark
        namespace: benchmark-operator
      spec:
        workload:
          name: uperf
          args:
            client_resources:
              requests:
                cpu: 500m
                memory: 500Mi
              limits:
                cpu: 500m
                memory: 500Mi
            server_resources:
              requests:
                cpu: 500m
                memory: 500Mi
              limits:
                cpu: 500m
                memory: 500Mi
            serviceip: false
            runtime_class: class_name
            hostnetwork: false
            networkpolicy: false
            pin: false
            kind: pod
            pin_server: "{{ two_first_worker_nodes[0] }}"
            pin_client: "{{ two_first_worker_nodes[1] }}"
            pair: 1
            multus:
              enabled: false
            samples: 1
            test_types:
              - stream
            protos:
              - tcp
            sizes:
              - 16384
            nthrs:
              - 1
            runtime: 30
            colocate: false
            density_range: [low, high]
            node_range: [low, high]
            step_size: addN, log2

Logs:

oc get Benchmark uperf-benchmark -o yaml

apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  creationTimestamp: "2021-12-08T17:19:15Z"
  generation: 1
  name: uperf-benchmark
  namespace: benchmark-operator
  resourceVersion: "3211539"
  uid: 2c817576-ed25-487d-8fdb-eb2844262130
spec:
  metadata:
    collection: false
    force: false
    image: quay.io/cloud-bulldozer/backpack:latest
    privileged: false
    serviceaccount: default
    ssl: false
    stockpileSkipTags: []
    stockpileTags:
    - common
    - k8s
    - openshift
    targeted: true
  workload:
    args:
      client_resources:
        limits:
          cpu: 500m
          memory: 500Mi
        requests:
          cpu: 500m
          memory: 500Mi
      colocate: false
      density_range:
      - low
      - high
      hostnetwork: false
      kind: pod
      multus:
        enabled: false
      networkpolicy: false
      node_range:
      - low
      - high
      nthrs:
      - 1
      pair: 1
      pin: false
      pin_client: ocp4-03-worker-02.ocp4-03.gre.cloud4lab.local
      pin_server: ocp4-03-worker-01.ocp4-03.gre.cloud4lab.local
      protos:
      - tcp
      runtime: 30
      runtime_class: class_name
      samples: 1
      server_resources:
        limits:
          cpu: 500m
          memory: 500Mi
        requests:
          cpu: 500m
          memory: 500Mi
      serviceip: false
      sizes:
      - 16384
      step_size: addN, log2
      test_types:
      - stream
    name: uperf
status:
  complete: true
  message: "Reconcile failed on Start Server(s) - total = eligible nodes * density
    k8s \n ```{\n    \"_ansible_no_log\": false,\n    \"_ansible_parsed\": false,\n
    \   \"changed\": false,\n    \"exception\": \"Traceback (most recent call last):\\n
    \ File \\\"/opt/ansible/.ansible/tmp/ansible-tmp-1638983964.7026243-9703-254461825922483/AnsiballZ_k8s.py\\\",
    line 102, in <module>\\n    _ansiballz_main()\\n  File \\\"/opt/ansible/.ansible/tmp/ansible-tmp-1638983964.7026243-9703-254461825922483/AnsiballZ_k8s.py\\\",
    line 94, in _ansiballz_main\\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\\n
    \ File \\\"/opt/ansible/.ansible/tmp/ansible-tmp-1638983964.7026243-9703-254461825922483/AnsiballZ_k8s.py\\\",
    line 40, in invoke_module\\n    runpy.run_module(mod_name='ansible.modules.clustering.k8s.k8s',
    init_globals=None, run_name='__main__', alter_sys=True)\\n  File \\\"/usr/lib64/python3.8/runpy.py\\\",
    line 207, in run_module\\n    return _run_module_code(code, init_globals, run_name,
    mod_spec)\\n  File \\\"/usr/lib64/python3.8/runpy.py\\\", line 97, in _run_module_code\\n
    \   _run_code(code, mod_globals, init_globals,\\n  File \\\"/usr/lib64/python3.8/runpy.py\\\",
    line 87, in _run_code\\n    exec(code, run_globals)\\n  File \\\"/tmp/ansible_k8s_payload_1twytx6f/ansible_k8s_payload.zip/ansible/modules/clustering/k8s/k8s.py\\\",
    line 281, in <module>\\n  File \\\"/tmp/ansible_k8s_payload_1twytx6f/ansible_k8s_payload.zip/ansible/modules/clustering/k8s/k8s.py\\\",
    line 277, in main\\n  File \\\"/tmp/ansible_k8s_payload_1twytx6f/ansible_k8s_payload.zip/ansible/module_utils/k8s/raw.py\\\",
    line 179, in execute_module\\n  File \\\"/tmp/ansible_k8s_payload_1twytx6f/ansible_k8s_payload.zip/ansible/module_utils/k8s/raw.py\\\",
    line 163, in flatten_list_kind\\nTypeError: 'NoneType' object is not iterable\\n\",\n
    \   \"failed\": true,\n    \"module_stderr\": \"Traceback (most recent call last):\\n
    \ File \\\"/opt/ansible/.ansible/tmp/ansible-tmp-1638983964.7026243-9703-254461825922483/AnsiballZ_k8s.py\\\",
    line 102, in <module>\\n    _ansiballz_main()\\n  File \\\"/opt/ansible/.ansible/tmp/ansible-tmp-1638983964.7026243-9703-254461825922483/AnsiballZ_k8s.py\\\",
    line 94, in _ansiballz_main\\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\\n
    \ File \\\"/opt/ansible/.ansible/tmp/ansible-tmp-1638983964.7026243-9703-254461825922483/AnsiballZ_k8s.py\\\",
    line 40, in invoke_module\\n    runpy.run_module(mod_name='ansible.modules.clustering.k8s.k8s',
    init_globals=None, run_name='__main__', alter_sys=True)\\n  File \\\"/usr/lib64/python3.8/runpy.py\\\",
    line 207, in run_module\\n    return _run_module_code(code, init_globals, run_name,
    mod_spec)\\n  File \\\"/usr/lib64/python3.8/runpy.py\\\", line 97, in _run_module_code\\n
    \   _run_code(code, mod_globals, init_globals,\\n  File \\\"/usr/lib64/python3.8/runpy.py\\\",
    line 87, in _run_code\\n    exec(code, run_globals)\\n  File \\\"/tmp/ansible_k8s_payload_1twytx6f/ansible_k8s_payload.zip/ansible/modules/clustering/k8s/k8s.py\\\",
    line 281, in <module>\\n  File \\\"/tmp/ansible_k8s_payload_1twytx6f/ansible_k8s_payload.zip/ansible/modules/clustering/k8s/k8s.py\\\",
    line 277, in main\\n  File \\\"/tmp/ansible_k8s_payload_1twytx6f/ansible_k8s_payload.zip/ansible/module_utils/k8s/raw.py\\\",
    line 179, in execute_module\\n  File \\\"/tmp/ansible_k8s_payload_1twytx6f/ansible_k8s_payload.zip/ansible/module_utils/k8s/raw.py\\\",
    line 163, in flatten_list_kind\\nTypeError: 'NoneType' object is not iterable\\n\",\n
    \   \"module_stdout\": \"\",\n    \"msg\": \"MODULE FAILURE\\nSee stdout/stderr
    for the exact error\",\n    \"rc\": 1\n}```"
  metadata: not collected
  node_hi_idx: "-1"
  node_idx: "-1"
  node_low_idx: "-1"
  pod_hi_idx: "-1"
  pod_idx: "-1"
  pod_low_idx: "-1"
  state: Failed
  suuid: 7bfaa073
  system_metrics: Not collected
  uuid: 7bfaa073-dde5-5891-b166-44d92ab1089a

HughNhan (Collaborator) commented Dec 8, 2021

@Sispheor For a start, you may want to baby-step with the simpler mode, "pin=true", to get some familiarity. In your failed run you have "pin=false", which activates "scale" mode, where colocate, node_range, density_range, and step_size must all be valid. However, your CR contains the literal values "low, high, addN, log2", which are documentation placeholders, not valid parameters. Please read the scale description section before using scale mode. A corrected pinned-mode sketch follows below.
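
For illustration, a minimal pinned-mode CR might look like the sketch below. This is a hedged example, not the author's exact fix: the node names are placeholders, and all scale-mode keys (colocate, node_range, density_range, step_size) are omitted entirely rather than left as documentation placeholders.

apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: uperf-benchmark
  namespace: benchmark-operator
spec:
  workload:
    name: uperf
    args:
      serviceip: false
      hostnetwork: false
      networkpolicy: false
      multus:
        enabled: false
      kind: pod
      pin: true                       # pinned mode; scale-mode keys are omitted
      pin_server: "worker-0.example"  # placeholder node names
      pin_client: "worker-1.example"
      pair: 1
      samples: 1
      test_types:
        - stream
      protos:
        - tcp
      sizes:
        - 16384
      nthrs:
        - 1
      runtime: 30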

Sispheor commented Dec 8, 2021

Thanks for your answer. I actually took the default config from the docs to get a simple setup.

I updated the pin flag to true and removed the last flags that enable the scalability feature.

The error is not the same now.

spec:
  metadata:
    collection: false
    force: false
    image: quay.io/cloud-bulldozer/backpack:latest
    privileged: false
    serviceaccount: default
    ssl: false
    stockpileSkipTags: []
    stockpileTags:
    - common
    - k8s
    - openshift
    targeted: true
  workload:
    args:
      client_resources:
        limits:
          cpu: 500m
          memory: 500Mi
        requests:
          cpu: 500m
          memory: 500Mi
      hostnetwork: false
      kind: pod
      multus:
        enabled: false
      networkpolicy: false
      nthrs:
      - 1
      pair: 1
      pin: true
      pin_client: ocp4-03-worker-02.ocp4-03.gre.cloud4lab.local
      pin_server: ocp4-03-worker-01.ocp4-03.gre.cloud4lab.local
      protos:
      - tcp
      runtime: 30
      runtime_class: class_name
      samples: 1
      server_resources:
        limits:
          cpu: 500m
          memory: 500Mi
        requests:
          cpu: 500m
          memory: 500Mi
      serviceip: false
      sizes:
      - 16384
      test_types:
      - stream
    name: uperf
status:
  complete: true
  message: "Reconcile failed on Start Server(s) - total = eligible nodes * density
    k8s \n ```{\n    \"_ansible_no_log\": false,\n    \"_ansible_parsed\": true,\n
    \   \"changed\": false,\n    \"error\": 422,\n    \"exception\": \"  File \\\"/tmp/ansible_k8s_payload_gczwsn03/ansible_k8s_payload.zip/ansible/module_utils/k8s/raw.py\\\",
    line 326, in perform_action\\n    k8s_obj = resource.create(definition, namespace=namespace).to_dict()\\n
    \ File \\\"/usr/local/lib/python3.8/site-packages/openshift/dynamic/client.py\\\",
    line 101, in create\\n    return self.request('post', path, body=body, **kwargs)\\n
    \ File \\\"/usr/local/lib/python3.8/site-packages/openshift/dynamic/client.py\\\",
    line 44, in inner\\n    raise api_exception(e)\\n\",\n    \"failed\": true,\n
    \   \"invocation\": {\n        \"module_args\": {\n            \"api_key\": null,\n
    \           \"api_version\": \"v1\",\n            \"append_hash\": false,\n            \"apply\":
    false,\n            \"ca_cert\": null,\n            \"client_cert\": null,\n            \"client_key\":
    null,\n            \"context\": null,\n            \"force\": false,\n            \"host\":
    null,\n            \"kind\": null,\n            \"kubeconfig\": null,\n            \"merge_type\":
    null,\n            \"name\": null,\n            \"namespace\": null,\n            \"password\":
    null,\n            \"proxy\": null,\n            \"resource_definition\": {\n
    \               \"apiVersion\": \"v1\",\n                \"items\": [\n                    {\n
    \                       \"apiVersion\": \"batch/v1\",\n                        \"kind\":
    \"Job\",\n                        \"metadata\": {\n                            \"name\":
    \"uperf-server-ocp4-03-worker-01.ocp4-03.g-0-31cd7aed\",\n                            \"namespace\":
    \"benchmark-operator\"\n                        },\n                        \"spec\":
    {\n                            \"backoffLimit\": 0,\n                            \"template\":
    {\n                                \"metadata\": {\n                                    \"annotations\":
    {\n                                        \"node_idx\": \"0\",\n                                        \"pod_idx\":
    \"0\"\n                                    },\n                                    \"labels\":
    {\n                                        \"app\": \"uperf-bench-server-ocp4-03-worker-01.ocp4-03.g-0-31cd7aed\",\n
    \                                       \"benchmark-operator-role\": \"server\",\n
    \                                       \"benchmark-operator-workload\": \"uperf\",\n
    \                                       \"benchmark-uuid\": \"31cd7aed-dd89-58ac-a25b-a21c91fa5ec7\",\n
    \                                       \"type\": \"uperf-benchmark-bench-server-31cd7aed\"\n
    \                                   }\n                                },\n                                \"spec\":
    {\n                                    \"containers\": [\n                                        {\n
    \                                           \"args\": [\n                                                \"uperf
    -s -v -P 20000\"\n                                            ],\n                                            \"command\":
    [\n                                                \"/bin/sh\",\n                                                \"-c\"\n
    \                                           ],\n                                            \"image\":
    \"quay.io/cloud-bulldozer/uperf:latest\",\n                                            \"imagePullPolicy\":
    \"Always\",\n                                            \"name\": \"benchmark\",\n
    \                                           \"resources\": {\n                                                \"limits\":
    {\n                                                    \"cpu\": \"500m\",\n                                                    \"memory\":
    \"500Mi\"\n                                                },\n                                                \"requests\":
    {\n                                                    \"cpu\": \"500m\",\n                                                    \"memory\":
    \"500Mi\"\n                                                }\n                                            }\n
    \                                       }\n                                    ],\n
    \                                   \"nodeSelector\": {\n                                        \"kubernetes.io/hostname\":
    \"ocp4-03-worker-01.ocp4-03.gre.cloud4lab.local\"\n                                    },\n
    \                                   \"restartPolicy\": \"OnFailure\",\n                                    \"runtimeClassName\":
    \"class_name\"\n                                }\n                            },\n
    \                           \"ttlSecondsAfterFinished\": 600\n                        }\n
    \                   }\n                ],\n                \"kind\": \"List\",\n
    \               \"metadata\": {}\n            },\n            \"src\": null,\n
    \           \"state\": \"present\",\n            \"username\": null,\n            \"validate\":
    null,\n            \"validate_certs\": null,\n            \"wait\": false,\n            \"wait_condition\":
    null,\n            \"wait_sleep\": 5,\n            \"wait_timeout\": 120\n        }\n
    \   },\n    \"msg\": \"Failed to create object: b'{\\\"kind\\\":\\\"Status\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"metadata\\\":{},\\\"status\\\":\\\"Failure\\\",\\\"message\\\":\\\"Job.batch
    \\\\\\\\\\\"uperf-server-ocp4-03-worker-01.ocp4-03.g-0-31cd7aed\\\\\\\\\\\" is
    invalid: spec.template.spec.runtimeClassName: Invalid value: \\\\\\\\\\\"class_name\\\\\\\\\\\":
    a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters,
    \\\\'-\\\\' or \\\\'.\\\\', and must start and end with an alphanumeric character
    (e.g. \\\\'example.com\\\\', regex used for validation is \\\\'[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\\\\\\\\\\\\\\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*\\\\')\\\",\\\"reason\\\":\\\"Invalid\\\",\\\"details\\\":{\\\"name\\\":\\\"uperf-server-ocp4-03-worker-01.ocp4-03.g-0-31cd7aed\\\",\\\"group\\\":\\\"batch\\\",\\\"kind\\\":\\\"Job\\\",\\\"causes\\\":[{\\\"reason\\\":\\\"FieldValueInvalid\\\",\\\"message\\\":\\\"Invalid
    value: \\\\\\\\\\\"class_name\\\\\\\\\\\": a lowercase RFC 1123 subdomain must
    consist of lower case alphanumeric characters, \\\\'-\\\\' or \\\\'.\\\\', and
    must start and end with an alphanumeric character (e.g. \\\\'example.com\\\\',
    regex used for validation is \\\\'[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\\\\\\\\\\\\\\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*\\\\')\\\",\\\"field\\\":\\\"spec.template.spec.runtimeClassName\\\"}]},\\\"code\\\":422}\\\\n'\",\n
    \   \"reason\": \"Unprocessable Entity\",\n    \"status\": 422\n}```"

Sispheor commented Dec 8, 2021

I removed the runtime_class flag as well. That seems better. I really encourage you to always provide a basic, working default example out of the box 😁.
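
The 422 above comes from Kubernetes validating spec.template.spec.runtimeClassName as an RFC 1123 subdomain: class_name is only a documentation placeholder, and its underscore fails that validation. A hedged sketch of the two valid options (kata is only an example name; verify what your cluster actually provides with oc get runtimeclass):

args:
  # either omit runtime_class entirely, or name a RuntimeClass that exists:
  runtime_class: kata   # "kata" is an example only; must match a RuntimeClass object on the cluster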

Sispheor commented Dec 8, 2021

Same error as @MuhammadMunir12:

 TASK [Start Client(s) w/o serviceIP] ******************************** 
fatal: [localhost]: FAILED! => {
    "msg": "The task includes an option with an undefined variable. The error was: 'elasticsearch' is undefined\n\nThe error appears to be in '/opt/ansible/roles/uperf/tasks/start_client.yml': line 18, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n  ### <POD> kind\n  - name: Start Client(s) w/o serviceIP\n    ^ here\n"
}

And

 TASK [Failure State] ******************************** 
{"level":"error","ts":1638987352.7791166,"logger":"logging_event_handler","msg":"","name":"uperf-benchmark","namespace":"benchmark-operator","gvk":"ripsaw.cloudbulldozer.io/v1alpha1, Kind=Benchmark","event_type":"runner_on_failed","job":"1901631351628571046","EventData.Task":"Failure State","EventData.TaskArgs":"","EventData.FailedTaskPath":"/opt/ansible/roles/benchmark_state/tasks/failure.yml:2","error":"[playbook task failed]","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.2.0/zapr.go:132\ngitpro.ttaallkk.top/operator-framework/operator-sdk/internal/ansible/events.loggingEventHandler.Handle\n\t/workspace/internal/ansible/events/log_events.go:110"}
fatal: [localhost]: FAILED! => {
    "msg": "An unhandled exception occurred while templating '{'args': {'definition': \"{{ lookup('template', 'workload.yml.j2') | from_yaml }}\"}, 'action': 'k8s', 'async_val': 0, 'async': 0, 'changed_when': [], 'delay': 5, 'delegate_to': None, 'delegate_facts': None, 'failed_when': [], 'loop': None, 'loop_control': None, 'notify': None, 'poll': 15, 'register': None, 'retries': 3, 'until': [], 'loop_with': None, 'name': 'Start Client(s) w/o serviceIP', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {'resource_item': '{{ server_pods.resources }}'}, 'module_defaults': [], 'environment': [], 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': ['benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool', 'resource_kind == \"pod\"', 'benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool', 'resource_kind == \"pod\"', 'workload_args.serviceip|default(False) == False and server_pods.resources|length > 0'], 'tags': [], 'collections': [], 'uuid': '0a580a86-040d-b4c7-999a-00000000014b', 'finalized': False, 'squashed': False, 'parent': {'delegate_to': None, 'delegate_facts': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {}, 'module_defaults': [], 'environment': [], 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': ['benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool', 'resource_kind == \"pod\"'], 'collections': [], 'tags': [], 'dep_chain': [uperf], 'eor': False, 'role': {'delegate_to': None, 'delegate_facts': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {'workload_args': '{{ workload.args }}'}, 'module_defaults': None, 'environment': None, 'no_log': None, 'run_once': 
None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': [], 'tags': [], 'collections': [], 'uuid': '0a580a86-040d-b4c7-999a-000000000083', 'finalized': False, 'squashed': False, '_role_name': 'uperf', '_role_path': '/opt/ansible/roles/uperf', '_role_vars': {'cleanup': True, 'worker_node_list': [], 'pod_low_idx': '0', 'pod_hi_idx': '0', 'node_low_idx': '0', 'node_hi_idx': '0', 'node_idx': '0', 'pod_idx': '0', 'all_run_done': False}, '_role_params': {}, '_default_vars': {'resource_kind': \"{{ workload.args.kind | default('pod') }}\", 'uperf': {'proto': 'tcp', 'test_type': 'stream', 'nthr': 1, 'size': 1024, 'runtime': 60}}, '_had_task_run': {'localhost': True}, '_completed': {}, '_metadata': {'allow_duplicates': <ansible.playbook.attribute.FieldAttribute object at 0x7fd607d4bd00>, 'dependencies': <ansible.playbook.attribute.FieldAttribute object at 0x7fd607d4bd30>}, '_dependencies': [], '_parents': []}, 'parent': {'static': None, 'args': {'_raw_params': 'start_client.yml'}, 'action': 'include_tasks', 'async_val': 0, 'async': 0, 'changed_when': [], 'delay': 5, 'delegate_to': None, 'delegate_facts': None, 'failed_when': [], 'loop': None, 'loop_control': None, 'notify': None, 'poll': 15, 'register': None, 'retries': 3, 'until': [], 'loop_with': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {}, 'module_defaults': [], 'environment': [], 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': ['benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool', 'resource_kind == \"pod\"', 'benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool', 'resource_kind == \"pod\"', 'benchmark_state.resources[0].status.state == \"Starting Clients\"'], 'tags': [], 'collections': [], 'uuid': '0a580a86-040d-b4c7-999a-00000000008a', 'finalized': False, 'squashed': False, 'parent': {'delegate_to': None, 'delegate_facts': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {}, 'module_defaults': [], 'environment': [], 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': 
False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': ['benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool', 'resource_kind == \"pod\"'], 'collections': [], 'tags': [], 'dep_chain': [uperf], 'eor': False, 'role': {'delegate_to': None, 'delegate_facts': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {'workload_args': '{{ workload.args }}'}, 'module_defaults': None, 'environment': None, 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': [], 'tags': [], 'collections': [], 'uuid': '0a580a86-040d-b4c7-999a-000000000083', 'finalized': False, 'squashed': False, '_role_name': 'uperf', '_role_path': '/opt/ansible/roles/uperf', '_role_vars': {'cleanup': True, 'worker_node_list': [], 'pod_low_idx': '0', 'pod_hi_idx': '0', 'node_low_idx': '0', 'node_hi_idx': '0', 'node_idx': '0', 'pod_idx': '0', 'all_run_done': False}, '_role_params': {}, '_default_vars': {'resource_kind': \"{{ workload.args.kind | default('pod') }}\", 'uperf': {'proto': 'tcp', 'test_type': 'stream', 'nthr': 1, 'size': 1024, 'runtime': 60}}, '_had_task_run': {'localhost': True}, '_completed': {}, '_metadata': {'allow_duplicates': <ansible.playbook.attribute.FieldAttribute object at 0x7fd607d4bd00>, 'dependencies': <ansible.playbook.attribute.FieldAttribute object at 0x7fd607d4bd30>}, '_dependencies': [], '_parents': []}, 'parent': {'allow_duplicates': True, 'public': False, 'static': None, 'args': {'name': '{{ workload.name }}'}, 'action': 'include_role', 'async_val': 0, 'async': 0, 'changed_when': [], 'delay': 5, 'delegate_to': None, 'delegate_facts': None, 'failed_when': [], 'loop': None, 'loop_control': None, 'notify': None, 'poll': 15, 'register': None, 'retries': 3, 'until': [], 'loop_with': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {'workload_args': '{{ workload.args }}'}, 'module_defaults': [], 'environment': [], 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': ['benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool', 'benchmark_state is defined and benchmark_state.resources[0].status is defined and not 
benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool'], 'tags': [], 'collections': [], 'uuid': '0a580a86-040d-b4c7-999a-00000000001a', 'finalized': False, 'squashed': False, 'parent': {'delegate_to': None, 'delegate_facts': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {}, 'module_defaults': [], 'environment': [], 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': ['benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")', 'metadata is not defined or not metadata.collection | default(\\'false\\') | bool or (benchmark_state.resources[0].status.metadata is defined and benchmark_state.resources[0].status.metadata == \"Complete\") or metadata.targeted | default(\\'true\\') | bool'], 'collections': ['operator_sdk.util', 'ansible.legacy'], 'tags': [], 'dep_chain': None, 'eor': False, 'parent': {'delegate_to': None, 'delegate_facts': None, 'name': 'Run Workload', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {}, 'module_defaults': None, 'environment': None, 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': ['benchmark_state is defined and benchmark_state.resources[0].status is defined and not benchmark_state.resources[0].status.complete|bool and (benchmark_state.resources[0].status.state is not defined or benchmark_state.resources[0].status.state != \"Error\")'], 'collections': ['operator_sdk.util', 'ansible.legacy'], 'tags': [], 'dep_chain': None, 'eor': False}, 'parent_type': 'Block'}, 'parent_type': 'Block'}, 'parent_type': 'IncludeRole'}, 'parent_type': 'Block', 'role': {'delegate_to': None, 'delegate_facts': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {'workload_args': '{{ workload.args }}'}, 'module_defaults': None, 'environment': None, 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': [], 'tags': [], 'collections': [], 'uuid': '0a580a86-040d-b4c7-999a-000000000083', 'finalized': False, 'squashed': False, '_role_name': 'uperf', '_role_path': '/opt/ansible/roles/uperf', '_role_vars': {'cleanup': True, 'worker_node_list': [], 'pod_low_idx': '0', 'pod_hi_idx': '0', 'node_low_idx': '0', 'node_hi_idx': '0', 'node_idx': '0', 'pod_idx': '0', 'all_run_done': False}, '_role_params': {}, '_default_vars': 
{'resource_kind': \"{{ workload.args.kind | default('pod') }}\", 'uperf': {'proto': 'tcp', 'test_type': 'stream', 'nthr': 1, 'size': 1024, 'runtime': 60}}, '_had_task_run': {'localhost': True}, '_completed': {}, '_metadata': {'allow_duplicates': <ansible.playbook.attribute.FieldAttribute object at 0x7fd607d4bd00>, 'dependencies': <ansible.playbook.attribute.FieldAttribute object at 0x7fd607d4bd30>}, '_dependencies': [], '_parents': []}}, 'parent_type': 'TaskInclude'}, 'parent_type': 'Block', 'role': {'delegate_to': None, 'delegate_facts': None, 'name': '', 'connection': 'smart', 'port': None, 'remote_user': None, 'vars': {'workload_args': '{{ workload.args }}'}, 'module_defaults': None, 'environment': None, 'no_log': None, 'run_once': None, 'ignore_errors': None, 'ignore_unreachable': None, 'check_mode': False, 'diff': False, 'any_errors_fatal': False, 'throttle': 0, 'debugger': None, 'become': False, 'become_method': 'sudo', 'become_user': None, 'become_flags': None, 'become_exe': None, 'when': [], 'tags': [], 'collections': [], 'uuid': '0a580a86-040d-b4c7-999a-000000000083', 'finalized': False, 'squashed': False, '_role_name': 'uperf', '_role_path': '/opt/ansible/roles/uperf', '_role_vars': {'cleanup': True, 'worker_node_list': [], 'pod_low_idx': '0', 'pod_hi_idx': '0', 'node_low_idx': '0', 'node_hi_idx': '0', 'node_idx': '0', 'pod_idx': '0', 'all_run_done': False}, '_role_params': {}, '_default_vars': {'resource_kind': \"{{ workload.args.kind | default('pod') }}\", 'uperf': {'proto': 'tcp', 'test_type': 'stream', 'nthr': 1, 'size': 1024, 'runtime': 60}}, '_had_task_run': {'localhost': True}, '_completed': {}, '_metadata': {'allow_duplicates': <ansible.playbook.attribute.FieldAttribute object at 0x7fd607d4bd00>, 'dependencies': <ansible.playbook.attribute.FieldAttribute object at 0x7fd607d4bd30>}, '_dependencies': [], '_parents': []}}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while running the lookup plugin 'template'. Error was a <class 'ansible.errors.AnsibleError'>, original message: the template file workload.yml.j2 could not be found for the lookup"
}


HughNhan (Collaborator) commented Dec 8, 2021

@Sispheor FYI, I am out of the office for a couple of days and will not be able to help you until I am back. I hope you can figure it out yourself, or someone else will. With a few corrections to your CR, it will work.

Sispheor commented Dec 9, 2021

I've added the Elasticsearch URL, but there is still an issue:

the template file workload.yml.j2 could not be found for the lookup
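
For the earlier "'elasticsearch' is undefined" failure, the uperf role in this operator version appears to reference an elasticsearch variable, which the Benchmark can supply at the spec level, alongside workload. A hedged sketch with a placeholder URL:

spec:
  elasticsearch:
    url: http://my-es-instance.example.com:9200   # placeholder; point at your ES endpoint
  workload:
    name: uperf

The remaining workload.yml.j2 lookup failure suggests the deployed operator image (ripsaw.v0.1.0 from OperatorHub) is out of sync with the master-branch docs the CR was copied from, which the next reply addresses.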

jtaleric (Member) commented

I removed the runtime_class flag as well. That seems better. I really encourage you to always provide a basic, working default example out of the box 😁.

How are you installing the operator? I would recommend not using OperatorHub, but the install method documented in the GitHub repo.

As a quick example, see: https://github.com/cloud-bulldozer/benchmark-operator/blob/master/tests/test_uperf.sh

That is what runs in our CI with each PR.
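
A minimal sketch of that install path, assuming the Makefile targets documented in the repo's README (namespace, image, and sample paths may differ by version):

git clone https://github.com/cloud-bulldozer/benchmark-operator.git
cd benchmark-operator
# deploy the operator from the checked-out tree, so roles and docs stay in sync
make deploy
# then apply your Benchmark CR
oc apply -f my-uperf-cr.yaml   # hypothetical filename; use your own CR file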

We also have some scripting to automate the entire install/run/cleanup process.

Also, if you have ideas on how to make the docs better, happy to see a PR 😃
