
Allow other location account #104

Merged: 7 commits from allow_other_location_account into cloud-bulldozer:master on Jan 28, 2020

Conversation

@bengland2 (Contributor)

Snafu CI was tested on my laptop running Fedora 31 and minikube 1.6.1. I was able to run the CI with docker (except for ycsb, not sure why yet), but not with podman, though that may just be a problem on my laptop rather than a general one. I also updated README.md to describe how the CI works and how to test the CI for your wrapper. Do not test this without ripsaw PR 260 (which allows the container image location and account to be overridden by the user). This should address some of snafu issue 90 (lack of user documentation).

- let the user run against any container image server and account
- let the user use either docker or podman
- share more common code across all wrappers (build_wrapper_image, sketched below)
- ensure the ripsaw namespace goes away before each wrapper test
- describe how to run the tests in README.md
- now I can actually reproduce CI failures on my laptop to fix them
- log where the image was pulled from, so we know it came from the location and account that we specify
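
A minimal sketch of what the shared build_wrapper_image helper boils down to, using the docker/podman split shown in the ci/common.sh diff later in this thread; the environment variable names (SNAFU_IMAGE_LOCATION, SNAFU_IMAGE_ACCOUNT, SNAFU_IMAGE_BUILDER) are illustrative, not necessarily the exact names this PR uses:

#!/bin/bash
# sketch only: build and push one wrapper image to a user-chosen registry and account
image_location=${SNAFU_IMAGE_LOCATION:-quay.io}      # registry server
image_account=${SNAFU_IMAGE_ACCOUNT:-rht_perf_ci}    # account/organization on that server
image_builder=${SNAFU_IMAGE_BUILDER:-docker}         # "docker" or "podman"

build_wrapper_image() {
    local wrapper_dir=$1 image_name=$2
    local image_spec=${image_location}/${image_account}/${image_name}:snafu_ci
    if [ "$image_builder" = "docker" ] ; then
        docker build --tag "$image_spec" -f "$wrapper_dir/Dockerfile" . && docker push "$image_spec"
    else
        buildah bud --tag "$image_spec" -f "$wrapper_dir/Dockerfile" . && podman push "$image_spec"
    fi
}

# e.g. "build_wrapper_image fs_drift_wrapper fs-drift" pushes quay.io/rht_perf_ci/fs-drift:snafu_ci
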
@bengland2 (Contributor, Author)

Here's what happened when I ran it:

[bengland@bene-laptop-6 snafu]$ awk '/PASS/||/FAIL/' full_ci.log

  echo 'cluster_loader | FAIL | 00:00:00'
  echo 'fio_wrapper | PASS | 00:04:08'
  echo 'fs_drift_wrapper | PASS | 00:03:59'
  echo 'hammerdb | PASS | 00:08:54'
  echo 'iperf | PASS | 00:04:04'
  echo 'pgbench-wrapper | PASS | 00:15:16'
  echo 'smallfile_wrapper | FAIL | 00:04:22'
  echo 'sysbench | PASS | 00:03:46'
  echo 'uperf-wrapper | PASS | 00:04:47'
  echo 'ycsb-wrapper | PASS | 00:16:40'

I don't know why cluster_loader fails; I didn't touch it.
smallfile_wrapper fails correctly because of a known problem with PR 99, where it doesn't update the ripsaw-smallfile-rsptimes index; I'm working on that, but it has nothing to do with this PR.
I'm not sure why ycsb-wrapper passed, given all the errors I saw in the logs, but whatever.

@aakarshg added the "ok to test" (Kick off our CI framework) label on Dec 17, 2019
@aakarshg (Contributor)

/rerun all

@aakarshg requested a review from dry923 on December 17, 2019 09:33
@rht-perf-ci

Results for SNAFU CI Test

Test Result Runtime
cluster_loader FAIL 00:00:00
fio_wrapper FAIL 00:14:15
fs_drift_wrapper PASS 00:06:48
hammerdb FAIL 00:10:32
iperf FAIL 00:06:58
pgbench-wrapper FAIL 00:11:17
smallfile_wrapper FAIL 00:07:20
sysbench FAIL 00:05:40
uperf-wrapper PASS 00:06:41
ycsb-wrapper FAIL 00:10:10

ci/common.sh (outdated diff):
if [ "$image_builder" = "docker" ] ; then
docker build --tag=$image_spec -f $wrapper_dir/Dockerfile . && docker push $image_spec
elif [ "$image_builder" = "podman" ] ; then
buildah bud --tag $image_spec -f $wrapper_dir/Dockerfile . && podman push $image_spec
(review comment from a Member)

This is causing an issue in the CI env

+ buildah bud --tag quay.io/rht_perf_ci/fs-drift:snafu_ci -f fs_drift_wrapper/Dockerfile .
ci/common.sh: line 48: buildah: command not found

We already build the image with the operator-sdk build command on line 21. Why are we trying to build it again? It should just be a podman/docker push

@bengland2 (Contributor, Author) replied:

The ripsaw image is built with operator-sdk, not the snafu wrapper images.

@aakarshg (Contributor) commented on Jan 3, 2020

/rerun all

@bengland2 (Contributor, Author)

Looking at the CI log, I see that buildah is not installed on the CI system. Is podman supported? If so, we should install the buildah package on the CI system. The intent of the commit was to allow either docker-built or CRI-O-built containers to be used; that's what the image_builder var was for.
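
If podman is to be supported, installing the missing tools on a Fedora or RHEL CI host is a one-liner, for example:

# buildah and podman are packaged separately on Fedora and RHEL 8
sudo dnf install -y podman buildah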

@rht-perf-ci

Results for SNAFU CI Test

Test Result Runtime
cluster_loader FAIL 00:00:00
fio_wrapper FAIL 00:14:15
fs_drift_wrapper PASS 00:06:49
hammerdb FAIL 00:05:25
iperf FAIL 00:06:54
pgbench-wrapper FAIL 00:11:25
smallfile_wrapper FAIL 00:07:18
sysbench FAIL 00:05:39
uperf-wrapper PASS 00:06:18
ycsb-wrapper FAIL 00:10:11

@aakarshg (Contributor) commented on Jan 3, 2020

/rerun all

@rht-perf-ci

Results for SNAFU CI Test

Test Result Runtime
cluster_loader FAIL 00:00:00
fio_wrapper FAIL 00:16:29
fs_drift_wrapper PASS 00:09:07
hammerdb FAIL 00:07:34
iperf FAIL 00:08:22
pgbench-wrapper FAIL 00:11:20
smallfile_wrapper FAIL 00:09:47
sysbench FAIL 00:07:22
uperf-wrapper PASS 00:08:40
ycsb-wrapper FAIL 00:17:07

@bengland2 (Contributor, Author)

@aakarshg I think I figured something out about this: all podman and buildah commands must be run with sudo. I made changes for this and now it works; docker does not require this because docker runs as a service. I will commit a change for this in ci/common.sh and retry the CI once I've run it successfully on my laptop.
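
A sketch of that workaround; $SUDO is defined so that it disappears when the CI already runs as root, and the docker path is untouched (this is not necessarily the exact ci/common.sh change):

# sketch only: run buildah/podman through sudo for non-root users
if [ "$(id -u)" -eq 0 ] ; then
    SUDO=""
else
    SUDO=sudo
fi
# the podman branch of the image build then becomes:
#   $SUDO buildah bud --tag "$image_spec" -f "$wrapper_dir/Dockerfile" . && $SUDO podman push "$image_spec"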

@bengland2 (Contributor, Author)

I want to get ripsaw PR 260 merged first; then this will go better.

@aakarshg requested a review from dry923 on January 8, 2020 09:45
@aakarshg (Contributor) commented on Jan 8, 2020

/rerun all

@rht-perf-ci

Results for SNAFU CI Test

Test Result Runtime
fio_wrapper FAIL 00:17:25
fs_drift_wrapper PASS 00:10:39
hammerdb FAIL 00:08:45
iperf FAIL 00:09:22
pgbench-wrapper FAIL 00:11:48
smallfile_wrapper FAIL 00:10:44
sysbench FAIL 00:08:03
uperf-wrapper PASS 00:10:26
ycsb-wrapper FAIL 00:22:49

@aakarshg (Contributor) commented on Jan 9, 2020

/rerun all

@rht-perf-ci

Results for SNAFU CI Test

Test Result Runtime
fio_wrapper FAIL 00:18:04
fs_drift_wrapper PASS 00:09:56
hammerdb FAIL 00:08:17
iperf FAIL 00:09:17
pgbench-wrapper FAIL 00:11:43
smallfile_wrapper FAIL 00:10:44
sysbench FAIL 00:08:06
uperf-wrapper PASS 00:09:23
ycsb-wrapper FAIL 00:25:20

@bengland2 (Contributor, Author)

I just rebased to use Raul's new ci/common.sh build_and_push routine.

@rht-perf-ci

Results for SNAFU CI Test

Test Result Runtime
fio_wrapper FAIL 00:15:06
fs_drift_wrapper PASS 00:09:10
hammerdb FAIL 00:08:49
iperf FAIL 00:07:29
pgbench-wrapper FAIL 00:12:19
smallfile_wrapper FAIL 00:08:14
sysbench FAIL 00:06:13
uperf-wrapper PASS 00:07:07
ycsb-wrapper FAIL 00:10:56

@aakarshg (Contributor)

CI is failing hard on this PR because the individual workload images aren't accessible. I see the following error:

trying and failing to pull image

So I checked the images in the rht_perf_ci namespace; most of them had been uploaded for the first time, since we had been using the cloud-bulldozer namespace even for CI, and new repositories are private by default (quay.io :/). I made them public, so I will hit rerun now; hopefully that fixes it.

@aakarshg (Contributor)

/rerun all

@rht-perf-ci

Results for SNAFU CI Test

Test Result Runtime
fio_wrapper PASS 00:06:50
fs_drift_wrapper PASS 00:07:23
hammerdb FAIL 00:08:57
iperf PASS 00:06:00
pgbench-wrapper PASS 00:07:11
smallfile_wrapper PASS 00:06:32
sysbench PASS 00:05:33
uperf-wrapper PASS 00:07:15
ycsb-wrapper PASS 00:05:52

@aakarshg (Contributor) left a comment:

LGTM. Will merge once @dry923 approves as well.

@dry923 (Member) commented on Jan 13, 2020

LGTM; however, why is the hammerdb CI job failing?

@bengland2 (Contributor, Author)

hammerdb won't run in my minikube. One issue for me was the "image:" lines in ripsaw/roles/hammerdb/templates/db_workload.yml.j2, where Marko Karg's image was the default rather than cloud-bulldozer's. This is exactly why I wrote this PR: so that we wouldn't have to edit the benchmark YAMLs to run them in our private minikube. @mkarg75 can you fix the image: in db_workload.yml.j2?

@dry923 (Member) commented on Jan 13, 2020

@bengland2 ah, OK, that makes sense.

@mkarg75 is there any reason we couldn't change the ripsaw role to point to hammerdb:master instead of latest, to fall in line with the others and make the snafu CI a little more predictable?

@bengland2 (Contributor, Author)

@mkarg75 @dry923 when I run it in my minikube, I get:

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  <unknown>          default-scheduler  Successfully assigned my-ripsaw/hammerdb-workload-c036ef13-9xcxj to minikube
...
  Normal   Pulling    6s (x4 over 89s)   kubelet, minikube  Pulling image "quay.io/mkarg/hammerdb:snafu_ci"
  Warning  Failed     5s (x4 over 88s)   kubelet, minikube  Failed to pull image "quay.io/mkarg/hammerdb:snafu_ci": rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/mkarg/hammerdb:snafu_ci not found: manifest unknown: manifest unknown
  Warning  Failed     5s (x4 over 88s)   kubelet, minikube  Error: ErrImagePull

This is because it changes the tag to snafu_ci but leaves the account as mkarg, so no such image exists on quay.io; that is why it doesn't run. The same thing should be happening on the CI server, because it is still trying to use quay.io/mkarg/hammerdb:snafu_ci, which does not exist.

Marko's PR should help; the remaining thing is to have my PR edit the YAML file to change

quay.io/cloud-bulldozer/hammerdb:latest ==> quay.io/rht_perf_ci/hammerdb:snafu_ci
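
A sketch of how the CI could script that edit instead of hand-editing the file; the file path and the variable name here are placeholders:

# sketch only: rewrite the hammerdb image reference in the benchmark YAML before applying it
ci_account=${SNAFU_IMAGE_ACCOUNT:-rht_perf_ci}   # illustrative variable name
sed -i "s#quay.io/cloud-bulldozer/hammerdb:latest#quay.io/${ci_account}/hammerdb:snafu_ci#" \
    path/to/hammerdb_benchmark_cr.yaml           # placeholder path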

I found this syntax error in tests/mssql.yaml:

+ kubectl delete -f tests/mssql.yaml
namespace "sql-server" deleted
service "mssql-deployment" deleted
error: unable to recognize "tests/mssql.yaml": no matches for kind "Deployment" in version "apps/v1beta1"
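
The apps/v1beta1 Deployment API is not served by newer Kubernetes versions, so tests/mssql.yaml would need a fix along these lines (apps/v1 also requires an explicit spec.selector matching the pod template labels); a sketch, not the actual ripsaw change:

# sketch only: move the Deployment to the apps/v1 API and sanity-check the manifest
sed -i 's#apiVersion: apps/v1beta1#apiVersion: apps/v1#' tests/mssql.yaml
kubectl apply --dry-run -f tests/mssql.yaml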

Also, I see that it never starts the sql-server pod to create the database

++ mssql_pod=
++ kubectl wait --for=condition=Ready pods/ --namespace sql-server --timeout=300s
error: arguments in resource/name form must have a single resource and name

Also, it appears that the test does not report an error even when the snafu wrapper fails to run:

++ kubectl logs hammerdb-workload-c036ef13-9xcxj --namespace my-ripsaw
++ grep 'SEQUENCE COMPLETE'
Error from server (BadRequest): container "hammerdb" in pod "hammerdb-workload-c036ef13-9xcxj" is waiting to start: trying and failing to pull image
++ echo 'Hammerdb test: Success'
Hammerdb test: Success
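
A sketch of how that check could be made to actually fail the test; the pod name here is just the one from this run:

# sketch only: fail the wrapper test unless the completion marker is really in the logs
hammerdb_pod=hammerdb-workload-c036ef13-9xcxj    # illustrative; the CI script looks this up
if kubectl logs "$hammerdb_pod" --namespace my-ripsaw | grep -q 'SEQUENCE COMPLETE' ; then
    echo 'Hammerdb test: Success'
else
    echo 'Hammerdb test: Failure'
    exit 1
fi
# if kubectl logs itself errors out, grep sees no input and the test fails as well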

@mkarg75 (Contributor) commented on Jan 14, 2020

I'm looking into the issues Ben faced.

@mkarg75 (Contributor) commented on Jan 14, 2020

Hm, I've tried the same steps as Ben:

x1:~/git/ripsaw/tests (master)$ oc create -f mssql.yaml 
namespace/sql-server created
deployment.apps/mssql-deployment created
service/mssql-deployment created
 
x1:~/git/ripsaw/tests (master)$ oc delete -f mssql.yaml 
namespace "sql-server" deleted
deployment.apps "mssql-deployment" deleted
service "mssql-deployment" deleted

So there is a namespace, a deployment and a service created and deleted for me. And the pod is also running for me:

x1:~/git/ripsaw/tests (master)$ oc get pods -n sql-server
NAME                                READY   STATUS    RESTARTS   AGE
mssql-deployment-76b45554d5-b86d6   1/1     Running   0          5s

As long as I'm pointing to my hammerdb image, the DB initialization and tests work fine too, so something must be missing in the quay.io/cloud-bulldozer/hammerdb:master image.

@aakarshg (Contributor)

(quoting @mkarg75's comment above in full)

@mkarg75 can you please take a look at the Dockerfile and see what's missing between your image and the one in the cloud-bulldozer org?

@mkarg75 (Contributor) commented on Jan 14, 2020

I just started a test run on my minishift instance:

x1:~/git/snafu/hammerdb (master)$ oc get pods
NAME                                  READY   STATUS    RESTARTS   AGE
benchmark-operator-6c578b9b6d-rqzdz   3/3     Running   0          6m
hammerdb-workload-c718b6c4-zcgvp      1/1     Running   0          1m

x1:~/git/snafu/hammerdb (master)$ oc describe pod hammerdb-workload-c718b6c4-zcgvp | grep image
  Normal  Pulled     100s  kubelet, localhost  Container image "quay.io/cloud-bulldozer/hammerdb:master" already present on machine

So that's the pod from the cloud-bulldozer repo.
Inside the pod everything looks fine to me:

x1:~/git/ripsaw (master)$ oc rsh hammerdb-workload-c718b6c4-zcgvp
sh-4.2$ ps afx
  PID TTY      STAT   TIME COMMAND
   20 ?        Ss     0:00 /bin/sh
   25 ?        R+     0:00  \_ ps afx
    1 ?        Ss     0:00 /bin/sh -c /usr/local/bin/uid_entrypoint; export es_server=marquez.perf.lab.eng.rdu2.redhat.com; export es_port=9200; export uuid=c718b6c4-b858-5895-ae4c-f1bdcb77d2
   10 ?        S      0:00 python /opt/snafu/hammerdb/hammerd_wrapper.py
   15 ?        S      0:00  \_ /bin/sh -c cd /hammer; ./hammerdbcli auto /workload/tpcc-workload.tcl
   16 ?        Sl     0:01      \_ ./bin/tclsh8.6 ./hammerdbcli auto /workload/tpcc-workload.tcl

sh-4.2$ tail -f /tmp/hammerdb.log 
Hammerdb Log @ Tue Jan 14 13:28:04 UTC 2020
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
Vuser 1:Beginning rampup time of 1 minutes
Vuser 2:Processing 20000 transactions with output suppressed...
Vuser 1:Rampup 1 minutes complete ...
Vuser 1:Rampup complete, Taking start Transaction Count.
Vuser 1:Timing test period of 30 in minutes
Vuser 1:1 ...,

The only thing that is probably different from Ben's setup is that I've commented out the metadata_collection, as that was causing issues.

@bengland2 (Contributor, Author)

@mkarg75, it's useful to see how you check that it's running right. I noticed you are using "oc", not "kubectl"; are you running OpenShift or minikube? How much disk space is needed for the benchmark to run? I'll try running it in my OpenShift cluster and see if it's any different.

@dry923 what version of minikube and kubectl are you using? (Check with the "kubectl version" command.) I'm using minikube v1.6.1 on Fedora 31; my client version is 1.16 and my server version is 1.17.

@bengland2 (Contributor, Author)

@mkarg75 I don't use minishift anymore because of severe problems getting CI tests to pass with it; I use minikube instead. However, with an OpenShift cluster the mssql.yaml was accepted, so perhaps kube 1.17 doesn't like it? I can downgrade my minikube to 1.16. However, there are other problems.

++ kubectl wait --for=condition=complete -l app=hammerdb_workload-5877cd5f --namespace my-ripsaw jobs --timeout=300s
error: timed out waiting for the condition on jobs/hammerdb-workload-5877cd5f
++ grep 'SEQUENCE COMPLETE'
++ kubectl logs hammerdb-workload-5877cd5f-mv5p5 --namespace my-ripsaw
++ echo 'Hammerdb test: Success'
Hammerdb test: Success

If we get a timeout waiting for the hammerdb pod to complete, then I don't think the test was a success:

The workload pod exited in an error status:

# oc describe pod/hammerdb-workload-35555ccb-8b2ds
...
containers:
  hammerdb:
    Container ID:  cri-o://68232679ee715bccb263e402e9e508dcd97179f94890b37ddee72cfe5379c4d7
    Image:         quay.io/bengland2/hammerdb:snafu_ci
    Image ID:      quay.io/bengland2/hammerdb@sha256:75a6877340efc3caf68905261ee3645e944e7f6a02057bf9d096a0b2801521b6
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
    Args:
      /usr/local/bin/uid_entrypoint; export es_server=snafu@ripsaw:ec2-54-201-210-248.us-west-2.compute.amazonaws.com; export es_port=9200; export uuid=35555ccb-0f08-5ac9-8b9c-f5ccdf93652a; export db_server=mssql-deployment.sql-server; export db_port=1433; export db_warehouses=1; export db_num_workers=1; export db_tcp=true; export db_user=SA; export transactions=20000; export test_type=tpc-c; export runtime=30; export rampup=1; export samples=1; export timed_test=True; cd /hammer; /opt/snafu/hammerdb/hammerd_wrapper.py
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 14 Jan 2020 10:24:11 -0500
      Finished:     Tue, 14 Jan 2020 10:24:19 -0500

The reason for this was:

[bengland@bene-laptop-6 ripsaw]$ oc logs hammerdb-workload-35555ccb-8b2ds
Traceback (most recent call last):
  File "/opt/snafu/hammerdb/hammerd_wrapper.py", line 180, in <module>
    sys.exit(main())
  File "/opt/snafu/hammerdb/hammerd_wrapper.py", line 174, in main
    raise Exception('failed to produce hammerdb results document')
Exception: failed to produce hammerdb results document

Which resulted from this patch:

diff --git a/hammerdb/hammerd_wrapper.py b/hammerdb/hammerd_wrapper.py
index 2896763..2afaa92 100755
--- a/hammerdb/hammerd_wrapper.py
+++ b/hammerdb/hammerd_wrapper.py
@@ -170,6 +170,8 @@ def main():
     if es_server != "" :
         if len(documents) > 0 :
             _index_result("ripsaw-hammerdb-results", es_server, es_port, documents)
+        else:
+            raise Exception('failed to produce hammerdb results document')

This patch would make it easier to diagnose such problems.
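
Similarly, the CI driver could treat the wait timeout itself as a failure instead of falling through to the success message; a sketch, not the actual ci script:

# sketch only: abort when the hammerdb job never reaches completion
label='app=hammerdb_workload-5877cd5f'           # illustrative; the CI derives this from the run uuid
if ! kubectl wait --for=condition=complete -l "$label" \
        --namespace my-ripsaw jobs --timeout=300s ; then
    echo 'Hammerdb test: Failure (job did not complete)'
    exit 1
fi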

@mkarg75 (Contributor) commented on Jan 14, 2020

These issues are all new to me; I'm inclined to say that they mostly originate from the move to stockpile.
I've been running my tests on OpenShift as well as on minishift, which is why I used oc.
I will update you once I know more.

@bengland2 (Contributor, Author)

I am retrying with minikube-1.5.2-0.x86_64 (which uses k8s 1.16) on my laptop, with just hammerdb.

Commit added to allow_other_location_account: rebase in light of numerous merged PRs
@dry923 (Member) commented on Jan 20, 2020

/rerun all

@rht-perf-ci

Results for SNAFU CI Test

Test Result Runtime
fio_wrapper FAIL 00:00:36
fs_drift_wrapper FAIL 00:01:46
hammerdb FAIL 00:03:23
iperf FAIL 00:00:25
pgbench-wrapper FAIL 00:01:18
smallfile_wrapper FAIL 00:02:57
sysbench FAIL 00:00:35
uperf-wrapper FAIL 00:01:00
ycsb-wrapper FAIL 00:00:41

@rht-perf-ci

Results for SNAFU CI Test

Test Result Runtime
fio_wrapper FAIL 00:01:56
fs_drift_wrapper FAIL 00:01:55
hammerdb FAIL 00:02:00
iperf FAIL 00:00:37
pgbench-wrapper FAIL 00:02:00
smallfile_wrapper FAIL 00:01:55
sysbench FAIL 00:00:48
uperf-wrapper FAIL 00:01:43
ycsb-wrapper FAIL 00:07:44

@dry923 (Member) commented on Jan 23, 2020

/rerun all

@rht-perf-ci

Results for SNAFU CI Test

Test Result Runtime
fio_wrapper PASS 00:06:47
fs_drift_wrapper PASS 00:05:22
hammerdb PASS 00:15:21
iperf PASS 00:03:55
pgbench-wrapper PASS 00:17:14
smallfile_wrapper PASS 00:05:44
sysbench PASS 00:03:59
uperf-wrapper PASS 00:06:14
ycsb-wrapper PASS 00:31:26

@dry923 (Member) commented on Jan 23, 2020

@bengland2 looks good now. If you can fix the conflicts we should be good to go :-)

@bengland2 (Contributor, Author)

@dry923 I see no conflicts reported by git or GitHub at this time.

@dry923 (Member) commented on Jan 24, 2020

@bengland2 github is showing me:
"This branch cannot be rebased due to conflicts
Rebasing the commits of this branch on top of the base branch cannot be performed automatically due to conflicts encountered while reapplying the individual commits from the head branch."

@bengland2 (Contributor, Author)

That's not what GitHub is showing me at this URL. Are you somehow looking at your fork of snafu rather than cloud-bulldozer/snafu?

Screenshot from 2020-01-24 22-15-06

@dry923 (Member) commented on Jan 24, 2020

@bengland2 that's very strange. I'm at the same URL and it's showing the rebase-conflicts message.

@aakarshg are you showing this as having conflicts or good to merge? Ben and I are seeing different things :-/

@aakarshg (Contributor)

@bengland2 @dry923 I see the conflicts too:

This branch cannot be rebased due to conflicts
Rebasing the commits of this branch on top of the base branch cannot be performed automatically due to conflicts encountered while reapplying the individual commits from the head branch.

@bengland2 maybe you'll need to hit refresh once?

@bengland2 (Contributor, Author)

@aakarshg @dry923 I just refreshed the page for the second time (the first was before my last post) and I still see "This branch has no conflicts with the base branch"! Not only that, but git itself believes this. This totally contradicts my entire experience with GitHub and git; there should be conflicts because of the merged PRs! So I just cancelled this entire pull request, saved the patches, and will close this PR and start over, because I do not understand what is going on here. It's discouraging, since I put so much time into this PR, but there is no other way forward.

[bengland@localhost snafu]$ git status
On branch allow_other_location_account
Untracked files:
...
nothing added to commit but untracked files present (use "git add" to track)
[bengland@localhost snafu]$ git pull https://github.com/cloud-bulldozer/snafu
From https://github.com/cloud-bulldozer/snafu
 * branch            HEAD       -> FETCH_HEAD
Already up to date.
[bengland@localhost snafu]$ more .git/config 
...
[remote "origin"]
	url = https://github.com/bengland2/snafu
	fetch = +refs/heads/*:refs/remotes/origin/*
...

@bengland2 (Contributor, Author)

I deleted my snafu directory tree, re-cloned my fork from bengland2/snafu, checked out the branch, and re-pulled from cloud-bulldozer/snafu; still no conflicts! That's why I haven't closed it yet.

@rsevilla87 do you think it is worth the effort to resurrect this PR? Would you do it differently? All I wanted was a way to avoid constantly editing the YAML by hand to insert a different quay.io account name, plus a README.md that explains what a developer has to do to develop a snafu wrapper. Thanks for any ideas you can provide.

@rsevilla87 (Member)

@bengland2 I like the idea behind this PR; we really need it. I would like to avoid using sudo, though I understand it is the only way to run podman push safely right now.

@bengland2 (Contributor, Author)

@aakarshg, @rsevilla87 kindly posted an issue about the reason for sudo. I don't like having to use it, but I don't want to run everything as root on my laptop. The way I've done it, the sudo disappears when run from the root account (using $SUDO), so it shouldn't affect the CI at all. We can get rid of it later, when the workaround is no longer needed and rootless podman is error-free.

@dry923 (Member) left a comment:

LGTM

@dry923 merged commit 42cef5e into cloud-bulldozer:master on Jan 28, 2020