
Cleanup policy for builds #13788

Merged 1 commit on May 20, 2017

Conversation

@coreydaley (Member)

Adds the ability to set successfulBuildsHistoryLimit and failedBuildsHistoryLimit on
BuildConfigs, which will prune old builds.

Closes #13640
Completes https://trello.com/c/048p7YRO/1044-5-cleanup-policy-for-builds-builds

@coreydaley (Member Author)

[test]

@coreydaley (Member Author)

[testextended][extended:core(builds)]

@coreydaley (Member Author)

@openshift/devex ptal

sortedBuilds := make([]buildapi.Build, len(builds))
copy(sortedBuilds, builds)

for i := len(sortedBuilds) - 1; i >= 0; i-- {

Contributor:
is there a good reason not to use sort.Sort here?
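
(For reference, a minimal sketch of the sort.Sort approach the reviewer is suggesting; the type mirrors the ByCreationTimestamp sort this PR later converges on, and the package name, import path, and the sortBuildsNewestFirst helper are assumptions for illustration:)

package common

import (
	"sort"

	buildapi "github.com/openshift/origin/pkg/build/api"
)

// ByCreationTimestamp sorts builds newest-first by creation time.
type ByCreationTimestamp []buildapi.Build

func (b ByCreationTimestamp) Len() int      { return len(b) }
func (b ByCreationTimestamp) Swap(i, j int) { b[i], b[j] = b[j], b[i] }
func (b ByCreationTimestamp) Less(i, j int) bool {
	return b[j].CreationTimestamp.Time.Before(b[i].CreationTimestamp.Time)
}

// sortBuildsNewestFirst sorts a copy so the caller's slice is left untouched.
func sortBuildsNewestFirst(builds []buildapi.Build) []buildapi.Build {
	sorted := make([]buildapi.Build, len(builds))
	copy(sorted, builds)
	sort.Sort(ByCreationTimestamp(sorted))
	return sorted
}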

return nil
}

if bcList.Items[0].Spec.FailedBuildsHistoryLimit != nil || bcList.Items[0].Spec.SuccessfulBuildsHistoryLimit != nil {

Contributor:
style: would be tempted to do:

if bcList.Items[0].Spec.FailedBuildsHistoryLimit == nil && bcList.Items[0].Spec.SuccessfulBuildsHistoryLimit == nil {
        return nil
}

// HandleBuildPruning handles the deletion of old successful and failed builds
// based on settings in the BuildConfig.
func HandleBuildPruning(build *buildapi.Build, buildLister buildclient.BuildLister, buildConfigLister buildclient.BuildConfigLister, buildDeleter pruner.BuildDeleter) error {
if build.Status.Config == nil {

Contributor:
I think you should do the following: use util.ConfigNameForBuild to get the bc name from the build. Then use a .Get() to get the single bc without listing.

Member Author:
+1! I didn't know that existed; that should make things much simpler.
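
(A sketch of the suggested lookup, matching the calls the PR ends up using later in this thread; metav1 is k8s.io/apimachinery/pkg/apis/meta/v1:)

// resolve the buildconfig name from the build, then fetch it directly
bcName := buildutil.ConfigNameForBuild(build)
buildConfig, err := buildConfigGetter.Get(build.Namespace, bcName, metav1.GetOptions{})
if err != nil {
	return err
}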

}

if len(bcList.Items) == 0 {
glog.V(0).Infof("There are no buildconfigs associated with this build: %v/%v", build.Namespace, build.Name)

Contributor:
I don't think V(0) logging is warranted in this routine.

Member Author:
Fixed in next push


if bcList.Items[0].Spec.FailedBuildsHistoryLimit != nil || bcList.Items[0].Spec.SuccessfulBuildsHistoryLimit != nil {

bList, err := buildLister.List(build.Namespace, kapi.ListOptions{

Contributor:
I'd consider using util.BuildConfigBuilds here, then you may not need your SortBuildsByPhase routine.
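
(As the PR ends up calling it, the helper lists a config's builds filtered by a predicate, which removes the need to sort and partition by phase manually:)

// fetch completed and failed builds for the config via phase predicates
successfulBuilds, err := buildutil.BuildConfigBuilds(buildLister, build.Namespace, bcName, isCompletedBuild)
if err != nil {
	return err
}
failedBuilds, err := buildutil.BuildConfigBuilds(buildLister, build.Namespace, bcName, isFailedBuild)
if err != nil {
	return err
}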

if bcList.Items[0].Spec.FailedBuildsHistoryLimit != nil {
failedBuildsHistoryLimit := int(*bcList.Items[0].Spec.FailedBuildsHistoryLimit)
glog.V(4).Infof("Preparing to prune %v old failed builds.", (len(failedBuilds) - failedBuildsHistoryLimit))
if failedBuildsHistoryLimit == 0 && len(failedBuilds) > 0 {

Contributor:
If you negate your sorting predicate, you should be able to simplify this to something like successfulBuilds[successfulBuildsHistoryLimit:] here.

@openshift-bot added the needs-rebase label Apr 27, 2017
@coreydaley changed the title from "(WIP) Cleanup policy for builds" to "Cleanup policy for builds" Apr 27, 2017

@coreydaley (Member Author)

Thanks @jim-minter, your suggestions cleaned up the code quite a bit, and I'll have those "tricks" in my arsenal for next time!

@coreydaley (Member Author)

@openshift/api-review ptal

@@ -11,6 +11,11 @@ type BuildConfigGetter interface {
Get(namespace, name string) (*buildapi.BuildConfig, error)
}

// BuildConfigLister provides methods for listing the BuildConfigs.

Contributor:
The changes to this file may now be superfluous.

Member Author:
Fixed in next push

return nil
}

successfulBuilds, err := buildutil.BuildConfigBuilds(buildLister, build.Namespace, bcName, isCompletedBuild)

Contributor:
if you move these lines into if buildConfig.Spec.SuccessfulBuildsHistoryLimit != nil { (x2), you can remove the if clause above.

Member Author:
fixed in next push

if buildConfig.Spec.SuccessfulBuildsHistoryLimit != nil {
successfulBuildsHistoryLimit := int(*buildConfig.Spec.SuccessfulBuildsHistoryLimit)
glog.V(4).Infof("Preparing to prune %v old successful builds.", (len(successfulBuilds.Items) - successfulBuildsHistoryLimit))
if successfulBuildsHistoryLimit == 0 && len(successfulBuilds.Items) > 0 {

Contributor:
I don't think the if ... else is necessary, I think just the following will do:

if len(successfulBuilds.Items) > successfulBuildsHistoryLimit {
	buildsToDelete = append(buildsToDelete, successfulBuilds.Items[successfulBuildsHistoryLimit:]...)
}

Member Author:
Yep, that's a leftover from the way it was being done before, fixed in next push


// SuccessfulBuildsHistoryLimit is the number of old successful builds to retain.
// This field is a pointer to allow for differentiation between an explicit zero and not specified.
SuccessfulBuildsHistoryLimit *int32 `json:"successfulBuildsHistoryLimit,omitempty" protobuf:"varint,4,opt,name=successfulBuildsHistoryLimit"`

Contributor:
I think you need to validate that these values, if they exist, are >= 0

Member Author:
implemented in next push

@openshift-bot removed the needs-rebase label Apr 27, 2017

@deads2k (Contributor) commented Apr 27, 2017

re[test]

@@ -148,6 +148,18 @@ func ValidateBuildConfig(config *buildapi.BuildConfig) field.ErrorList {
"run policy must Parallel, Serial, or SerialLatestOnly"))
}

if config.Spec.SuccessfulBuildsHistoryLimit != nil {
if *config.Spec.SuccessfulBuildsHistoryLimit < 0 {

Contributor:
There is a ValidateNonnegativeField helper

Member Author:
fixed in next push

@@ -236,7 +246,7 @@ func (bc *BuildPodController) HandlePod(pod *kapi.Pod) error {
case buildapi.BuildPhaseError, buildapi.BuildPhaseFailed:
bc.recorder.Eventf(build, kapi.EventTypeNormal, buildapi.BuildFailedEventReason, fmt.Sprintf(buildapi.BuildFailedEventMessage, build.Namespace, build.Name))
}
common.HandleBuildCompletion(build, bc.runPolicies)
common.HandleBuildCompletion(build, bc.buildLister, bc.buildConfigGetter, bc.buildDeleter, bc.runPolicies)

Contributor:
Handling this only when a running build transitions to a new phase is not optimal. For example, if I set the limits to a BC without running any build, I would expect any prunable builds to be removed.

Member Author:
if you are just setting the limits on a BC, what would trigger the pruning then? saving the BC?

Contributor:
Yes.

successfulBuildsHistoryLimit := int(*buildConfig.Spec.SuccessfulBuildsHistoryLimit)
glog.V(4).Infof("Preparing to prune %v old successful builds.", (len(successfulBuilds.Items) - successfulBuildsHistoryLimit))
if len(successfulBuilds.Items) > successfulBuildsHistoryLimit {
buildsToDelete = append(buildsToDelete, successfulBuilds.Items...)

Contributor:
You shouldn't append all of the builds?

Member Author:
fixed in next push (I had deleted the wrong line!)

for i, b := range buildsToDelete {
glog.V(4).Infof("Pruning old build: %s/%s", b.Namespace, b.Name)
if err := buildDeleter.DeleteBuild(&buildsToDelete[i]); err != nil {
return err

Contributor:
You can continue deleting other builds and return an aggregate error.

Member Author:
Just concat all the err strings?

Contributor:
There is a NewAggregate helper which takes a slice of errors
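
(A sketch of the aggregate approach; utilerrors here is assumed to be k8s.io/apimachinery/pkg/util/errors. NewAggregate returns nil for an empty slice, so the happy path needs no special casing:)

// keep deleting remaining builds even if one deletion fails
var errList []error
for i, b := range buildsToDelete {
	glog.V(4).Infof("Pruning old build: %s/%s", b.Namespace, b.Name)
	if err := buildDeleter.DeleteBuild(&buildsToDelete[i]); err != nil {
		errList = append(errList, err)
	}
}
return utilerrors.NewAggregate(errList)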

@@ -148,6 +148,16 @@ func ValidateBuildConfig(config *buildapi.BuildConfig) field.ErrorList {
"run policy must Parallel, Serial, or SerialLatestOnly"))
}

if config.Spec.SuccessfulBuildsHistoryLimit != nil {
allErrs = append(allErrs, validation.ValidateNonnegativeField(int64(*config.Spec.SuccessfulBuildsHistoryLimit), specPath.Child("successfulBuildsHistoryLimit"))...)

Contributor:
delete newline

Member Author:
updated in next push


if config.Spec.FailedBuildsHistoryLimit != nil {
allErrs = append(allErrs, validation.ValidateNonnegativeField(int64(*config.Spec.FailedBuildsHistoryLimit), specPath.Child("failedBuildsHistoryLimit"))...)

Contributor:
delete newline

Member Author:
updated in next push

@@ -64,13 +68,19 @@ func NewBuildPodController(buildInformer cache.SharedIndexInformer, podInformer
eventBroadcaster.StartRecordingToSink(&v1core.EventSinkImpl{Interface: v1core.New(extkc.Core().RESTClient()).Events("")})

buildListerUpdater := buildclient.NewOSClientBuildClient(oc)
buildConfigGetter := buildclient.NewOSClientBuildConfigClient(oc)
buildDeleter := pruner.NewBuildDeleter(oc)

Contributor:
refactor BuildDeleter out of the prune package and into where the other build clients live.

Member Author:
fixed in next push


func isFailedBuild(build buildapi.Build) bool {
return build.Status.Phase == buildapi.BuildPhaseFailed
}

Contributor:
i don't think it's worth introducing these helpers unless you're going to check multiple phases, but if you are set on keeping them, please move them into pkg/build/util.go where we already have some helpers for checking build phases.

Contributor:
Note that there is an existing IsBuildComplete helper which checks if the build is in any terminal state. you might want to rename it to IsBuildTerminated or change it to IsBuildActive and reverse all the boolean checks that were calling the old function.

Member Author:
fixed in next push
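
(Paraphrasing the existing helper in pkg/build/util; the exact source may differ, but it checks the terminal phases roughly like this:)

// IsBuildComplete returns true if the build is in a terminal phase.
func IsBuildComplete(build *buildapi.Build) bool {
	switch build.Status.Phase {
	case buildapi.BuildPhaseComplete, buildapi.BuildPhaseFailed,
		buildapi.BuildPhaseError, buildapi.BuildPhaseCancelled:
		return true
	}
	return false
}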

}

func (b ByStartTimestamp) Less(i, j int) bool {
return !b[j].Status.StartTimestamp.Time.After(b[i].Status.StartTimestamp.Time)

Contributor:
i'd expect to sort by creation time, not start time.

Member Author:
fixed in next push

// HandleBuildPruning handles the deletion of old successful and failed builds
// based on settings in the BuildConfig.
func HandleBuildPruning(build *buildapi.Build, buildLister buildclient.BuildLister, buildConfigGetter buildclient.BuildConfigGetter, buildDeleter pruner.BuildDeleter) error {
bcName := buildutil.ConfigNameForBuild(build)

Contributor:
if you get back nil/emptystring here, just bail, it means the build isn't associated with a buildconfig so there's nothing to do.
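
(i.e., roughly:)

bcName := buildutil.ConfigNameForBuild(build)
if len(bcName) == 0 {
	// the build is not associated with a buildconfig; nothing to prune
	return nil
}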

t.Errorf("should have set the CompletionTimestamp, but instead it was nil")
}
if build.Status.Duration > 0 {
t.Errorf("should have set the Durationi to 0s, but instead it was %v", build.Status.Duration)

Contributor:
s/Durationi/Duration/

Member Author:
fixed in next push

},
Spec: buildapi.BuildConfigSpec{
SuccessfulBuildsHistoryLimit: &buildsToKeep,
FailedBuildsHistoryLimit: &buildsToKeep,

Contributor:
use different numbers for the failed count and successful count. by using the same number you have no way of knowing if your code is using the right field for the right build type.

Member Author:
fixed in next push
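
(For example, with distinct limits so a mixed-up field fails the test; the variable names and values here are illustrative:)

successfulBuildsToKeep := int32(2)
failedBuildsToKeep := int32(3)
bc := &buildapi.BuildConfig{
	Spec: buildapi.BuildConfigSpec{
		SuccessfulBuildsHistoryLimit: &successfulBuildsToKeep,
		FailedBuildsHistoryLimit:     &failedBuildsToKeep,
	},
}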

func HandleBuildPruning(build *buildapi.Build, buildLister buildclient.BuildLister, buildConfigGetter buildclient.BuildConfigGetter, buildDeleter pruner.BuildDeleter) error {
bcName := buildutil.ConfigNameForBuild(build)

buildConfig, err := buildConfigGetter.Get(build.Namespace, bcName, metav1.GetOptions{})

Contributor:
instead of going to the api to the get the buildconfig (and the builds), we should be going to the sharedinformer cache. @csrwng might be able to give you some pointers.

@coreydaley (Member Author)

@bparees @Kargakis ptal

@openshift-bot added and then removed the needs-rebase label May 3, 2017

@bparees (Contributor) left a comment

A handful of minor nits, a follow-up for using caches; I'd also like to see at least one extended test case that validates the pruning behavior for both completed and failed builds.

// recorder is used to record events.
Recorder record.EventRecorder
}

func (c *BuildConfigController) HandleBuildConfig(bc *buildapi.BuildConfig) error {
glog.V(4).Infof("Handling BuildConfig %s/%s", bc.Namespace, bc.Name)

glog.V(4).Infof("Handling Build Pruning: %v/%v", bc.Namespace, bc.Name)

Contributor:
this logging is redundant, it doesn't add any information to what was logged above.

Member Author:
fixed in next push

glog.V(4).Infof("Handling Build Pruning: %v/%v", bc.Namespace, bc.Name)
if err := buildutil.HandleBuildPruning(bc.Name, bc.Namespace, c.BuildLister, c.BuildConfigGetter, c.BuildDeleter); err != nil {
return err
}

Contributor:
  1. why isn't this returning the same wrapper error message as HandleBuildCompletion does when it has an error pruning?

  2. i think you should proceed through the rest of the HandleBuildConfig logic and return an aggregated error object at the end, rather than quitting here if something goes wrong. Even if pruning fails, we still want to kick off the new build.

(Unless the HandleBuildConfig event gets retried when an error is encountered?)

Member Author:
fixed in next push

sort.Sort(ByCreationTimestamp(successfulBuilds.Items))

successfulBuildsHistoryLimit := int(*buildConfig.Spec.SuccessfulBuildsHistoryLimit)
glog.V(4).Infof("Preparing to prune %v old successful builds.", (len(successfulBuilds.Items) - successfulBuildsHistoryLimit))

Contributor:
move this log into the if block, and/or log more details: number of builds found, number of builds configured to keep, number that will be pruned.

Member Author:
fixed in next push
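
(Something along these lines: the message moves inside the guard and reports found/kept/pruned counts; the exact wording is illustrative:)

if len(successfulBuilds.Items) > successfulBuildsHistoryLimit {
	// log what was found, what the limit keeps, and what gets pruned
	glog.V(4).Infof("Found %d successful builds for %s/%s, keeping %d, pruning %d",
		len(successfulBuilds.Items), buildConfig.Namespace, buildConfig.Name,
		successfulBuildsHistoryLimit, len(successfulBuilds.Items)-successfulBuildsHistoryLimit)
	buildsToDelete = append(buildsToDelete, successfulBuilds.Items[successfulBuildsHistoryLimit:]...)
}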

sort.Sort(ByCreationTimestamp(failedBuilds.Items))

failedBuildsHistoryLimit := int(*buildConfig.Spec.FailedBuildsHistoryLimit)
glog.V(4).Infof("Preparing to prune %v old failed builds.", (len(failedBuilds.Items) - failedBuildsHistoryLimit))

Contributor:
same for this log message.

Member Author:
fixed in next push

}

if !reflect.DeepEqual(failedStartingBuilds.Items[:3], failedRemainingBuilds.Items) {
t.Errorf("the two most recent failed builds should be left, but they were not")

Contributor:
s/two/three/

and for both this and the case above, it's generally useful to print the actual and expected values (in this case the lists of builds) to help debug if it does fail.

Member Author:
fixed in next push
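
(For example:)

if !reflect.DeepEqual(failedStartingBuilds.Items[:3], failedRemainingBuilds.Items) {
	// print expected and actual lists to help debug a failure
	t.Errorf("expected the three most recent failed builds to remain: expected %v, got %v",
		failedStartingBuilds.Items[:3], failedRemainingBuilds.Items)
}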


// HandleBuildPruning handles the deletion of old successful and failed builds
// based on settings in the BuildConfig.
func HandleBuildPruning(buildConfigName string, namespace string, buildLister buildclient.BuildLister, buildConfigGetter buildclient.BuildConfigGetter, buildDeleter buildclient.BuildDeleter) error {

Contributor:
i'm still a little bothered this function isn't using the shared informer caches to get the buildconfig+builds, but we can fix it in a follow up. Please open an issue to track the need to deal with it.

Member Author:
Opened #14049

fmt.Fprintf(g.GinkgoWriter, "%v", err)
}

o.ExpectWithOffset(1, int32(len(builds.Items))).To(o.Equal(*buildConfig.Spec.SuccessfulBuildsHistoryLimit), "there should be %v completed builds left after pruning, but instead there were %v", *buildConfig.Spec.SuccessfulBuildsHistoryLimit, len(builds.Items))

Contributor:
why is this an expect with offset?

Member Author:
fixed in next push

fmt.Fprintf(g.GinkgoWriter, "%v", err)
}

o.ExpectWithOffset(1, int32(len(builds.Items))).To(o.Equal(*buildConfig.Spec.FailedBuildsHistoryLimit), "there should be %v failed builds left after pruning, but instead there were %v", *buildConfig.Spec.FailedBuildsHistoryLimit, len(builds.Items))

Contributor:
this one too

Member Author:
fixed in next push

@liggitt (Contributor) commented May 15, 2017

API LGTM (matches successfulJobsHistoryLimit/failedJobsHistoryLimit naming and types)

@bparees (Contributor) commented May 15, 2017

flake #14197

[merge]

@bparees (Contributor) commented May 16, 2017

@coreydaley you pushed another commit, what did you change? that cancels my merge tag, fyi.

@coreydaley (Member Author)

@bparees I just rebased, didn't see your merge tag, was trying to rerun the tests.

@bparees (Contributor) commented May 16, 2017

@coreydaley you appear to have picked up a compile error w/ your rebase due to a signature change:

[WARNING] `go test` had the following output to stderr:
# github.com/openshift/origin/pkg/build/controller/common
pkg/build/controller/common/util_test.go:91: not enough arguments in call to SetBuildCompletionTimeAndDuration

@coreydaley (Member Author)

@bparees Fixed in latest push

@openshift-bot (Contributor)

Evaluated for origin test up to 2afdad0

@openshift-bot (Contributor)

continuous-integration/openshift-jenkins/test Running (https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin/1496/) (Base Commit: b96ef9a)

@openshift-bot (Contributor)

Evaluated for origin testextended up to 2afdad0

@openshift-bot (Contributor)

continuous-integration/openshift-jenkins/test SUCCESS

@openshift-bot (Contributor)

continuous-integration/openshift-jenkins/testextended SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_extended/407/) (Base Commit: b96ef9a) (Extended Tests: core(builds))

@coreydaley (Member Author)

@bparees ptal and hit me with another merge tag

@bparees (Contributor) commented May 16, 2017

[merge]
/kabam

@bparees (Contributor) commented May 19, 2017

flake #14236
[merge]

@bparees (Contributor) commented May 19, 2017

flake #14241
[merge]

@bparees (Contributor) commented May 19, 2017

flake #14129
[merge]

@bparees (Contributor) commented May 19, 2017

flake #13980
[merge]

@bparees (Contributor) commented May 19, 2017

flake #14259
[merge]

@openshift-bot (Contributor)

Evaluated for origin merge up to 2afdad0

@openshift-bot (Contributor) commented May 20, 2017

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_origin/712/) (Base Commit: 938d40f) (Image: devenv-rhel7_6247)


builds, err := oc.Client().Builds(oc.Namespace()).List(metav1.ListOptions{})
if err != nil {
fmt.Fprintf(g.GinkgoWriter, "%v", err)

Contributor:
Here and above: was it intentional not to print a newline at the end of the error?
