
build(operator-sdk): upgrade operator sdk version to 1.28.0 #543

Merged: 10 commits into cryostatio:main, Apr 13, 2023

Conversation

@tthvo (Member) commented Mar 28, 2023

Fixes #536
Fixes #455

Upgrade to use Operator SDK 1.28.0. Step-by-step changes, with references per release:

1.23.0

  • Update controller-gen from 0.9.0 to 0.9.2. Use the variable CONTROLLER_TOOLS_VERSION for the controller-gen version (i.e. sync with the operator-sdk Go template); see the Makefile sketch after this list.
  • Update controller-runtime from 0.12.1 to 0.12.2 and Kubernetes dependencies from 0.24.0 to 0.24.2.
  • Upgrade the kube proxy image gcr.io/kubebuilder/kube-rbac-proxy from 0.11.0 to 0.13.0.
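
For illustration, a minimal sketch of that Makefile convention, assuming the standard operator-sdk layout (the LOCALBIN location is an assumed convention, not taken from this PR):

```make
## Tool version, synced with the operator-sdk Go template
CONTROLLER_TOOLS_VERSION ?= v0.9.2

## Location to install tool binaries to (assumed convention)
LOCALBIN ?= $(shell pwd)/bin

.PHONY: controller-gen
controller-gen: ## Download controller-gen locally if necessary.
	GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-tools/cmd/controller-gen@$(CONTROLLER_TOOLS_VERSION)
```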

1.24.0

Nothing to change. There are some sed operations on the ARCH variable for arm support, but no mention of Go operators.

1.25.0

  • Bump k8s runtime dependencies to 0.25.0 and controller-runtime to 0.13.0.
    • Upgrade the Go version to 1.19.
    • Update the test client's Get function signature (see the Go sketch after this list).
  • Bump test dependencies (mainly to Ginkgo v2).
    • Remove the custom reporter (deprecated) that added a newline.
    • Remove ACK_GINKGO_DEPRECATIONS, since Ginkgo is upgraded to v2.
  • Multi-platform build for the operator image (Docker only; the buildx plugin must be installed).
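
For reference, controller-runtime 0.13 added a variadic client.GetOption parameter to client.Get, so any client.Client implementation, such as a test client wrapper, must update its signature. A minimal sketch (the wrapper type here is hypothetical, not this repo's actual test client):

```go
package clienttest // hypothetical package for a test client wrapper

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// testClient is a hypothetical wrapper around a controller-runtime client.
type testClient struct {
	client.Client
}

// controller-runtime 0.13 (k8s 0.25) adds variadic GetOptions to client.Get,
// so implementations must accept and forward them.
func (c *testClient) Get(ctx context.Context, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error {
	return c.Client.Get(ctx, key, obj, opts...)
}
```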

1.26.0

Nothing to change.

1.27.0

Nothing to change.

1.28.0

  • Makefile updates
    • Bump CONTROLLER_TOOLS_VERSION to 0.11.1 (the migration guide says 0.11.3, but the scaffold pins 0.11.1).
    • Bump ENVTEST_K8S_VERSION to 1.26.
    • Update the prerequisites of the manager target to include the manifests target. Update the image build targets to include fmt and vet.
    • The controller-gen and kustomize targets now check the installed version (i.e. re-download if the version is mismatched); see the sketch after this list.
  • Bump k8s runtime dependencies to 0.26.2 and controller-runtime to 0.14.5.
  • Bump the kube-rbac-proxy image to 0.13.1.
  • Set --pod-security=restricted when running operator-sdk scorecard. This launches the scorecard pods under the restricted PSA profile.
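
A sketch of the kind of version check the 1.28 scaffold adds for downloaded tools, shown here for kustomize (the recipe is an approximation of the scaffold, not a verbatim copy, and the pinned version is an assumption):

```make
KUSTOMIZE ?= $(LOCALBIN)/kustomize
KUSTOMIZE_VERSION ?= v4.5.7  # assumption: the real pin comes from the scaffold

.PHONY: kustomize
kustomize: $(KUSTOMIZE) ## Download kustomize locally if necessary.
$(KUSTOMIZE): $(LOCALBIN)
	@if test -x $(LOCALBIN)/kustomize && ! $(LOCALBIN)/kustomize version | grep -q $(KUSTOMIZE_VERSION); then \
		echo "$(LOCALBIN)/kustomize version is not expected $(KUSTOMIZE_VERSION); removing it before reinstalling."; \
		rm -rf $(LOCALBIN)/kustomize; \
	fi
	test -s $(LOCALBIN)/kustomize || GOBIN=$(LOCALBIN) go install sigs.k8s.io/kustomize/kustomize/v4@$(KUSTOMIZE_VERSION)
```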

Tests

  • Reference bundle image: quay.io/thvo/cryostat-operator-bundle:latest.
  • Passed scorecards; an example invocation is sketched below.
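
For reference, a scorecard run against the bundle with the restricted PSA flag from the 1.28.0 notes above would look roughly like:

```sh
operator-sdk scorecard quay.io/thvo/cryostat-operator-bundle:latest --pod-security=restricted
```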

@tthvo tthvo added the build label Mar 28, 2023
@tthvo tthvo force-pushed the operator-sdk branch 8 times, most recently from 2905232 to d6ae766, April 11, 2023 19:50
@tthvo tthvo marked this pull request as ready for review April 11, 2023 20:07
@tthvo (Member Author) commented Apr 11, 2023

For the multi-arch build, it needs the official Docker tooling with the buildx plugin. It took me a while to realize that official Docker support includes Fedora 36, but I am still on Fedora 35 and using moby-engine. Not sure there is enough time to upgrade and test. Could you help check it out instead?

Also, wondering if we need to add multi-arch builds for the scorecard images?

@ebaron (Member) commented Apr 12, 2023

> For the multi-arch build, it needs the official Docker tooling with the buildx plugin. It took me a while to realize that official Docker support includes Fedora 36, but I am still on Fedora 35 and using moby-engine. Not sure there is enough time to upgrade and test. Could you help check it out instead?
>
> Also, wondering if we need to add multi-arch builds for the scorecard images?

I was using moby-engine with the buildx plugin installed into ~/.docker/cli-plugins/: https://github.com/docker/buildx#manual-download

We will eventually need multi-arch scorecard images too, but it doesn't need to be done with this PR.
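
A sketch of that manual install, assuming a Linux amd64 host (the buildx release version below is only an example):

```sh
mkdir -p ~/.docker/cli-plugins
# v0.10.4 is an assumed example; pick the latest release from the buildx repo
curl -fsSL -o ~/.docker/cli-plugins/docker-buildx \
  https://github.com/docker/buildx/releases/download/v0.10.4/buildx-v0.10.4.linux-amd64
chmod +x ~/.docker/cli-plugins/docker-buildx
docker buildx version  # verify the plugin is picked up
```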

@tthvo (Member Author) commented Apr 12, 2023

> I was using moby-engine with the buildx plugin installed into ~/.docker/cli-plugins/: https://github.com/docker/buildx#manual-download

Oh, thanks! I will try that (I saw the words "not recommended" last time, so I quickly skipped it :D).

EDIT: Nice, got it to use buildx with moby-engine now. Thanks a lot!

@tthvo (Member Author) commented Apr 12, 2023

Do we also need to specify the supported OS and ARCH in the CSV? https://olm.operatorframework.io/docs/advanced-tasks/ship-operator-supporting-multiarch/
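
For context, the linked OLM doc advertises platform support via labels on the CSV; a minimal sketch (which platforms to list is exactly the open question here):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  labels:
    operatorframework.io/os.linux: supported
    operatorframework.io/arch.amd64: supported
    operatorframework.io/arch.arm64: supported
```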

@andrewazores (Member) commented
> EDIT: Nice, got it to use buildx with moby-engine now. Thanks a lot!

Any way around having to create/use temporary build contexts? If so, this would also be podman-compatible, which would be nice...

@ebaron (Member) commented Apr 12, 2023

> Do we also need to specify the supported OS and ARCH in the CSV? https://olm.operatorframework.io/docs/advanced-tasks/ship-operator-supporting-multiarch/

We do, yes, but this may be a bit premature until the operator and all the operand images have published multi-arch images.

@tthvo (Member Author) commented Apr 12, 2023

> Any way around having to create/use temporary build contexts? If so, this would also be podman-compatible, which would be nice...

I am reading this: https://github.com/docker/buildx#working-with-builder-instances. Without creating a new builder, there is an error:

    ERROR: multiple platforms feature is currently not supported for docker driver.
    Please switch to a different driver (eg. "docker buildx create --use")
    make: *** [Makefile:245: oci-buildx] Error 1

Also, it seems docker buildx needs the --push option to be specified. Podman doesn't have that either, does it?

Also, for Buildah, we need to use a manifest list:

> --platform="OS/ARCH[/VARIANT]"
> The --platform flag can be specified more than once, or given a comma-separated list of values as its argument. When more than one platform is specified, the --manifest option should be used instead of the --tag option.

How about splitting the cases for docker and podman so we can specify the commands separately?

@ebaron (Member) commented Apr 12, 2023

> How about splitting the cases for docker and podman so we can specify the commands separately?

That seems fine to me; just wrap the extra Docker stuff in ifeq ($(IMAGE_BUILDER), docker).
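
A hypothetical sketch of that split (the target name, PLATFORMS variable, and image name are illustrative assumptions, not the PR's actual Makefile):

```make
PLATFORMS ?= linux/amd64,linux/arm64
OPERATOR_IMG ?= quay.io/example/cryostat-operator:latest  # placeholder

.PHONY: oci-buildx
ifeq ($(IMAGE_BUILDER), docker)
# Docker: buildx needs a non-default builder instance for multi-platform
# builds, and pushes the resulting manifest list directly.
oci-buildx:
	docker buildx create --name cryostat-builder --use
	docker buildx build --platform $(PLATFORMS) --push -t $(OPERATOR_IMG) .
	docker buildx rm cryostat-builder
else
# Podman: build each platform (under emulation) into a local manifest
# list, which can then be pushed separately.
oci-buildx:
	podman build --platform $(PLATFORMS) --manifest $(OPERATOR_IMG) .
endif
```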

@tthvo (Member Author) commented Apr 12, 2023

I am having a bit of an issue with oci-build, where the command

[1/2] STEP 11/11: RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} GO111MODULE=on go build -a -o manager internal/main.go

takes forever to run. Though it eventually finishes (this happens on the first build after bumping Go and the deps). Wondering if it's just my setup or the same for everyone? Or was it because the builder container was cached?

@ebaron (Member) commented Apr 12, 2023

> I am having a bit of an issue with oci-build, where the command
>
> [1/2] STEP 11/11: RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} GO111MODULE=on go build -a -o manager internal/main.go
>
> takes forever to run. Though it eventually finishes (this happens on the first build after bumping Go and the deps). Wondering if it's just my setup or the same for everyone? Or was it because the builder container was cached?

It doesn't take long for me. Not sure why there's a difference.

Edit: Just noticed you mentioned it happens after bumping the deps once. That's because the layer that downloads all the new dependencies gets cached, so subsequent builds will be faster.
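
For illustration, the dependency-download layer in a typical operator-sdk scaffolded Dockerfile looks roughly like this (a sketch, not this PR's exact Dockerfile); bumping go.mod/go.sum invalidates the cached layer, so the first build afterwards re-downloads everything:

```dockerfile
FROM golang:1.19 AS builder
WORKDIR /workspace

# Copy the module manifests first so this layer stays cached until
# go.mod or go.sum actually change.
COPY go.mod go.mod
COPY go.sum go.sum
RUN go mod download

# Source edits do not invalidate the dependency layer above.
COPY internal/ internal/
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} GO111MODULE=on go build -a -o manager internal/main.go
```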

@andrewazores (Member) commented

I find that it takes quite a while too, and subsequent runs are a lot faster after that container layer is cached. podman image prune -f deletes it because the intermediate image is not tagged, and then the next invocation takes a long time again.

That's the case on main though, not just for this PR (which I haven't tested yet).

@tthvo (Member Author) commented Apr 12, 2023

Oh yes, exactly! I have been running prune after every build. That makes sense, thanks a lot!

@tthvo (Member Author) commented Apr 12, 2023

Added the handling for the podman case. It seems to build as expected (I only included amd64 and arm64; ppc64le and s390x take way too long):

https://quay.io/repository/thvo/cryostat-operator/manifest/sha256:bdda2f96fd1d1c885616ea8eef18a724a8dcf1e73ef8ff5ac799231434d9dc65

For podman, I used the original Dockerfile, as it seems to build using an emulator rather than cross-compilation.
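
A sketch of that emulated flow, assuming a Fedora host (the package name is Fedora's, and the image name is a placeholder):

```sh
# qemu-user-static lets podman build foreign-arch images via binfmt_misc
sudo dnf install -y qemu-user-static

# Build both platforms under emulation into one local manifest list
podman build --platform linux/amd64,linux/arm64 \
  --manifest quay.io/example/cryostat-operator:latest .

# Push the manifest list and all of its per-arch images to the registry
podman manifest push --all quay.io/example/cryostat-operator:latest \
  docker://quay.io/example/cryostat-operator:latest
```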

@ebaron (Member) left a comment
Looks good to me! Thanks @tthvo!

@ebaron ebaron added the chore label Apr 13, 2023
@ebaron ebaron merged commit cd14562 into cryostatio:main Apr 13, 2023
@tthvo tthvo deleted the operator-sdk branch April 13, 2023 18:50
Development

Successfully merging this pull request may close these issues:

  • Upgrade Operator SDK to >= 1.28.0
  • Label the scorecard namespace to enforce restricted PSA profile