build(operator-sdk): upgrade operator sdk version to 1.28.0 #543
Conversation
Force-pushed from 2905232 to d6ae766.
For multi-arch build, it needs official …. Also, wondering if we need to add multi-arch builds for scorecard images?
I was using …. We will eventually need multi-arch scorecard images too, but it doesn't need to be done with this PR.
Oh thanks! I will try that (saw the word …). EDIT: Nice, got it to use buildx with the moby engine now. Thanks a lot!
We also need to specify supported OS and ARCH in the CSV? https://olm.operatorframework.io/docs/advanced-tasks/ship-operator-supporting-multiarch/
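Per the linked OLM doc, supported platforms are declared as labels on the ClusterServiceVersion. A trimmed sketch (the CSV name here is made up, not the PR's actual one):

```yaml
# ClusterServiceVersion snippet declaring multi-arch support
kind: ClusterServiceVersion
metadata:
  name: cryostat-operator.v1.0.0   # hypothetical name
  labels:
    operatorframework.io/arch.amd64: supported
    operatorframework.io/arch.arm64: supported
    operatorframework.io/os.linux: supported
```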
Any way around the …?
We do, yes, but this may be a bit premature until the operator and all the operand images have published multi-arch images.
I am reading this: https://github.com/docker/buildx#working-with-builder-instances. Without creating a new builder, there is an error.
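As the linked buildx docs note, the default `docker` driver cannot build multi-platform images, so a dedicated builder instance is needed first. A sketch (builder and image names are assumptions):

```sh
# Create and select a builder backed by the docker-container driver
docker buildx create --name multiarch-builder --use
docker buildx inspect --bootstrap

# Build and push a multi-arch image in one shot
docker buildx build --platform linux/amd64,linux/arm64 \
    -t example.com/cryostat-operator:latest --push .
```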
Also, seems like …. Also, for Buildah, we need to use a manifest list.
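With podman/Buildah the equivalent flow can be sketched with a manifest list: build each architecture into one manifest, then push it (the image name is an assumption):

```sh
# Each build appends its platform-specific image to the manifest list
podman build --platform linux/amd64 --manifest example.com/cryostat-operator:latest .
podman build --platform linux/arm64 --manifest example.com/cryostat-operator:latest .

# Push the manifest list with all architectures
podman manifest push --all example.com/cryostat-operator:latest
```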
How about splitting cases for docker and podman so we can specify the commands separately?
That seems fine to me, just wrap the extra Docker stuff in ….
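The split could be sketched as a small shell helper that mirrors a Makefile conditional (`BUILDER`, `IMG`, and `PLATFORMS` are hypothetical names for illustration, not the PR's actual variables):

```shell
#!/bin/sh
# Sketch: choose the multi-arch build command per container engine.
PLATFORMS="${PLATFORMS:-linux/amd64,linux/arm64}"
IMG="${IMG:-example.com/cryostat-operator:latest}"

build_cmd() {
  case "$1" in
    docker)
      # docker needs buildx; --push publishes the multi-arch image
      echo "docker buildx build --platform $PLATFORMS -t $IMG --push ." ;;
    podman)
      # podman builds into a manifest list instead
      echo "podman build --platform $PLATFORMS --manifest $IMG ." ;;
    *)
      echo "unsupported builder: $1" >&2
      return 1 ;;
  esac
}

build_cmd "${BUILDER:-docker}"
```

In a real Makefile the same branch would live inside an `ifeq` on the builder variable, with each branch invoking its engine-specific command.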
I am having a bit of an issue with …: [1/2] STEP 11/11: RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} GO111MODULE=on go build -a -o manager internal/main.go takes forever to run. Though it eventually finishes (this happens on the first build after bumping Go and deps). Wondering if it's just my setup or the same for everyone? Or was it because the builder container was cached?
It doesn't take long for me. Not sure why there's a difference. Edit: Just noticed you mentioned it happens after bumping the deps. That's because the layer that downloads all the new dependencies gets cached, so subsequent builds will be faster.
I find that it takes quite a while too, and subsequent runs are a lot faster once that container layer is cached. That's the case on ….
Oh yeah, exactly! I have been running prune after every build. That makes sense, thanks a lot!
Added the handling for the podman case. Seems to build as expected (only included 2 arches, amd64 and arm64; ppc64le and s390x take way too long). For podman, I used the original Dockerfile, as it seems to build using an emulator, not cross-platform compilation.
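For reference, the docker path quoted earlier relies on buildx populating TARGETOS/TARGETARCH per platform so the Go binary is cross-compiled rather than emulated. A trimmed sketch of such a builder stage (base image tag and paths are assumptions, except the RUN line quoted from this thread):

```dockerfile
# Build stage runs on the host platform and cross-compiles via GOOS/GOARCH
FROM --platform=$BUILDPLATFORM golang:1.19 AS builder
ARG TARGETOS
ARG TARGETARCH
WORKDIR /workspace
COPY . .
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} GO111MODULE=on \
    go build -a -o manager internal/main.go
```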
Looks good to me! Thanks @tthvo!
Fixes #536
Fixes #455
Upgrade to use operator-sdk `1.28.0`. Step by step changes follow. References: ….

1.23.0
- Upgrade `controller-gen` from `0.9.0` to `0.9.2`. Update to use the variable `CONTROLLER_TOOLS_VERSION` for the controller-gen version (i.e. sync with the operator-sdk Go template).
- Upgrade `controller-runtime` from `0.12.1` to `0.12.2` and Kubernetes dependencies from `0.24.0` to `0.24.2`.
- Upgrade from `0.11.0` to `0.13.0`.

1.24.0
- Nothing to change. There are some `sed` ops on the `ARCH` var for arm support, but no mentions of Go operators.

1.25.0
- Upgrade Kubernetes dependencies to `0.25.0` and controller-runtime to `0.13.0`.
- Upgrade Go to `1.19`.
- Set `ACK_GINKGO_DEPRECATIONS`, as Ginkgo is upgraded to v2.
- Multi-arch image builds support `docker` only and need `buildx` installed.

1.26.0
- Nothing to change.

1.27.0
- Nothing to change.
1.28.0
- Bump `CONTROLLER_TOOLS_VERSION` to `0.11.1` (migration guide said `0.11.3`, but the scaffold specifies `0.11.1`).
- Bump `ENVTEST_K8S_VERSION` to `1.26`.
- Update the `manager` target to include the `manifests` target. Update image build targets to include `fmt` and `vet`.
- `controller-gen` and `kustomize` are added with a check for version (i.e. download if the version is mismatched).
- Upgrade Kubernetes dependencies to `0.26.2` and controller-runtime to `0.14.5`.
- Bump the `kube-rbac-proxy` image to `0.13.1`.
- Add `--pod-security=restricted` when running `operator-sdk scorecard`. This launches scorecard pods with restricted PSA.

Tests
Tested with bundle image `quay.io/thvo/cryostat-operator-bundle:latest`.
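The "download if the version is mismatched" check for `controller-gen`/`kustomize` can be sketched as a shell helper; the paths, package path, and version pin below are assumptions for illustration, not the Makefile's actual recipe:

```shell
#!/bin/sh
# Sketch: install a Go tool into ./bin only if the cached copy's
# version doesn't match the pinned one.
LOCALBIN="${LOCALBIN:-./bin}"
mkdir -p "$LOCALBIN"

ensure_tool() { # usage: ensure_tool <binary> <go-package> <version>
  bin="$LOCALBIN/$1"
  if [ -x "$bin" ] && "$bin" --version 2>/dev/null | grep -q "$3"; then
    echo "$1 $3 already present"
  else
    echo "installing $1 $3"
    GOBIN="$(cd "$LOCALBIN" && pwd)" go install "$2@$3"
  fi
}

# e.g.: ensure_tool controller-gen sigs.k8s.io/controller-tools/cmd/controller-gen v0.11.1
```

The version check is what lets a bumped `CONTROLLER_TOOLS_VERSION` invalidate a stale cached binary instead of silently reusing it.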