
Is the v4.0.3 version about to be released soon? #330

Open
hollycai05 opened this issue Feb 22, 2024 · 9 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@hollycai05 commented Feb 22, 2024

I noticed the v4.0.3 changelog in https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/blob/master/CHANGELOG.md.
Does this mean that v4.0.3 will be released soon? There are some high CVEs in v4.0.2 (or the newer 4.0.18 release), so I'm really looking forward to v4.0.3 being released ASAP.
Thanks

@yonatankahana (Contributor)

That's the plan.

Ping @kmova, everything is merged and ready

@flo-mic commented Feb 29, 2024

@hollycai05 While 4.0.3 is not released, you can still rebuild the provisioner on your own to get rid of the high vulnerabilities. Here is an example Dockerfile to build the latest provisioner:

# First stage: build the provisioner binary from the repository's default branch
FROM golang:1.19 as builder
RUN git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
WORKDIR /go/nfs-subdir-external-provisioner
RUN make

# Second stage: copy the static binary into a minimal distroless image
FROM gcr.io/distroless/static:latest
COPY --from=builder /go/nfs-subdir-external-provisioner/bin/nfs-subdir-external-provisioner /nfs-subdir-external-provisioner
ENTRYPOINT ["/nfs-subdir-external-provisioner"]
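
To use an image built like this, you would push it somewhere your cluster can pull from and then point your deployment (or helm values) at it. A minimal sketch, with a placeholder registry name and tag:

# Build and push the custom image (registry and tag are placeholders)
docker build -t registry.example.com/nfs-subdir-external-provisioner:custom .
docker push registry.example.com/nfs-subdir-external-provisioner:custom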

@yonatankahana (Contributor)

You can also use the pre-built RC image: quay.io/yonatankahana/nfs-subdir-external-provisioner:v4.0.3-rc2
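
If you go the RC-image route with the helm chart, a minimal sketch of overriding the image on an existing release (the release and repo names assume the standard install from the kubernetes-sigs helm repo, and the chart's usual image.repository/image.tag values):

# Swap the running release over to the RC image
helm upgrade nfs-subdir-external-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --reuse-values \
  --set image.repository=quay.io/yonatankahana/nfs-subdir-external-provisioner \
  --set image.tag=v4.0.3-rc2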

@rishithaminol

@yonatankahana What about golang 1.22.1? Have you tried that?

@yonatankahana (Contributor)

Not yet, I will try to find some time over the weekend.

@Starttoaster commented Jun 5, 2024

I'm currently in the process of forking this repo and making my own release to clear up my Trivy dashboard, as this is currently the most vulnerable thing running in my cluster, with 3 critical CVEs, and it is based on an end-of-service-life base OS image. It would be cool to get an update on whether this could get some first-party support time, though. I understand, of course, that many FOSS projects are maintained by one unpaid individual :) This one was just surprising to me, being under kubernetes-sigs. Is there a more actively maintained NFS subdir provisioner project out there?

I'll let you know how my fork goes, by the way. I'm switching to a distroless base image, removing the vendored dependencies directory (because why?), removing (hopefully) all of the replace statements in go.mod, updating all dependencies in go.mod, and updating to the latest Go for the build. It all still builds; I guess the question is whether it will just work as a replacement when I install it in my cluster. Time will tell on that.
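
For anyone wanting to do something similar, here is a rough sketch of that kind of go.mod cleanup, assuming a checkout of the provisioner source (the module path in the -dropreplace call is a placeholder; repeat it for each replace directive actually present in go.mod):

# Drop a replace directive (placeholder module path; repeat per directive in go.mod)
go mod edit -dropreplace=example.com/some/module
# Bump all dependencies and tidy the module graph
go get -u ./...
go mod tidy
# Stop vendoring and build from the module cache instead
rm -rf vendor/
go build ./...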

@Starttoaster commented Jun 6, 2024

Happy to say I have my fork of this project up and operational in my cluster. It was not exactly an easy task, at least for me, since I hadn't used the upstream https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner library this project depends on before. This went from being the most vulnerable thing running in my cluster, with 3 critical CVEs, 56 high CVEs, and an end-of-service-life base image, to no recorded CVEs in my fork (according to Trivy).

Updates:

  • I updated Golang to latest - this appeared to be a free update without consequences.
  • I switched to a distroless base image - this appeared to be a free change without consequences.
  • I updated the Kubernetes client-go (and related k8s toolkit) modules - this was an update that had consequences, requiring a major update of the sig-storage-lib-external-provisioner dependency from v6 (used by this repo) to v10 (the latest release of the upstream library). It also required some code changes, which seem fairly trivial imho, to satisfy the new interface of the sig-storage-lib-external-provisioner library, plus some RBAC changes in the helm chart. If the maintainers here have been hesitant to show this project any love recently, I assume this update is what they've been dreading figuring out.

The majority of this work happened in the following pull requests:

The first link will be hard to read, because I also deleted whole directories of things that I didn't need to build the image, like the vendors directory and the huge release helper directory; the release workflows I add to most of my projects seem to replace their function sufficiently.
This repo also contained 4 different Dockerfiles, which I reduced to 1, because in my opinion multiple Dockerfiles make it harder to recognize which one produced the image in the registry.
I also deleted the kustomize deploy option -- this is a change I'm somewhat okay with reversing, but I figured it would be easier for me to maintain alone with only one deployment option, and since I don't use kustomize as often, I chose to keep helm.

I can't say that I really recommend everyone switch to my fork; I won't do an amazing job of maintaining it, to be frank. But I can say that I have personally been very tired of seeing this stare at me from the top of my cluster vulnerability dashboard. It will be so much easier to maintain anyway, just using a distroless base image. So if you choose to migrate to my fork, beware that my version is intended to be little more than a stopgap until the upstream (this project) sees maintainership time again. And if the maintainers here want to peek at my fork for help with updating this repo to the newer dependencies, PLEASE, do not hesitate. What I will do, however, is try to keep on top of updates and CVEs, and read as well as respond to issues.

To update:
If you're a user of the helm chart here, there isn't anything you need to do other than switch from their helm repo to my helm repo and run helm upgrade. Disclaimer: I frankly don't know that I can recommend running this fork in production in a company setting, and I'll take NO responsibility for any service downtime caused by a bad change in my fork. What I can say, though, is: it works for me, and it is massively more up to date than this project is (at the time of writing).
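
As a purely illustrative sketch of that switch (the fork's helm repo URL and alias below are placeholders, not the real ones):

# Placeholder repo URL/alias; substitute the fork's actual helm repository
helm repo add nfs-provisioner-fork https://example.github.io/nfs-subdir-external-provisioner
helm repo update
helm upgrade nfs-subdir-external-provisioner \
  nfs-provisioner-fork/nfs-subdir-external-provisioner \
  --reuse-values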

@Starttoaster commented Jun 6, 2024

Speaking more specifically to the maintainers here: I could take some stabs at PRs to contribute these updates here, but basically my investment required updating everything, because pretty much everything is incredibly out of date. It would help me immensely if you reviewed my fork and let me know what you want me to carry over to this repo specifically. For example, I don't know if all those Dockerfiles are necessary for something I'm not aware of, since I replaced all of them with one, and the image's OS is where Trivy found most of its vulnerabilities.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Sep 5, 2024