
feat: allow csi-snapshotter to be enabled without installing CRDs #481

Merged

Conversation

@navilg (Contributor) commented Jul 13, 2023

What type of PR is this?

Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespaces from that line:

/kind bug

What this PR does / why we need it:

When the VolumeSnapshot, VolumeSnapshotClass, or VolumeSnapshotContent CRDs already exist on a K8s cluster, helm fails to install the chart if externalSnapshotter is enabled.

With these changes, externalSnapshotter can be enabled without trying to re-create those CRDs if they are already present in the cluster.

Which issue(s) this PR fixes:

Fixes #467

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

To deploy csi-snapshotter without installing any of the CRDs, install with the flag below:

--set externalSnapshotter.customResourceDefinitions.enabled=false
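Equivalently, the same toggle can live in a values file; a minimal sketch, assuming the key layout shown in the `--set` flag above:

```yaml
# values.yaml fragment (sketch; keys taken from the --set flag above)
externalSnapshotter:
  enabled: true              # keep deploying the csi-snapshotter sidecar
  customResourceDefinitions:
    enabled: false           # skip the VolumeSnapshot* CRDs already present on the cluster
```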

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/bug Categorizes issue or PR as related to a bug. labels Jul 13, 2023
@linux-foundation-easycla (bot) commented Jul 13, 2023

CLA Signed

The committers listed above are authorized under a signed CLA.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. label Jul 13, 2023
@k8s-ci-robot (Contributor)

Welcome @navilg!

It looks like this is your first PR to kubernetes-csi/csi-driver-nfs 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-csi/csi-driver-nfs has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jul 13, 2023
@k8s-ci-robot (Contributor)

Hi @navilg. Thanks for your PR.

I'm waiting for a kubernetes-csi member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added size/S Denotes a PR that changes 10-29 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Jul 13, 2023
@navilg (Contributor, Author) commented Jul 13, 2023

/easycla

@gclawes commented Jul 13, 2023

Is there ever a situation where only some of the snapshot CRDs should be enabled?

@wozniakjan (Member) left a comment

Is there ever a situation where only some of the snapshot CRDs should be enabled?

there are imho two scenarios:

  1. the user already has external-snapshotter installed for some other CSI driver
  • in this case, it makes sense to not install the CRDs and the resources related to the snapshot-controller
  2. the user doesn't have external-snapshotter at all
  • in this case, they will need all of the CRDs and the controller as well

I would be in favour of introducing just a single config knob for the snapshot-controller related resources and CRDs combined, wdyt?

@navilg (Contributor, Author) commented Jul 14, 2023

I would be in favour of introducing just a single config knob for the snapshot-controller related resources and CRDs combined, wdyt?

A cluster may have multiple CSI drivers from other third-party or cloud-native storage solutions. In those cases (scenario 1), the CRDs will already exist and helm install will fail, so a single config knob for the snapshot-controller and CRDs combined would be problematic here.

imo, CRDs should be deployed only if both external-snapshotter and CRDs are enabled. Instead of multiple knobs (one per CRD), we may look into having a single knob for all 3 CRDs, and we can keep the external snapshotter and CRDs enabled by default.
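The behaviour proposed above could be sketched as a single guard at the top of the CRD template (hypothetical values keys, not the final chart code):

```yaml
# crd-csi-snapshot.yaml (sketch; customResourceDefinitions.enabled is a hypothetical key here)
{{- if and .Values.externalSnapshotter.enabled .Values.externalSnapshotter.customResourceDefinitions.enabled }}
# ... VolumeSnapshot, VolumeSnapshotClass, and VolumeSnapshotContent CRD manifests ...
{{- end }}
```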

@gclawes commented Jul 14, 2023

Is there ever a situation where only some of the snapshot CRDs should be enabled?

My point is more why do we need a separate boolean value for each of the 3 CRDs? Why not a single one to enable/disable all 3 CRDs. I don't think there's ever a situation where a cluster admin would only install 2 of the 3 CRDs for snapshots, right?

Also, I'm getting a parse error with the latest version of the PR:

$ helm upgrade --install csi-driver-nfs ~/repos/github.com/navilg/csi-driver-nfs/charts/latest/csi-driver-nfs --namespace kube-system --version v0.0.0 -f helm/csi-driver-nfs-values.yaml
Release "csi-driver-nfs" does not exist. Installing it now.
Error: parse error at (csi-driver-nfs/templates/crd-csi-snapshot.yaml:2): "-"

for values:

externalSnapshotter:
  customResourceDefinitions:
    volumeSnapshot: false
    volumeSnapshotClass: false
    volumeSnapshotContent: false

@gclawes commented Jul 14, 2023

As far as the knobs go, I think it may be useful to decouple the CRDs and the snapshot-controller: you may want to install the CRDs at a different point in the cluster lifecycle than the actual controller (especially given that helm can make CRDs annoying: https://helm.sh/docs/chart_best_practices/custom_resource_definitions/).

But the original issue in #467 (clarified here) was the need to decouple:

  • The installation of the CRDs and the snapshot-controller Deployment
  • The csi-snapshotter sidecar in the csi-nfs-controller Deployment.

That way a cluster admin could install the CRDs and/or snapshot-controller separately from this chart, and still enable the csi-snapshotter sidecar in this chart.

Right now both the csi-snapshot-controller Deployment and the sidecar are controlled by .Values.externalSnapshotter.enabled:

{{- if .Values.externalSnapshotter.enabled -}}

{{- if .Values.externalSnapshotter.enabled }}
  - name: csi-snapshotter
    image: "{{ .Values.image.csiSnapshotter.repository }}:{{ .Values.image.csiSnapshotter.tag }}"
    args:
      - "--v=2"
      - "--csi-address=$(ADDRESS)"
      - "--leader-election-namespace={{ .Release.Namespace }}"
      - "--leader-election"
    env:
      - name: ADDRESS
        value: /csi/csi.sock
    imagePullPolicy: {{ .Values.image.csiSnapshotter.pullPolicy }}
    resources: {{- toYaml .Values.controller.resources.csiSnapshotter | nindent 12 }}
    volumeMounts:
      - name: socket-dir
        mountPath: /csi
{{- end }}
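A sketch of the decoupling described above, with a separate (hypothetical) knob for the snapshot-controller Deployment while the sidecar keeps its existing guard:

```yaml
# csi-snapshot-controller.yaml (sketch; .controller.enabled is a hypothetical key, not the chart's actual value)
{{- if and .Values.externalSnapshotter.enabled .Values.externalSnapshotter.controller.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: snapshot-controller
# ... rest of the snapshot-controller Deployment ...
{{- end }}
```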

@wozniakjan (Member) commented Jul 14, 2023

My point is more why do we need a separate boolean value for each of the 3 CRDs?

+1

But the original issue in #467 (clarified here) was the need to decouple:

  • The installation of the CRDs and the snapshot-controller Deployment
  • The csi-snapshotter sidecar in the csi-nfs-controller Deployment.

+1 and all of my thumbs are up

@navilg (Contributor, Author) commented Jul 14, 2023

My point is more why do we need a separate boolean value for each of the 3 CRDs? Why not a single one to enable/disable all 3 CRDs. I don't think there's ever a situation where a cluster admin would only install 2 of the 3 CRDs for snapshots, right?

Yeah. We can have one config to enable/disable all 3 CRDs instead of having them individually enabled/disabled.

@navilg (Contributor, Author) commented Jul 14, 2023

  • The csi-snapshotter sidecar in the csi-nfs-controller Deployment.

So a separate knob for CRDs (one for all three CRDs) and another knob for the csi-snapshotter sidecar. I think we can add this. Let me know if I should go ahead and try this, @wozniakjan @gclawes, or if you have any other alternatives.

@wozniakjan (Member) commented Jul 14, 2023

So a separate knob for CRDs (one for all three CRDs) and another knob for the csi-snapshotter sidecar

I would keep the sidecar controlled only by .externalSnapshotter.enabled. But please consider coupling the snapshot-controller with the external-snapshotter CRDs. If a cluster has these CRDs for another CSI driver, it likely has the controller as well.

I would just stick to the description in #467, imho it proposes ideal solutions.

@gclawes commented Jul 14, 2023

I think this works. Another option is to use the crds/ folder of helm3: https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#method-1-let-helm-do-it-for-you

That way, CRDs can be skipped with the helm install --skip-crds flag. Using this approach means updates to the CRDs must be applied manually though, as cautioned in the helm docs.

There is really no "good" way to do CRDs with helm, only least-bad ways depending on the situation (for example, the kube-prometheus-stack discussions below):
prometheus-community/helm-charts#2921
prometheus-community/helm-charts#2612

@kfox1111

If moved to the crds dir, make sure you have at least one release where the old CRD has the helm annotation that prevents it from being deleted when it's removed from helm manifest management.
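The annotation in question is Helm's resource-policy keep; a sketch of how it could be applied to one of the snapshot CRDs so Helm leaves the object in place once it moves out of Helm's management:

```yaml
# Sketch: mark a CRD so Helm does not delete it on uninstall or removal from the chart
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: volumesnapshots.snapshot.storage.k8s.io
  annotations:
    "helm.sh/resource-policy": keep   # Helm skips deleting resources carrying this annotation
```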

@gclawes commented Jul 14, 2023

If moved to the crds dir, make sure you have at least one release where the old CRD has the helm annotation that prevents it from being deleted when it's removed from helm manifest management.

Yeah, I think this would require very specific upgrade instructions to preserve CRD state without accidentally removing them.

@andyzhangx (Member)

My point is more why do we need a separate boolean value for each of the 3 CRDs?

+1

But the original issue in #467 (clarified here) was the need to decouple:

  • The installation of the CRDs and the snapshot-controller Deployment
  • The csi-snapshotter sidecar in the csi-nfs-controller Deployment.

+1 and all of my thumbs are up

Agreed, we only need one boolean for all 3 CRDs.

@k8s-ci-robot k8s-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Jul 15, 2023
@navilg (Contributor, Author) commented Jul 17, 2023

Replaced the three boolean values with a single boolean value to install (or not install) all 3 CRDs.

@andyzhangx (Member)

@navilg could you rebase to master branch and also squash all commits? thanks.

@k8s-ci-robot k8s-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Jul 18, 2023
@navilg force-pushed the 467-csi-snapshotter-without-crd branch from ecf9751 to 3d3bc3d on July 18, 2023 at 16:55
@navilg (Contributor, Author) commented Jul 18, 2023

@andyzhangx Done

@andyzhangx (Member)

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jul 18, 2023
@navilg force-pushed the 467-csi-snapshotter-without-crd branch from 77be759 to 7b636c8 on July 19, 2023 at 17:16
@kfox1111

Not sure if I need to file another issue for this or not...

Say I have the external snapshotter installed via https://artifacthub.io/packages/helm/piraeus-charts/snapshot-controller.

Even with this flag, there still seems to be some missing functionality needed to share a snapshot-controller like this.

https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/charts/v4.4.0/csi-driver-nfs/templates/csi-nfs-controller.yaml#L69-L85 still needs to be enabled, but the whole controller is also deployed in that case: https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/charts/v4.4.0/csi-driver-nfs/templates/csi-snapshot-controller.yaml#L1

@andyzhangx (Member)

Not sure if I need to file another issue for this or not...

Say I have the external snapshotter installed via https://artifacthub.io/packages/helm/piraeus-charts/snapshot-controller.

Even with this flag, there still seems to be some missing functionality needed to share a snapshot-controller like this.

https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/charts/v4.4.0/csi-driver-nfs/templates/csi-nfs-controller.yaml#L69-L85 still needs to be enabled, but the whole controller is also deployed in that case: https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/charts/v4.4.0/csi-driver-nfs/templates/csi-snapshot-controller.yaml#L1

@kfox1111 good catch, I will fix this after this PR is merged.

@andyzhangx andyzhangx changed the title #467 Allow csi-snapshotter to be enabled without installing CRDs feat: allow csi-snapshotter to be enabled without installing CRDs Jul 20, 2023
@andyzhangx (Member) left a comment

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 20, 2023
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andyzhangx, navilg

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 20, 2023
@k8s-ci-robot k8s-ci-robot merged commit 7f8802f into kubernetes-csi:master Jul 20, 2023
11 checks passed
@andyzhangx (Member) commented Jul 20, 2023

@kfox1111 would be fixed by #490
