
Use provisioner for existing NFS shares #169

Closed · abinet opened this issue Jan 11, 2022 · 17 comments

Labels: lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)

abinet commented Jan 11, 2022

Is it possible to use the provisioner for existing NFS shares?
I have an NFS server with folders already configured. All I need is to create PVs with server/path/mount options dynamically from a PVC, without creating any subfolders in the mounted volume. Is that possible with this provisioner?

vicyap commented Jan 26, 2022

Hi @abinet, I'm new to this project, but your question sounds very similar to something I was trying to do.

I'm curious: what is the use case where you need to create PVs with server/path/mount-options dynamically, instead of creating one once, statically, and sharing a PVC with all your pods?

abinet commented Jan 26, 2022

Hi @vicyap, thank you for the question.

I am aware of the discussions here:
kubernetes/kubernetes#60729
kubernetes/community#321 (comment)

However, it is all about responsibilities: creating PVs manually must be done by a cluster admin only, while creating PVs dynamically from a PVC can be done by a cluster user with namespace-limited permissions.
We cannot use plain NFS v1 Volumes because of specific mount options (ver=3, etc.), and as a cluster admin I don't want to create a PV every time somebody needs an existing NFS share. Instead, it would be great to just create a StorageClass with the necessary mountOptions and let users create PVCs, as sketched below.
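For illustration, the kind of StorageClass I have in mind would look roughly like this (class name, provisioner name and mount options are placeholders; whether the provisioner can then skip creating subfolders is exactly the open question):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: existing-nfs-share                                    # placeholder name
provisioner: cluster.local/nfs-subdir-external-provisioner    # placeholder provisioner name
mountOptions:
  - nfsvers=3                                                 # e.g. the NFS v3 option mentioned above
  - nolock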

@vicyap
Copy link

vicyap commented Jan 27, 2022

Is this project closer to what you're looking for? https://github.com/kubernetes-csi/csi-driver-nfs/tree/master/deploy/example

@abinet
Copy link
Author

abinet commented Jan 28, 2022

Unfortunately the CSI driver has the same limitation: for every PVC it creates a subfolder in the NFS server's shared folder and does not allow re-use of an existing one.

@gilesknap

Hi, I've come looking for the same feature and my use case is as follows:

I have torn down my entire cluster and rebuilt it because I'm experimenting with IaC. I can recreate my deployments from YAML in a git repo. But I cannot reconnect them to the existing PVCs that are still on my NFS server. I would like to be able to update my deployments to have a permanent shared folder name so that re-creating them from scratch connects them to the same data.

I believe this would match the behaviour of the existingClaim feature of grafana PVCs described here https://medium.com/@kevincoakley/reusable-persistent-volumes-with-the-existingclaim-option-for-the-grafana-prometheus-operator-84568b96315
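For reference, as far as I understand the Grafana charts, that existingClaim feature is just a values entry along these lines (claim name is a placeholder, and the exact key path may differ between charts):

grafana:
  persistence:
    enabled: true
    existingClaim: grafana-data   # pre-existing PVC that survives chart re-installs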

ppenzo commented Apr 1, 2022

I have the same need for both NFS and SMB protocols.

Whereas for SMB it is still a work in progress (see kubernetes-csi/csi-driver-smb#398), for NFS I've solved it by using the pathPattern parameter in the StorageClass definition and by mounting /persistentvolumes on an emptyDir volume in the provisioner.

vavdoshka commented Jun 25, 2022

@ppenzo can you please elaborate on your workaround?

I thought one could set customPath to an empty string to explicitly skip creating the directory, but that doesn't work; it will use a default name instead (implemented in #83):

// pathPattern is only honored when it renders to a non-empty string;
// otherwise the default path computed earlier is kept.
pathPattern, exists := options.StorageClass.Parameters["pathPattern"]
if exists {
    customPath := metadata.stringParser(pathPattern)
    if customPath != "" {
        path = filepath.Join(p.path, customPath)
        fullPath = filepath.Join(mountPath, customPath)
    }
}

ppenzo commented Jun 27, 2022

@vavdoshka:
In the provisioner deployment (see https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/blob/master/deploy/deployment.yaml), mount /persistentvolumes on an ephemeral volume, i.e. put

      volumes:
      - emptyDir: {}
        name: nfs-client-root

in the deployment.
Then define the corresponding StorageClass with these parameters:

parameters:
  archiveOnDelete: "false"
  pathPattern: ${.PVC.annotations.nfs.io/storage-path}

Then define the PVC with the annotation nfs.io/storage-path, referring to the path on the NFS server/filer referenced by your provisioner:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  annotations:
    nfs.io/storage-path: my_share/path/on/filer
  name: mypvc
spec:
  storageClassName: my_filer_sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

This way the provisioner, which AFAIK is not meant for using existing shares, creates/deletes the corresponding NFS path in the ephemeral volume and not on the filer. Nevertheless, as long as the path exists on the NFS server, everything works fine.

Obviously you need a separate storage class and provisioner for each NFS server/NAS filer but this shouldn't be an issue.
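Putting the pieces together, the StorageClass for this workaround might look roughly like the following; the provisioner name and mount options are placeholders, and note that a real StorageClass name cannot contain underscores, so the my_filer_sc name used above would need a hyphenated variant:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-filer-sc                                           # referenced by storageClassName in the PVC
provisioner: cluster.local/nfs-subdir-external-provisioner    # placeholder provisioner name
mountOptions:
  - nfsvers=3                                                 # placeholder per-filer mount options
reclaimPolicy: Retain                                         # keep data on the share when the PVC goes away
parameters:
  archiveOnDelete: "false"
  pathPattern: ${.PVC.annotations.nfs.io/storage-path}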

@vavdoshka

thanks @ppenzo

Interestingly enough, with the last officially released version of the provisioner, v4.0.2, this works for me without the "ephemeral volume" patch. The behavior is: if there is no nfs.io/storage-path annotation in the PVC, no namespaced directory gets created and the PV is mounted at the root. But that will no longer work starting with version 4.0.8 because of change b8e2036, so it seems the "ephemeral volume" patch will indeed be the only option until a dedicated option is implemented.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Sep 26, 2022
MatthewJSalerno commented Oct 16, 2022

Your (my) use case does work with the csi-driver-nfs provisioner if you use the static provisioning config. Their docs are pretty rough, but there's a decent example here.
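For anyone else landing here: static provisioning there means creating the PV yourself against the existing export and letting a PVC bind to it. A rough sketch, with server/share values as placeholders (see the csi-driver-nfs examples for the exact fields):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-share-pv                  # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain    # keep the existing data
  mountOptions:
    - nfsvers=3                            # whatever the share needs
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: nfs-server.example.com/exports/existing-share   # must be unique per PV
    volumeAttributes:
      server: nfs-server.example.com
      share: /exports/existing-share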

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Nov 15, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Dec 15, 2022
@plsnotracking

@ppenzo

      volumes:
      - emptyDir: {}
        name: nfs-client-root

Regarding this, how did you end up updating the volumes when deploying with the helm chart? I don't see any values for it exposed via values.yaml.

Or does anyone have a better way to do this? Thanks.

ppenzo commented Aug 22, 2024

@plsnotracking, I did not use the helm chart at all for the deploy.

@plsnotracking

From the pointers provided by @ppenzo and @vavdoshka I was finally able to arrive at this solution. I deployed the helm chart using ArgoCD. Also, the image used by the official chart has 3 CVEs, so I used another chart.

More info about the chart I used here: #330 (comment)

App.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nfs-provisioner
  namespace: argocd
  annotations:
    argocd.argoproj.io/resource.expiry: "1h"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  destination:
    namespace: nfs-provisioner
    name: target
  project: target
  sources:
    # Chart from Chart Repo
    - chart: nfs-subdir-external-provisioner
      repoURL: https://starttoaster.github.io/nfs-subdir-external-provisioner/
      targetRevision: 4.0.20
      helm:
        valueFiles:
        - $values/path/to/nfs-provisioner/values.yaml
    # Values from Git
    - repoURL: <repo>
      targetRevision: HEAD
      ref: values
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
      - SkipDryRunOnMissingResource=true

values.yaml

nfs:
  server: <ip>
  path: <addr>
  mountOptions:
  volumeName: nfs-fancy-name

# For creating the StorageClass automatically:
storageClass:
  # Set a StorageClass name
  # Ignored if storageClass.create is false
  name: nfs-fancy-name

  # Method used to reclaim an obsoleted volume
  reclaimPolicy: Delete

  # Specifies a template for creating a directory path via PVC metadata such as labels, annotations, name or namespace.
  # Ignored if value not set.
  pathPattern: ${.PVC.annotations.nfs.io/storage-path}

  # Set access mode - ReadWriteOnce, ReadOnlyMany or ReadWriteMany
  accessModes: ReadWriteMany

  # Set volume binding mode - Immediate or WaitForFirstConsumer
  volumeBindingMode: Immediate

pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: my-namespace
  name: my-pvc
  annotations:
    nfs.io/storage-path: <path/on/nfs/share>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: nfs-fancy-name
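A pod then consumes the claim as usual; a minimal sketch with placeholder names:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-test                    # placeholder pod name
  namespace: my-namespace
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "ls /data && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data          # contents of <path/on/nfs/share> show up here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc           # the PVC defined above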
