Using CSI to create a snapshot, the following log message appears: Remove Finalizer for PVC gaozh-test as it is not used by snapshots in creation #401

Closed
fantastic2085 opened this issue Oct 19, 2020 · 10 comments
Assignees
Labels
kind/documentation Categorizes issue or PR as related to documentation. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@fantastic2085

Ceph version: 14.2.10, Ceph CSI: 3.0.0

The snapshot creation starts as follows:
(screenshot)
But once the snapshot is complete, here is the picture:
(screenshot)

Here is the log information:
I1016 06:26:40.268336 1 util.go:147] storeObjectUpdate updating snapshot "syy/gaozh-test-snap2" with version 21641793
I1016 06:26:40.268349 1 snapshot_controller.go:157] synchronizing VolumeSnapshot[syy/gaozh-test-snap2]: bound to: "snapcontent-52e3a3e8-63ff-4c3a-a1c6-507de8b25bf3", Completed: true
I1016 06:26:40.268356 1 snapshot_controller.go:159] syncSnapshot [syy/gaozh-test-snap2]: check if we should remove finalizer on snapshot PVC source and remove it if we can
I1016 06:26:40.268367 1 snapshot_controller.go:820] checkandRemovePVCFinalizer for snapshot [gaozh-test-snap2]: snapshot status [&v1beta1.VolumeSnapshotStatus{BoundVolumeSnapshotContentName:(*string)(0xc0005b6410), CreationTime:(*v1.Time)(0xc000646d20), ReadyToUse:(*bool)(0xc0004598a6), RestoreSize:(*resource.Quantity)(0xc00064a440), Error:(*v1beta1.VolumeSnapshotError)(nil)}]
I1016 06:26:40.268390 1 snapshot_controller.go:780] Checking isPVCBeingUsed for snapshot [syy/gaozh-test-snap2]
I1016 06:26:40.268412 1 snapshot_controller.go:801] isPVCBeingUsed: no snapshot is being created from PVC syy/gaozh-test
I1016 06:26:40.268417 1 snapshot_controller.go:828] checkandRemovePVCFinalizer[gaozh-test-snap2]: Remove Finalizer for PVC gaozh-test as it is not used by snapshots in creation
I1016 06:26:40.277868 1 snapshot_controller.go:774] Removed protection finalizer from persistent volume claim gaozh-test
I1016 06:26:40.277912 1 snapshot_controller.go:175] syncSnapshot[syy/gaozh-test-snap2]: check if we should add finalizers on snapshot
I1016 06:26:40.277927 1 snapshot_controller.go:202] processFinalizersAndCheckandDeleteContent: Content [snapcontent-52e3a3e8-63ff-4c3a-a1c6-507de8b25bf3] deletion policy [Delete] is delete.
I1016 06:26:40.277939 1 snapshot_controller.go:209] syncSnapshot: VolumeSnapshot gaozh-test-snap2 is bound to volumeSnapshotContent [snapcontent-52e3a3e8-63ff-4c3a-a1c6-507de8b25bf3]
I1016 06:26:40.277950 1 snapshot_controller.go:326] syncReadySnapshot[syy/gaozh-test-snap2]: VolumeSnapshotContent "snapcontent-52e3a3e8-63ff-4c3a-a1c6-507de8b25bf3" found
I1016 06:27:02.220363 1 reflector.go:278] github.com/kubernetes-csi/external-snapshotter/pkg/client/informers/externalversions/factory.go:117: forcing resync
I1016 06:27:02.220605 1 snapshot_controller_base.go:190] enqueued "snapcontent-2d5f7d5a-256d-418c-abc4-672c5a313d61" for sync
I1016 06:27:02.220642 1 snapshot_controller_base.go:190] enqueued "snapcontent-52e3a3e8-63ff-4c3a-a1c6-507de8b25bf3" for sync
I1016 06:27:02.220663 1 snapshot_controller_base.go:270] contentWorker[snapcontent-2d5f7d5a-256d-418c-abc4-672c5a313d61]
I1016 06:27:02.220692 1 util.go:147] storeObjectUpdate updating content "snapcontent-2d5f7d5a-256d-418c-abc4-672c5a313d61" with version 21638851
I1016 06:27:02.220698 1 snapshot_controller_base.go:270] contentWorker[snapcontent-52e3a3e8-63ff-4c3a-a1c6-507de8b25bf3]
I1016 06:27:02.220731 1 util.go:147] storeObjectUpdate updating content "snapcontent-52e3a3e8-63ff-4c3a-a1c6-507de8b25bf3" with version 21641792
I1016 06:27:02.220750 1 snapshot_controller.go:86] synchronizing VolumeSnapshotContent[snapcontent-52e3a3e8-63ff-4c3a-a1c6-507de8b25bf3]: content is bound to snapshot syy/gaozh-test-snap2
I1016 06:27:02.220764 1 snapshot_controller.go:1288] getSnapshotFromStore: snapshot syy/gaozh-test-snap2 found
I1016 06:27:02.220777 1 snapshot_controller.go:910] needsUpdateSnapshotStatus[syy/gaozh-test-snap2]
I1016 06:27:02.220788 1 snapshot_controller.go:120] synchronizing VolumeSnapshotContent for snapshot [syy/gaozh-test-snap2]: update snapshot status to true if needed.
I1016 06:27:02.220710 1 snapshot_controller.go:86] synchronizing VolumeSnapshotContent[snapcontent-2d5f7d5a-256d-418c-abc4-672c5a313d61]: content is bound to snapshot syy/gaozh-test-snap
I1016 06:27:02.220826 1 snapshot_controller.go:1288] getSnapshotFromStore: snapshot syy/gaozh-test-snap found
I1016 06:27:02.220839 1 snapshot_controller.go:910] needsUpdateSnapshotStatus[syy/gaozh-test-snap]
I1016 06:27:02.220849 1 snapshot_controller.go:120] synchronizing VolumeSnapshotContent for snapshot [syy/gaozh-test-snap]: update snapshot status to true if needed.
I1016 06:27:02.220865 1 snapshot_controller_base.go:205] snapshotWorker[syy/gaozh-test-snap]
I1016 06:27:02.220875 1 snapshot_controller_base.go:208] snapshotWorker: snapshot namespace [syy] name [gaozh-test-snap]
I1016 06:27:02.220888 1 snapshot_controller_base.go:327] checkAndUpdateSnapshotClass [gaozh-test-snap]: VolumeSnapshotClassName [179rbd]
I1016 06:27:02.220897 1 snapshot_controller.go:1086] getSnapshotClass: VolumeSnapshotClassName [179rbd]
I1016 06:27:02.220906 1 snapshot_controller_base.go:346] VolumeSnapshotClass [179rbd] Driver [rbd.csi.ceph.com]
I1016 06:27:02.220917 1 snapshot_controller_base.go:219] passed checkAndUpdateSnapshotClass for snapshot "syy/gaozh-test-snap"
I1016 06:27:02.220927 1 snapshot_controller_base.go:356] updateSnapshot "syy/gaozh-test-snap"
I1016 06:27:02.220941 1 util.go:147] storeObjectUpdate updating snapshot "syy/gaozh-test-snap" with version 21638852

YAML of the PVC:
apiVersion: storage.k8s.io/v1
kind: PersistentVolumeClaim
metadata:
  name: gaozh-test
  namespace: syy
  selfLink: /api/v1/namespaces/syy/persistentvolumeclaims/gaozh-test
  uid: c57c7291-efd0-484f-898b-48b465b2dda2
  resourceVersion: '21657317'
  creationTimestamp: '2020-10-16T06:19:42Z'
  annotations:
    pv.kubernetes.io/bind-completed: 'yes'
    pv.kubernetes.io/bound-by-controller: 'yes'
    volume.beta.kubernetes.io/storage-provisioner: rbd.csi.ceph.com
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: pvc-c57c7291-efd0-484f-898b-48b465b2dda2
  storageClassName: 179rbd
  volumeMode: Filesystem
status:
  phase: Unused
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi

@xing-yang
Collaborator

Thanks for the detailed report. This works as designed. We only keep the PVC finalizer while a snapshot is being created from the PVC. Once the snapshot is ReadyToUse, we no longer need the PVC finalizer.
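To illustrate the check, here is a rough sketch of the decision that shows up in the log above ("isPVCBeingUsed" / "Remove Finalizer for PVC gaozh-test"). This is not the actual external-snapshotter source; the snapshotInfo type and its field names are invented for this example.

```go
package main

import "fmt"

// Rough sketch only; the type and field names are invented for illustration
// and are not the real external-snapshotter API objects.
type snapshotInfo struct {
	sourcePVC  string // PVC the snapshot was taken from
	readyToUse bool   // set once the storage system reports the snapshot as ready
}

// isPVCBeingUsed returns true while any snapshot taken from the PVC is still
// being created; only in that window does the PVC need the protection finalizer.
func isPVCBeingUsed(pvcName string, snapshots []snapshotInfo) bool {
	for _, snap := range snapshots {
		if snap.sourcePVC == pvcName && !snap.readyToUse {
			return true
		}
	}
	return false
}

func main() {
	snaps := []snapshotInfo{{sourcePVC: "gaozh-test", readyToUse: true}}
	// The snapshot is ReadyToUse, so the finalizer on PVC gaozh-test can be
	// removed, which matches the "Remove Finalizer for PVC gaozh-test" log line.
	fmt.Println(isPVCBeingUsed("gaozh-test", snaps)) // prints false
}
```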

@fantastic2085
Author

Thanks for the detailed report. This works as designed. We only keep the PVC finalizer while a snapshot is being created from the PVC. Once the snapshot is ReadyToUse, we no longer need the PVC finalizer.

Are you saying that this behavior is expected?

@fantastic2085
Author

If this is by design, then when I delete the volume, I would expect the snapshots associated with that volume to be deleted as well. But that is not what happens right now: the volume has been deleted, yet the snapshot corresponding to that volume is still there.

@fantastic2085
Author

I hope you can give me a more detailed answer.

@xing-yang
Collaborator

See this issue here: #115 (comment)

There are storage systems whose snapshots have life cycles independent of their volumes; for those, you can delete a volume that still has snapshots taken from it. There are other storage systems whose snapshots depend on their volumes, so deleting a volume that still has dependent snapshots will fail.

We need a way to handle this consistently, and the recommendation is for CSI drivers to handle it internally. If a storage system's snapshots depend on their source volumes, the CSI driver should do a soft delete when it gets a request to delete a volume that still has snapshots, and keep a reference count internally. Once all of the snapshots are deleted, the CSI driver should then delete that volume from the storage system.
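To make the idea concrete, here is a minimal in-memory sketch of the soft-delete plus reference-count bookkeeping described above. It is an illustration only, not code from any real CSI driver; the store and volume types and their methods are invented for this example.

```go
package main

import "fmt"

// volume is a toy stand-in for a backend volume that tracks which snapshots
// were taken from it and whether the user has already asked to delete it.
type volume struct {
	snapshots   map[string]bool // snapshot IDs taken from this volume
	softDeleted bool            // delete was requested while snapshots still existed
}

// store is a toy in-memory "storage system".
type store struct {
	volumes map[string]*volume
}

// DeleteVolume only marks the volume as deleted if snapshots still depend on it;
// otherwise it removes the volume for real. It is idempotent.
func (s *store) DeleteVolume(volID string) {
	v, ok := s.volumes[volID]
	if !ok {
		return // already gone
	}
	if len(v.snapshots) > 0 {
		v.softDeleted = true // keep the backing volume until its last snapshot is gone
		return
	}
	delete(s.volumes, volID)
}

// DeleteSnapshot removes one snapshot and, if the source volume was soft-deleted
// and this was its last snapshot, finishes deleting the volume as well.
func (s *store) DeleteSnapshot(volID, snapID string) {
	v, ok := s.volumes[volID]
	if !ok {
		return
	}
	delete(v.snapshots, snapID)
	if v.softDeleted && len(v.snapshots) == 0 {
		delete(s.volumes, volID)
	}
}

func main() {
	s := &store{volumes: map[string]*volume{
		"vol-1": {snapshots: map[string]bool{"snap-1": true}},
	}}
	s.DeleteVolume("vol-1")             // a snapshot still exists: only soft-deleted
	s.DeleteSnapshot("vol-1", "snap-1") // last snapshot removed: volume is really deleted now
	fmt.Println(len(s.volumes))         // prints 0
}
```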

Let me mark this as a Documentation issue and I'll add some content in the CSI driver doc on how to handle this.

@xing-yang xing-yang added the kind/documentation Categorizes issue or PR as related to documentation. label Oct 19, 2020
@xing-yang xing-yang assigned xing-yang and unassigned xing-yang Oct 19, 2020
@fantastic2085
Author

Thanks for your patient answer. I mainly use Kubernetes with Ceph. From Ceph's perspective, block devices created by CSI cannot be deleted while they still have snapshots associated with them. If possible, I sincerely hope you can explain how this case is handled for Ceph.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 18, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 17, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
