[cinder-csi-plugin] filesystem of a larger volume created from a snapshot/volume is not expanded #1539

Closed
alibo opened this issue May 23, 2021 · 21 comments · Fixed by #1563
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@alibo
Contributor

alibo commented May 23, 2021

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

If a new PVC is created from a snapshot or another volume and its requested size is greater than that of the source volume/snapshot, its filesystem won't be resized.

What you expected to happen:

The filesystem of the new/destination volume should be expanded to the block device size.

How to reproduce it:

Source PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: source-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: my-storage-class

Cloned PVC with increased requested storage:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  accessModes:
  - ReadWriteOnce
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc
  resources:
    requests:
      storage: 2Gi # <-- increased from 1G to 2G
  storageClassName: my-storage-class

Anything else we need to know?:

It can be worked around by increasing the requested storage of the cloned PVC after it's created, but I think the easier option for a typical k8s user is for the driver to expand the filesystem implicitly in the NodePublishVolume or NodeStageVolume method when required.
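For reference, a hedged sketch of that manual workaround (PVC name from the example above; the 3Gi target is arbitrary):

kubectl patch pvc cloned-pvc --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"3Gi"}}}}'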

Environment:

  • cinder-csi-plugin version: 1.20
  • OpenStack version: Rocky
  • K8s/OpenShift: 1.20/4.7
@k8s-ci-robot added the kind/bug label May 23, 2021
@ramineni
Contributor

@alibo please share the logs after increasing the verbosity to 5. Add --v=5 here: https://github.com/kubernetes/cloud-provider-openstack/blob/master/manifests/cinder-csi-plugin/cinder-csi-controllerplugin.yaml#L106
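For context, the relevant container spec in that manifest looks roughly like this (abridged; image tag and other args illustrative):

- name: cinder-csi-plugin
  image: docker.io/k8scloudprovider/cinder-csi-plugin:latest
  args:
    - /bin/cinder-csi-plugin
    - --endpoint=$(CSI_ENDPOINT)
    - --cloud-config=$(CLOUD_CONFIG)
    - --cluster=$(CLUSTER_NAME)
    - --v=5 # <-- added for verbose logging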

We do pass the larger size requested in the PVC to the CreateVolume call, so the same should be passed to the Cinder backend. Ideally it should create a volume of the larger size if supported. Do you see any errors?

@alibo
Contributor Author

alibo commented May 24, 2021

We do pass the larger size requested in the PVC to the CreateVolume call, so the same should be passed to the Cinder backend. Ideally it should create a volume of the larger size if supported. Do you see any errors?

That's true: Cinder creates a larger volume, and when it's attached to the node it has the right size when I check with lsblk.

But when it's mounted by cinder-csi-plugin, its filesystem (ext4 in this case) is not resized by resize2fs (I've checked with df -h inside both the node and the pod, and it still has the size of the source volume).

The reason is that NodeExpandVolume is not called by kubelet, since both the PV and the PVC of the new volume already have the correct (larger) size.
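This is easy to confirm from the API objects (PV name shortened to a placeholder):

kubectl get pvc cloned-pvc -o jsonpath='{.status.capacity.storage}' # 2Gi
kubectl get pv <pv-name> -o jsonpath='{.spec.capacity.storage}'     # 2Gi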

@alibo
Contributor Author

alibo commented May 24, 2021

I can also share the logs if it's needed.

@jichenjc
Contributor

Yes, logs would be helpful.

A general question: if "NodeExpandVolume is not called by kubelet since both the PV and the PVC of the new volume already have the correct size" is true, then shouldn't this be a generic issue in the Kubernetes code rather than something specific to the CPO CSI driver?

@ramineni
Contributor

We do pass the larger size requested in the PVC to the CreateVolume call, so the same should be passed to the Cinder backend. Ideally it should create a volume of the larger size if supported. Do you see any errors?

That's true: Cinder creates a larger volume, and when it's attached to the node it has the right size when I check with lsblk.

But when it's mounted by cinder-csi-plugin, its filesystem (ext4 in this case) is not resized by resize2fs (I've checked with df -h inside both the node and the pod, and it still has the size of the source volume).

The reason is that NodeExpandVolume is not called by kubelet, since both the PV and the PVC of the new volume already have the correct (larger) size.

@alibo I'm not sure I understood the problem correctly. A cloned volume is an altogether new volume that is already created with the increased size (not an existing volume whose size is changed). Volume expansion is needed only if the size changes after creation, which is not the case here. So there shouldn't be a need for NodeExpandVolume to be called: the new volume is already created at the larger size, and it can be attached to the node without NodeExpandVolume.

@alibo
Contributor Author

alibo commented May 25, 2021

then should this be a generic issue to kubernetes code, not CPO CSI independent?

@jichenjc IMHO, it depends on how we see the issue:

  1. If we consider it a generic issue in all CSI drivers, it can be resolved either in the CSI components (external-provisioner or external-resizer) or in kubelet itself.
  2. If we consider it an implementation-specific issue (in this case, Cinder doesn't resize the filesystem itself, which is reasonable to me since it only cares about block-level resizing, not filesystem-level resizing), then it can be addressed in the driver itself.
  3. Don't allow users to create a volume from another volume or a snapshot with a larger size. This could be implemented in the CSI components, cinder-csi-plugin, or even the k8s API server (see the sketch below).

I think the maintainers of this project can decide :)
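For option 3, a hedged sketch of what such a guard could look like in CreateVolume (requestedSizeGiB and sourceSizeGiB are placeholder variables; the CSI spec explicitly allows refusing these requests with OUT_OF_RANGE):

// Sketch only: reject clones/restores that request more space than the
// source, instead of resizing the filesystem later.
if req.GetVolumeContentSource() != nil && requestedSizeGiB > sourceSizeGiB {
	return nil, status.Error(codes.OutOfRange,
		"requested size is larger than the source volume/snapshot")
}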


@ramineni Ok, let me give you an example and share the related logs:

  1. Let's assume we already created a PVC and the volume is also created in cinder (1G - everything is normal!)

PVC: (1G)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"source-pvc","namespace":"monitoring"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"block-storage-standard"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: cinder.csi.openstack.org
  creationTimestamp: "2021-05-25T06:10:49Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: source-pvc
  namespace: monitoring
  resourceVersion: "37061644"
  selfLink: /api/v1/namespaces/monitoring/persistentvolumeclaims/source-pvc
  uid: 637f9cf2-0abd-4924-bf46-4d6bc344a509
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: block-storage-standard
  volumeMode: Filesystem
  volumeName: pvc-637f9cf2-0abd-4924-bf46-4d6bc344a509
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound

PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: cinder.csi.openstack.org
  creationTimestamp: "2021-05-25T06:10:50Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-637f9cf2-0abd-4924-bf46-4d6bc344a509
  resourceVersion: "37061641"
  selfLink: /api/v1/persistentvolumes/pvc-637f9cf2-0abd-4924-bf46-4d6bc344a509
  uid: e0df3c80-5d22-4eeb-8210-a34f4667b8fe
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: source-pvc
    namespace: monitoring
    resourceVersion: "37061613"
    uid: 637f9cf2-0abd-4924-bf46-4d6bc344a509
  csi:
    driver: cinder.csi.openstack.org
    fsType: ext4
    volumeAttributes:
      storage.kubernetes.io/csiProvisionerIdentity: 1621922914419-8081-cinder.csi.openstack.org
    volumeHandle: 7b4d2908-b022-47e2-91ef-4429fde91f26
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.cinder.csi.openstack.org/zone
          operator: In
          values:
          - nova
  persistentVolumeReclaimPolicy: Delete
  storageClassName: block-storage-standard
  volumeMode: Filesystem
status:
  phase: Bound

Cinder:

openstack volume show pvc-637f9cf2-0abd-4924-bf46-4d6bc344a509
+--------------------------------+---------------------------------------------------------+
| Field                          | Value                                                   |
+--------------------------------+---------------------------------------------------------+
| attachments                    | []                                                      |
| availability_zone              | nova                                                    |
| bootable                       | false                                                   |
| consistencygroup_id            | None                                                    |
| created_at                     | 2021-05-25T06:10:50.000000                              |
| description                    | Created by OpenStack Cinder CSI driver                  |
| encrypted                      | False                                                   |
| id                             | 7b4d2908-b022-47e2-91ef-4429fde91f26                    |
| migration_status               | None                                                    |
| multiattach                    | False                                                   |
| name                           | pvc-637f9cf2-0abd-4924-bf46-4d6bc344a509                |
| os-vol-host-attr:host          | hostgroup@okd-replica-3-standard#okd-replica-3-standard |
| os-vol-mig-status-attr:migstat | None                                                    |
| os-vol-mig-status-attr:name_id | None                                                    |
| os-vol-tenant-attr:tenant_id   | bff078a1683d405d8e3f27d22d26df12                        |
| properties                     | cinder.csi.openstack.org/cluster='kubernetes'           |
| replication_status             | None                                                    |
| size                           | 1                                                       |
| snapshot_id                    | None                                                    |
| source_volid                   | None                                                    |
| status                         | available                                               |
| type                           | okd-block-storage-standard                              |
| updated_at                     | 2021-05-25T06:10:50.000000                              |
| user_id                        | c92a8b81a2fc445081d40b94384e0cd0                        |
+--------------------------------+---------------------------------------------------------+
  2. We use it in a pod and write a sample file. (The source volume must already be formatted; otherwise the cloned volume, still being a raw disk, gets formatted on first mount and its filesystem ends up with the correct size. In other words, we couldn't reproduce the issue!)
/prometheus $ df -h | grep /dev/vdf
/dev/vdf                975.9M      2.5M    957.4M   0% /prometheus

/prometheus $ echo "test" > /prometheus/test.txt
/prometheus $ cat /prometheus/test.txt
test
  3. Now we clone the PVC with more requested storage (2G):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  accessModes:
  - ReadWriteOnce
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc
  resources:
    requests:
      storage: 2Gi
  storageClassName: block-storage-standard
  4. The new volume is created with 2G in size:

PVC (2G)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"cloned-pvc","namespace":"monitoring"},"spec":{"accessModes":["ReadWriteOnce"],"dataSource":{"kind":"PersistentVolumeClaim","name":"source-pvc"},"resources":{"requests":{"storage":"2Gi"}},"storageClassName":"block-storage-standard"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: cinder.csi.openstack.org
  creationTimestamp: "2021-05-25T10:26:55Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: cloned-pvc
  namespace: monitoring
  resourceVersion: "37196636"
  selfLink: /api/v1/namespaces/monitoring/persistentvolumeclaims/cloned-pvc
  uid: 4f5fe6f8-e2ae-49a9-8767-e613b4728982
spec:
  accessModes:
  - ReadWriteOnce
  dataSource:
    apiGroup: null
    kind: PersistentVolumeClaim
    name: source-pvc
  resources:
    requests:
      storage: 2Gi
  storageClassName: block-storage-standard
  volumeMode: Filesystem
  volumeName: pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  phase: Bound

PV (2G)

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: cinder.csi.openstack.org
  creationTimestamp: "2021-05-25T10:26:59Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982
  resourceVersion: "37196634"
  selfLink: /api/v1/persistentvolumes/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982
  uid: ed5637ba-125d-4e80-a26b-ec9935ae61bc
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: cloned-pvc
    namespace: monitoring
    resourceVersion: "37196597"
    uid: 4f5fe6f8-e2ae-49a9-8767-e613b4728982
  csi:
    driver: cinder.csi.openstack.org
    fsType: ext4
    volumeAttributes:
      storage.kubernetes.io/csiProvisionerIdentity: 1621937257931-8081-cinder.csi.openstack.org
    volumeHandle: 17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.cinder.csi.openstack.org/zone
          operator: In
          values:
          - nova
  persistentVolumeReclaimPolicy: Delete
  storageClassName: block-storage-standard
  volumeMode: Filesystem
status:
  phase: Bound

Cinder:

+--------------------------------+---------------------------------------------------------+
| Field                          | Value                                                   |
+--------------------------------+---------------------------------------------------------+
| attachments                    | []                                                      |
| availability_zone              | nova                                                    |
| bootable                       | false                                                   |
| consistencygroup_id            | None                                                    |
| created_at                     | 2021-05-25T10:26:58.000000                              |
| description                    | Created by OpenStack Cinder CSI driver                  |
| encrypted                      | False                                                   |
| id                             | 17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4                    |
| migration_status               | None                                                    |
| multiattach                    | False                                                   |
| name                           | pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982                |
| os-vol-host-attr:host          | hostgroup@okd-replica-3-standard#okd-replica-3-standard |
| os-vol-mig-status-attr:migstat | None                                                    |
| os-vol-mig-status-attr:name_id | None                                                    |
| os-vol-tenant-attr:tenant_id   | bff078a1683d405d8e3f27d22d26df12                        |
| properties                     | cinder.csi.openstack.org/cluster='kubernetes'           |
| replication_status             | None                                                    |
| size                           | 2                                                       |
| snapshot_id                    | None                                                    |
| source_volid                   | 7b4d2908-b022-47e2-91ef-4429fde91f26                    |
| status                         | available                                               |
| type                           | okd-block-storage-standard                              |
| updated_at                     | 2021-05-25T10:26:59.000000                              |
| user_id                        | c92a8b81a2fc445081d40b94384e0cd0                        |
+--------------------------------+---------------------------------------------------------+

csi-external-provisioner logs:

I0525 10:26:55.503759       1 controller.go:1332] provision "monitoring/cloned-pvc" class "block-storage-standard": started
I0525 10:26:55.504274       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"monitoring", Name:"cloned-pvc", UID:"4f5fe6f8-e2ae-49a9-8767-e613b4728982", APIVersion:"v1", ResourceVersion:"37196597", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "monitoring/cloned-pvc"
I0525 10:26:59.040417       1 controller.go:1439] provision "monitoring/cloned-pvc" class "block-storage-standard": volume "pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982" provisioned
I0525 10:26:59.040464       1 controller.go:1456] provision "monitoring/cloned-pvc" class "block-storage-standard": succeeded
I0525 10:26:59.060545       1 controller.go:1332] provision "monitoring/cloned-pvc" class "block-storage-standard": started
I0525 10:26:59.060585       1 controller.go:1341] provision "monitoring/cloned-pvc" class "block-storage-standard": persistentvolume "pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982" already exists, skipping
I0525 10:26:59.060600       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"monitoring", Name:"cloned-pvc", UID:"4f5fe6f8-e2ae-49a9-8767-e613b4728982", APIVersion:"v1", ResourceVersion:"37196597", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982

cinder-csi-plugin controller logs:

I0525 10:26:55.513919       1 utils.go:100] GRPC call: /csi.v1.Controller/CreateVolume
I0525 10:26:55.513977       1 utils.go:101] GRPC request: name:"pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982" capacity_range:<required_bytes:2147483648 > volume_capabilities:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > parameters:<key:"csi.storage.k8s.io/pv/name" value:"pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982" > parameters:<key:"csi.storage.k8s.io/pvc/name" value:"cloned-pvc" > parameters:<key:"csi.storage.k8s.io/pvc/namespace" value:"monitoring" > parameters:<key:"type" value:"okd-block-storage-standard" > volume_content_source:<volume:<volume_id:"7b4d2908-b022-47e2-91ef-4429fde91f26" > > accessibility_requirements:<requisite:<segments:<key:"topology.cinder.csi.openstack.org/zone" value:"nova" > > preferred:<segments:<key:"topology.cinder.csi.openstack.org/zone" value:"nova" > > >
I0525 10:26:55.514208       1 controllerserver.go:44] CreateVolume: called with args {Name:pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982 CapacityRange:required_bytes:2147483648  VolumeCapabilities:[mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[csi.storage.k8s.io/pv/name:pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982 csi.storage.k8s.io/pvc/name:cloned-pvc csi.storage.k8s.io/pvc/namespace:monitoring type:okd-block-storage-standard] Secrets:map[] VolumeContentSource:volume:<volume_id:"7b4d2908-b022-47e2-91ef-4429fde91f26" >  AccessibilityRequirements:requisite:<segments:<key:"topology.cinder.csi.openstack.org/zone" value:"nova" > > preferred:<segments:<key:"topology.cinder.csi.openstack.org/zone" value:"nova" > >  XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0525 10:26:59.039580       1 controllerserver.go:135] CreateVolume: Successfully created volume 17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4 in Availability Zone: nova of size 2 GiB
I0525 10:26:59.039666       1 utils.go:106] GRPC response: volume:<capacity_bytes:2147483648 volume_id:"17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4" content_source:<volume:<volume_id:"7b4d2908-b022-47e2-91ef-4429fde91f26" > > accessible_topology:<segments:<key:"topology.cinder.csi.openstack.org/zone" value:"nova" > > >
  5. Now we replace source-pvc with cloned-pvc in the pod's spec.

df -h inside pod:

/prometheus $ df -h | grep vdf
/dev/vdf                975.9M      2.5M    957.4M   0% /prometheus
/prometheus $ cat /prometheus/test.txt
test

Cinder:

openstack volume show pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982 -c status -c size -c name
+--------+------------------------------------------+
| Field  | Value                                    |
+--------+------------------------------------------+
| name   | pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982 |
| size   | 2                                        |
| status | in-use                                   |
+--------+------------------------------------------+

Inside node:

sudo df -h | grep pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982
/dev/vdf        976M  2.6M  958M   1% /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount

sudo lsblk | grep pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982
vdf    252:80   0    2G  0 disk /var/lib/kubelet/pods/f4e7ca68-9608-486d-9ff3-3e17e6dda990/volumes/kubernetes.io~csi/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/mount

cinder-csi-plugin node server logs:

I0525 10:33:55.750287       1 utils.go:100] GRPC call: /csi.v1.Node/NodeStageVolume
I0525 10:33:55.750319       1 utils.go:101] GRPC request: volume_id:"17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4" publish_context:<key:"DevicePath" value:"/dev/vdf" > staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount" volume_capability:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1621937257931-8081-cinder.csi.openstack.org" >
I0525 10:33:55.750444       1 nodeserver.go:336] NodeStageVolume: called with args {VolumeId:17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4 PublishContext:map[DevicePath:/dev/vdf] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1621937257931-8081-cinder.csi.openstack.org] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0525 10:33:55.916020       1 utils.go:100] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0525 10:33:55.916066       1 utils.go:101] GRPC request:
I0525 10:33:55.916146       1 nodeserver.go:454] NodeGetCapabilities called with req: &csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0525 10:33:55.916169       1 utils.go:106] GRPC response: capabilities:<rpc:<type:STAGE_UNSTAGE_VOLUME > > capabilities:<rpc:<type:EXPAND_VOLUME > > capabilities:<rpc:<type:GET_VOLUME_STATS > >
I0525 10:33:55.917564       1 utils.go:100] GRPC call: /csi.v1.Node/NodeGetVolumeStats
I0525 10:33:55.917590       1 utils.go:101] GRPC request: volume_id:"f0b9b361-8e01-4e10-8888-e07a2f1c5a76" volume_path:"/var/lib/kubelet/pods/08b39976-4c6c-4a89-8bf3-ecc025a146fa/volumes/kubernetes.io~csi/pvc-4bc15865-c68f-4269-9bbb-ec10f5e9f4ab/mount"
I0525 10:33:55.917648       1 nodeserver.go:462] NodeGetVolumeStats: called with args {VolumeId:f0b9b361-8e01-4e10-8888-e07a2f1c5a76 VolumePath:/var/lib/kubelet/pods/08b39976-4c6c-4a89-8bf3-ecc025a146fa/volumes/kubernetes.io~csi/pvc-4bc15865-c68f-4269-9bbb-ec10f5e9f4ab/mount StagingTargetPath: XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0525 10:33:55.917717       1 utils.go:106] GRPC response: usage:<available:1003843584 total:1023303680 used:2682880 unit:BYTES > usage:<available:65513 total:65536 used:23 unit:INODES >
I0525 10:33:56.157852       1 mount.go:171] Found disk attached as "virtio-17fb58ba-bcf0-4dc2-b"; full devicepath: /dev/disk/by-id/virtio-17fb58ba-bcf0-4dc2-b
I0525 10:33:56.157970       1 mount_linux.go:405] Attempting to determine if disk "/dev/disk/by-id/virtio-17fb58ba-bcf0-4dc2-b" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/disk/by-id/virtio-17fb58ba-bcf0-4dc2-b])
I0525 10:33:56.221151       1 mount_linux.go:408] Output: "DEVNAME=/dev/disk/by-id/virtio-17fb58ba-bcf0-4dc2-b\nTYPE=ext4\n", err: <nil>
I0525 10:33:56.221230       1 mount_linux.go:298] Checking for issues with fsck on disk: /dev/disk/by-id/virtio-17fb58ba-bcf0-4dc2-b
I0525 10:33:56.580565       1 mount_linux.go:394] Attempting to mount disk /dev/disk/by-id/virtio-17fb58ba-bcf0-4dc2-b in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount
I0525 10:33:56.580694       1 mount_linux.go:146] Mounting cmd (mount) with arguments (-t ext4 -o defaults /dev/disk/by-id/virtio-17fb58ba-bcf0-4dc2-b /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount)
I0525 10:33:56.634902       1 utils.go:106] GRPC response:
I0525 10:33:56.964651       1 utils.go:100] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0525 10:33:56.964690       1 utils.go:101] GRPC request:
I0525 10:33:56.964743       1 nodeserver.go:454] NodeGetCapabilities called with req: &csi.NodeGetCapabilitiesRequest{XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}
I0525 10:33:56.964766       1 utils.go:106] GRPC response: capabilities:<rpc:<type:STAGE_UNSTAGE_VOLUME > > capabilities:<rpc:<type:EXPAND_VOLUME > > capabilities:<rpc:<type:GET_VOLUME_STATS > >
I0525 10:33:56.975079       1 utils.go:100] GRPC call: /csi.v1.Node/NodePublishVolume
I0525 10:33:56.975142       1 utils.go:101] GRPC request: volume_id:"17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4" publish_context:<key:"DevicePath" value:"/dev/vdf" > staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount" target_path:"/var/lib/kubelet/pods/f4e7ca68-9608-486d-9ff3-3e17e6dda990/volumes/kubernetes.io~csi/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/mount" volume_capability:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1621937257931-8081-cinder.csi.openstack.org" >
I0525 10:33:56.975335       1 nodeserver.go:50] NodePublishVolume: called with args {VolumeId:17fb58ba-bcf0-4dc2-bd11-1aa21cd996c4 PublishContext:map[DevicePath:/dev/vdf] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount TargetPath:/var/lib/kubelet/pods/f4e7ca68-9608-486d-9ff3-3e17e6dda990/volumes/kubernetes.io~csi/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/mount VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1621937257931-8081-cinder.csi.openstack.org] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0525 10:33:57.356742       1 mount_linux.go:146] Mounting cmd (mount) with arguments (-t ext4 -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount /var/lib/kubelet/pods/f4e7ca68-9608-486d-9ff3-3e17e6dda990/volumes/kubernetes.io~csi/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/mount)
I0525 10:33:57.360361       1 mount_linux.go:146] Mounting cmd (mount) with arguments (-t ext4 -o bind,remount,rw /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/globalmount /var/lib/kubelet/pods/f4e7ca68-9608-486d-9ff3-3e17e6dda990/volumes/kubernetes.io~csi/pvc-4f5fe6f8-e2ae-49a9-8767-e613b4728982/mount)

Please, let me know if you need any more information :)

@lingxiankong
Contributor

lingxiankong commented May 25, 2021

Now we replace source-pvc with cloned-pvc in the pod's spec.

I think this is the missing part from the original description.

@ramineni @jichenjc we may need to walk through the process here in cinder CSI and see if the filesystem resize is skipped for some reason.

@m-yosefpor
Contributor

TL;DR:

  1. Create a 1Gi pvc-a -> 1Gi block, 1Gi filesystem.
  2. Create a 2Gi pvc-b cloned from pvc-a -> 2Gi block, 1Gi filesystem.

Someone with node access has to run resize2fs manually in such cases, which is not desirable at all.
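For illustration, the manual fix looks like this on the node (device path taken from the logs above):

sudo resize2fs /dev/vdf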

@lingxiankong
Contributor

TL;DR:

  1. Create a 1Gi pvc-a -> 1Gi block, 1Gi filesystem.
  2. Create a 2Gi pvc-b cloned from pvc-a -> 2Gi block, 1Gi filesystem.

Someone with node access has to run resize2fs manually in such cases, which is not desirable at all.

@m-yosefpor You probably didn't read my comment above.

@alibo
Contributor Author

alibo commented May 26, 2021

Now we replace source-pvc with cloned-pvc in the pod's spec.

I think this is the missing part from the original description.

@lingxiankong I just changed the name of the PVC in the volumes section and restarted the pod:

      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: cloned-pvc # <-- it was `source-pvc`

But the issue is here:

https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/volume/util/operationexecutor/operation_generator.go#L1740-L1743

Kubelet doesn't know the new volume was created from another volume with a different size; it just compares the storage recorded on the PV and the PVC. As you can see in the example I mentioned, both the PV and the PVC have the same size, so it doesn't call NodeExpandVolume while mounting the volume.
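Paraphrasing that check as a standalone sketch (not the verbatim kubelet code):

package sketch

import v1 "k8s.io/api/core/v1"

// requiresFSResize mirrors the comparison kubelet makes before queuing
// NodeExpandVolume: it only fires when the PVC records less capacity
// than the PV.
func requiresFSResize(pvc *v1.PersistentVolumeClaim, pv *v1.PersistentVolume) bool {
	pvcCap := pvc.Status.Capacity[v1.ResourceStorage]
	pvCap := pv.Spec.Capacity[v1.ResourceStorage]
	// For the clone above, both already read 2Gi, so this returns false
	// and the filesystem is never grown.
	return pvcCap.Cmp(pvCap) < 0
}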

@alibo
Contributor Author

alibo commented May 26, 2021

One solution is setting the requested storage of the new PV to the size of the source PV in external-provisioner, so it can be expanded after it's mounted:

https://github.com/kubernetes-csi/external-provisioner/blob/1b9d55dd8c7d8c076abeea1039d261723e2d6efa/pkg/controller/controller.go#L812-L814
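Purely as an illustration of the idea (bytesToQuantity, rep, createdFromSource and sourceCapacity are hypothetical names; this is not the provisioner's actual code):

// Today the PV capacity mirrors what CreateVolume returned (2Gi for the clone):
capacity := bytesToQuantity(rep.Volume.CapacityBytes)
// The workaround would record the source's size (1Gi) instead, so the
// resize machinery sees a gap to close:
if createdFromSource {
	capacity = sourceCapacity
}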

But it sounds like a hacky workaround to me, since the requested storage in the PV should reflect the size of the block volume, not the filesystem.

@alibo
Contributor Author

alibo commented May 26, 2021

The CSI spec says the CSI driver should decide whether to reject the request or resize the filesystem itself:

container-storage-interface/spec#452

If CO requests a volume to be created from existing snapshot or volume and the requested size of the volume is larger than the original snapshotted (or cloned volume), the Plugin can either refuse such a call with OUT_OF_RANGE error or MUST provide a volume that, when presented to a workload by NodePublish call, has both the requested (larger) size and contains data from the snapshot (or original volume). Explicitly, it's the responsibility of the Plugin to resize the filesystem of the newly created volume at (or before) the NodePublish call, if the volume has VolumeCapability access type MountVolume and the filesystem resize is required in order to provision the requested capacity.

https://github.com/container-storage-interface/spec/blob/master/spec.md#controller-service-rpc

@jichenjc
Contributor

@alibo
what exactly did you do to replace the old PVC with the cloned one?

I tried to replace the pod directly, but got an error:

                                ... // 7 identical fields
                                ISCSI:     nil,
                                Glusterfs: nil,
                                PersistentVolumeClaim: &core.PersistentVolumeClaimVolumeSource{
-                                       ClaimName: "cloned-pvc",
+                                       ClaimName: "csi-pvc-cinderplugin",
                                        ReadOnly:  false,


The Pod "nginx" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)

Then I deleted the pod and recreated a new one:

# kubectl apply -f t.yaml
persistentvolumeclaim/cloned-pvc unchanged
pod/nginx created

A weird thing happened: I got 2G on the first try.

root@nginx:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          79G   64G   13G  84% /
tmpfs            64M     0   64M   0% /dev
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda1        79G   64G   13G  84% /etc/hosts
shm              64M     0   64M   0% /dev/shm
/dev/vdh        2.0G  6.0M  1.9G   1% /var/lib/www/html

Then I kept getting 1G after the first try (deleting everything and redoing it):

root@nginx:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          79G   64G   13G  84% /
tmpfs            64M     0   64M   0% /dev
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda1        79G   64G   13G  84% /etc/hosts
shm              64M     0   64M   0% /dev/shm
/dev/vdh        976M  2.6M  958M   1% /var/lib/www/html

And from the CSI spec, it seems we need to fix this in our driver; I need to test more and will provide additional comments.

@alibo
Contributor Author

alibo commented May 28, 2021

I tried to replace the pod directly, but got an error

@jichenjc I've changed the pod's spec in the deployment.

A weird thing happened: I got 2G on the first try.

That happens if the filesystem of the source volume has not been created yet (which is usually the case when no pod has used it yet; see step 2 in my example scenario). When such a volume is cloned, the driver sees that the disk is not formatted and runs mkfs.ext4, so the filesystem gets the same size as the block volume.

@jichenjc
Contributor

OK, I can reproduce what you mentioned above by using a deployment (previously I was using a bare pod):

root@jjtest1:~/go/src/github.com/cloud-provider-openstack/examples/cinder-csi-plugin# sudo df -h | grep pvc-0aef6f91-0fab-4413-8654-437ad3a3b340
/dev/vdi            976M  2.6M  958M   1% /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-0aef6f91-0fab-4413-8654-437ad3a3b340/globalmount
root@jjtest1:~/go/src/github.com/cloud-provider-openstack/examples/cinder-csi-plugin# sudo lsblk | grep pvc-0aef6f91-0fab-4413-8654-437ad3a3b340
vdi    252:128  0   2G  0 disk /var/lib/kubelet/pods/fe355d4c-e8a9-4cdf-95f8-414a5fc6aae2/volumes/kubernetes.io~csi/pvc-0aef6f91-0fab-4413-8654-437ad3a3b340/mount

As you pointed out,
https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/volume/util/operationexecutor/operation_generator.go#L1740-L1743

checks pvc.Status.Capacity[v1.ResourceStorage] against pv.Spec.Capacity[v1.ResourceStorage] in order to decide whether to call expand. CPO CSI can decide whether to reject such a request or honor the change, but it can't control whether the expand action is called, as the PV size definitely comes from Cinder, and in our sample it's 2G.

Another way is to set the PVC status to 1G before it's expanded at the FS layer, but I'm not sure whether we can achieve that.

@alibo
Contributor Author

alibo commented May 31, 2021

Another way is to set the PVC status to 1G before it's expanded at the FS layer, but I'm not sure whether we can achieve that.

@jichenjc It seems the PV controller in Kubernetes overwrites it during binding:
kubernetes/kubernetes#94929 (comment)

In the above issue, he said they're planning to fix the issue in CSI drivers using the new mount-utils package (it's already imported in #1341 and #1440):
https://github.com/kubernetes/mount-utils/blob/master/resizefs_linux.go

the merged PR in aws-ebs-csi-driver which fixes this issue:
kubernetes-sigs/aws-ebs-csi-driver#753

It resizes the volume in NodeStageVolume, if needed, using code similar to what now exists in the mount-utils package. One action item from the plan in that issue:

  • File issues in all community CSI drivers that we care about & fix them.
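To make the mount-utils API concrete, a minimal standalone sketch (device and mount paths are placeholders):

package main

import (
	"fmt"

	mountutils "k8s.io/mount-utils"
	utilexec "k8s.io/utils/exec"
)

func main() {
	r := mountutils.NewResizeFs(utilexec.New())

	// Grow the filesystem to fill the block device if they disagree.
	needResize, err := r.NeedResize("/dev/vdf", "/mnt/staging")
	if err != nil {
		fmt.Println("NeedResize failed:", err)
		return
	}
	if needResize {
		if _, err := r.Resize("/dev/vdf", "/mnt/staging"); err != nil {
			fmt.Println("Resize failed:", err)
		}
	}
}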

@jichenjc
Contributor

jichenjc commented Jun 1, 2021

Thanks for the detailed info, this is really helpful ~ @alibo
I went through all the issues above, and it seems kubernetes-sigs/aws-ebs-csi-driver#753 is the right approach for us? That is, do the resize in the NodeStage phase, especially at this line:
https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/753/files#diff-4f4d9c2e0ec5c4d3b4ac2d79abd70ee21bacff65ba05c059396921996dbf6607R226

@alibo
Contributor Author

alibo commented Jun 1, 2021

@jichenjc We've added it to NodePublishVolume in our forked repository, but NodeStageVolume would be a better place, since it has access to a valid disk path and doesn't need to run findmnt:

https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/csi/cinder/nodeserver.go#L378-L382

One thing I suggest considering: unlike the aws-ebs-csi-driver, you may want to resize the filesystem only if the volume was created from another volume or a snapshot:

volume, err := ns.Cloud.GetVolume(volumeID)
if err != nil {
	return nil, status.Errorf(codes.Internal, "GetVolume %s failed: %v", volumeID, err)
}

// SourceVolID and SnapshotID are empty strings unless the volume was
// created from another volume or a snapshot.
if volume.SourceVolID != "" || volume.SnapshotID != "" {
	r := mountutil.NewResizeFs(ns.Mount.Mounter().Exec)

	needResize, err := r.NeedResize(devicePath, stagingTarget)
	if err != nil {
		return nil, status.Errorf(codes.Internal, "could not determine if volume %s needs resizing: %v", volumeID, err)
	}

	if needResize {
		if _, err := r.Resize(devicePath, stagingTarget); err != nil {
			return nil, status.Errorf(codes.Internal, "could not resize volume %s: %v", volumeID, err)
		}
	}
}

This way, the driver won't resize the filesystem ahead of the NodeExpandVolume call when the user has simply increased the requested storage in the PVC (normal volume expansion).

@jichenjc
Contributor

jichenjc commented Jun 4, 2021

Makes sense to me. Let's see whether anyone wants to submit a PR, then we can continue the discussion there; or I can submit one after my recent busy days.

@alibo
Contributor Author

alibo commented Jun 4, 2021

Makes sense to me. Let's see whether anyone wants to submit a PR, then we can continue the discussion there; or I can submit one after my recent busy days.

@jichenjc I've submitted a PR for it: #1563
