Bug: Trident stores incorrect volume count for backend #111

Closed

si-heger opened this issue Apr 13, 2018 · 1 comment

@si-heger
I have the following situation: a standard K8s cluster with Trident 18.01.00 (client and server).

I created one backend against ONTAP 9.3: svm1, which has access to three aggregates (ssd, hybrid, and hdd).

Here is the backend definition:
{ "version": 1, "storageDriverName": "ontap-nas", "managementLIF": "172.20.xx", "dataLIF": "172.20.xx", "svm": "svm1", "username": "vsadmin", "password": "xxx", "storagePrefix": "k8s", "defaults": { "snapshotPolicy": "k8s-snap-silver-policy" } }

Then I created two storage classes, silver and gold, for the hdd and ssd aggregates (media: "hdd" and media: "ssd", respectively). When checking the details in Trident with tridentctl -n trident get backend -o json, I can see that the silver storage class is bound to the hdd aggregate and the gold storage class to the ssd aggregate.
Afterwards I create two PVCs in K8s, one of the silver class and one of the gold class, and they get created accordingly.
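For illustration, here is a minimal sketch of how the storage classes and one of the PVCs could look, reconstructed from the description above. These manifests are not part of the original report: the provisioner name assumes the pre-CSI Trident provisioner netapp.io/trident, and the PVC name is only inferred from the volume names in the output further down.

```sh
# Sketch only: manifests reconstructed from the description above, not taken from the report.
# Assumes the pre-CSI Trident provisioner name (netapp.io/trident); PVC name is inferred.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: silver
provisioner: netapp.io/trident
parameters:
  media: "hdd"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
provisioner: netapp.io/trident
parameters:
  media: "ssd"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-volume-gold   # inferred from the volume names in the output below
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gold
EOF
```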

So far so good. Now I create a second SVM, svm2, on the same ONTAP cluster that has access to the same three aggregates, and in Trident I create a second backend that points to this SVM. Here is the config:

{ "version": 1, "storageDriverName": "ontap-nas", "managementLIF": "172.20.zz", "dataLIF": "172.20.zz", "svm": "svm2", "username": "vsadmin", "password": "xx", "storagePrefix": "sec" }

Now I just look at the result:

```
[root@sheger-k8s-node4 setup]# tridentctl -n trident get volume
+------------------------------------+---------+---------------+----------+--------------------+----------------+
|                NAME                |  SIZE   | STORAGE CLASS | PROTOCOL |      BACKEND       |      POOL      |
+------------------------------------+---------+---------------+----------+--------------------+----------------+
| default-sample-volume-silver-e2199 | 1.0 GiB | silver        | file     | ontapnas_172.20.xx | node2_data_sas |
| default-sample-volume-gold-7ccc5   | 1.0 GiB | gold          | file     | ontapnas_172.20.xx | node1_data_ssd |
+------------------------------------+---------+---------------+----------+--------------------+----------------+
```

Still correct. But when I check the backends, they show 4 volumes in total:

```
+--------------------+----------------+--------+---------+
|        NAME        | STORAGE DRIVER | ONLINE | VOLUMES |
+--------------------+----------------+--------+---------+
| ontapnas_172.20.xx | ontap-nas      | true   |       2 |
| ontapnas_172.20.zz | ontap-nas      | true   |       2 |
+--------------------+----------------+--------+---------+
```

Why is the second backend showing 2 volumes? When checking its details, it lists the same volumes, which are on a different backend and a different SVM, and which have a different storage prefix.
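For reference, a sketch of how the per-backend details were checked, using the backend name from the output above; the volumes that actually live on svm1 (with the "k8s" prefix) show up in this backend's volume list:

```sh
# Show the detailed JSON for the second backend; the foreign volumes from
# svm1 appear in its volume list.
tridentctl -n trident get backend ontapnas_172.20.zz -o json
```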

When I delete one of the volumes, it gets worse. The volume is correctly deleted from the first backend (svm1), but it stays visible on the second backend. So it stays there forever, and it is not possible to clean it up.

```
[root@sheger-k8s-node4 setup]# tridentctl -n trident get backend
+--------------------+----------------+--------+---------+
|        NAME        | STORAGE DRIVER | ONLINE | VOLUMES |
+--------------------+----------------+--------+---------+
| ontapnas_172.20.zz | ontap-nas      | true   |       2 |
| ontapnas_172.20.xx | ontap-nas      | true   |       1 |
+--------------------+----------------+--------+---------+

[root@sheger-k8s-node4 setup]# tridentctl -n trident get volume
+------------------------------------+---------+---------------+----------+--------------------+----------------+
|                NAME                |  SIZE   | STORAGE CLASS | PROTOCOL |      BACKEND       |      POOL      |
+------------------------------------+---------+---------------+----------+--------------------+----------------+
| default-sample-volume-silver-e2199 | 1.0 GiB | silver        | file     | ontapnas_172.20.xx | node2_data_sas |
+------------------------------------+---------+---------------+----------+--------------------+----------------+
```

@kangarlou (Contributor)

@si-heger Thanks for creating this issue! We have already fixed this bug. You should see the correct behavior in the 18.04 release.
