export-policy rule index 1 is not updated #391

Closed
ysakashita opened this issue May 1, 2020 · 4 comments

Comments

ysakashita commented May 1, 2020

Describe the bug

With an ontap-nas backend and dynamic export policies enabled, all Kubernetes nodes were recreated by a rolling update.
As a result, the old node's IP remains at export-policy rule index 1 in ONTAP.

Environment

  • Trident version: 20.04.0
  • Trident installation flags used: -n trident
  • Container runtime: Docker 19.03.8
  • Kubernetes version: 1.17.5
  • Kubernetes orchestrator: none
  • Kubernetes enabled feature gates: none
  • OS: Ubuntu 18.04
  • NetApp backend types: ONTAP AFF 9.1P14
  • Other: none

To Reproduce

You should be able to reproduce the bug with the test steps below.

  1. Set up NetApp/Trident v20.04.0 with dynamic export policies (a sample backend config is sketched after these steps)
  2. Check the Kubernetes nodes' IPs

(e.g.) 5 nodes in Kubernetes

$ kubectl get nodes  -o custom-columns='NAME:.metadata.name,IP:.status.addresses'
NAME                                            IP
demo-ysaka-2004-ingress-default-87f591-lnnm2    [map[address:10.30.64.122 type:InternalIP] map[address:demo-ysaka-2004-ingress-default-87f591-lnnm2 type:Hostname]]
demo-ysaka-2004-master-default-513dd026-r4dst   [map[address:10.30.64.74 type:InternalIP] map[address:demo-ysaka-2004-master-default-513dd026-r4dst type:Hostname]]
demo-ysaka-2004-worker-default-a6179107-28kkg   [map[address:10.30.64.234 type:InternalIP] map[address:demo-ysaka-2004-worker-default-a6179107-28kkg type:Hostname]]
demo-ysaka-2004-worker-default-a6179107-7jmn7   [map[address:10.30.64.148 type:InternalIP] map[address:demo-ysaka-2004-worker-default-a6179107-7jmn7 type:Hostname]]
demo-ysaka-2004-worker-default-a6179107-mzwl2   [map[address:10.30.64.89 type:InternalIP] map[address:demo-ysaka-2004-worker-default-a6179107-mzwl2 type:Hostname]]
  3. Check export-policy rule in ONTAP

(e.g.) 5 entries in the export-policy rule list

vs1::vserver export-policy rule> show trident-b25eca67-aac2-4089-af00-4df891798946
             Policy          Rule    Access   Client                RO
Vserver      Name            Index   Protocol Match                 Rule
------------ --------------- ------  -------- --------------------- ---------
vs1          trident-b25eca67-aac2-4089-af00-4df891798946 
                             1       nfs      10.30.64.89           any
vs1          trident-b25eca67-aac2-4089-af00-4df891798946 
                             2       nfs      10.30.64.74           any
vs1          trident-b25eca67-aac2-4089-af00-4df891798946 
                             3       nfs      10.30.64.148          any
vs1          trident-b25eca67-aac2-4089-af00-4df891798946 
                             4       nfs      10.30.64.122          any
vs1          trident-b25eca67-aac2-4089-af00-4df891798946 
                             5       nfs      10.30.64.234          any
5 entries were displayed.
  4. Recreate all nodes one by one (like a rolling update)
  5. Check the Kubernetes nodes' IPs

(e.g.) 5 new nodes in Kubernetes

$ kubectl get nodes  -o custom-columns='NAME:.metadata.name,IP:.status.addresses'
NAME                                             IP
demo-ysaka-2004-ingress-default-e61b403f-kswxr   [map[address:10.30.65.1 type:InternalIP] map[address:demo-ysaka-2004-ingress-default-e61b403f-kswxr type:Hostname]]
demo-ysaka-2004-master-default-5a3104d8-tjfgc    [map[address:10.30.64.21 type:InternalIP] map[address:demo-ysaka-2004-master-default-5a3104d8-tjfgc type:Hostname]]
demo-ysaka-2004-worker-default-dd1bd1a7-lrw8d    [map[address:10.30.65.11 type:InternalIP] map[address:demo-ysaka-2004-worker-default-dd1bd1a7-lrw8d type:Hostname]]
demo-ysaka-2004-worker-default-dd1bd1a7-mjqf9    [map[address:10.30.65.18 type:InternalIP] map[address:demo-ysaka-2004-worker-default-dd1bd1a7-mjqf9 type:Hostname]]
demo-ysaka-2004-worker-default-dd1bd1a7-xglmr    [map[address:10.30.64.74 type:InternalIP] map[address:demo-ysaka-2004-worker-default-dd1bd1a7-xglmr type:Hostname]]
  6. Check export-policy rule in ONTAP

(e.g.) 6 entries, but the number of nodes is 5 (rule index 1 remains)

vs1::vserver export-policy rule> show trident-b25eca67-aac2-4089-af00-4df891798946
             Policy          Rule    Access   Client                RO
Vserver      Name            Index   Protocol Match                 Rule
------------ --------------- ------  -------- --------------------- ---------
vs1          trident-b25eca67-aac2-4089-af00-4df891798946
                             1       nfs      10.30.64.89           any
vs1          trident-b25eca67-aac2-4089-af00-4df891798946
                             6       nfs      10.30.65.18           any
vs1          trident-b25eca67-aac2-4089-af00-4df891798946
                             7       nfs      10.30.65.1            any
vs1          trident-b25eca67-aac2-4089-af00-4df891798946
                             8       nfs      10.30.64.21           any
vs1          trident-b25eca67-aac2-4089-af00-4df891798946
                             9       nfs      10.30.65.11           any
vs1          trident-b25eca67-aac2-4089-af00-4df891798946
                             10      nfs      10.30.64.74           any
6 entries were displayed.
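
For reference, dynamic export policies are enabled per backend in Trident 20.04. A minimal ontap-nas backend config might look like the sketch below; the LIFs, SVM name, credentials, and CIDR are placeholder assumptions (the CIDR merely mirrors the node subnets shown above), not values taken from this environment:

{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "managementLIF": "10.0.0.1",
    "dataLIF": "10.0.0.2",
    "svm": "svm_nfs",
    "username": "admin",
    "password": "password",
    "autoExportPolicy": true,
    "autoExportCIDRs": ["10.30.64.0/23"]
}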

Expected behavior

Only the current node IPs should be registered in the export-policy rule in ONTAP.

Additional context

None

@ysakashita ysakashita added the bug label May 1, 2020
@gnarl gnarl added the tracked label May 11, 2020
gnarl (Contributor) commented May 13, 2020

Hi @ysakashita,

The Trident pod running on each node needs to be restarted. Trident only scans the available IPs when the pod starts, so if you restart the pod the existing rules should be updated.
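
For example, assuming the default trident namespace and the trident-csi resource names created by the 20.04 installer, the pods can be restarted with:

$ kubectl -n trident rollout restart deployment/trident-csi   # controller pod
$ kubectl -n trident rollout restart daemonset/trident-csi    # per-node pods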

ysakashita (Author) commented May 13, 2020

@gnarl Is the Trident pod created from the DaemonSet? If so, it is recreated on each new node. If you mean the Trident pod created from the Deployment, I believe Trident should watch the node status in a reconcile loop.
I think this bug is caused by a leftover old CR of tridentnodes.trident.netapp.io, because when I restarted the Trident pod (from the Deployment), the old IPs still remained in the export-policy rule.
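
One way to check for such a leftover CR (assuming the default trident namespace) is to compare the tridentnodes CRs against the live nodes; any tridentnode without a matching Kubernetes node would be the stale record holding the old IP:

$ kubectl -n trident get tridentnodes.trident.netapp.io
$ kubectl get nodes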

adkerr (Contributor) commented May 14, 2020

If a CR was left behind (due to a missed node deletion event), you can manually remove it from Trident's database via tridentctl node delete <node name>, which should initiate a reconcile loop for the export policies.
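
In the reproduction above, rule index 1 (10.30.64.89) belonged to the old node demo-ysaka-2004-worker-default-a6179107-mzwl2, so the stale record would be removed with:

$ tridentctl -n trident node delete demo-ysaka-2004-worker-default-a6179107-mzwl2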

ysakashita (Author) commented
@adkerr Thanks, I understand that solution. However, I don't want to use that approach manually: our company has over 500 Kubernetes clusters, so we want the old CRs to be deleted automatically, in a declarative way.
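
Until Trident reconciles this on its own, a minimal cleanup sketch (which could run from a CronJob) might delete every tridentnode whose Kubernetes node is gone. The trident namespace, and the assumption that each tridentnodes CR is named after its Kubernetes node, are assumptions here:

#!/bin/sh
# Sketch only: remove tridentnodes CRs whose Kubernetes node no longer exists.
# Assumes the default "trident" namespace and that tridentnodes CRs are
# named after their Kubernetes nodes.
set -eu

# Current Kubernetes node names, one per line.
live_nodes=$(kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}')

# Every node Trident still tracks.
for tnode in $(kubectl -n trident get tridentnodes -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'); do
    if ! printf '%s\n' "$live_nodes" | grep -qx "$tnode"; then
        echo "removing stale Trident node record: $tnode"
        # Per the comment above, deleting via tridentctl should also trigger
        # the export-policy reconcile.
        tridentctl -n trident node delete "$tnode"
    fi
done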
