
'timeout expired waiting for volumes to attach/mount for pod' when cluster node-vm-size is Standard_B1s #166

Closed
adam7 opened this issue Feb 6, 2018 · 24 comments

@adam7

adam7 commented Feb 6, 2018

Issue
I'm trying to deploy MariaDB to AKS using Helm, but I consistently hit an 'Unable to mount volumes for pod ... timeout expired waiting for volumes to attach/mount for pod' error, which stops the deployment dead.

This works without issue if the node-vm-size is set as the default, Standard_D1_v2, when I create the cluster.

@andyzhangx
Contributor

@adam7 could you collect the kubelet logs on that agent VM? Follow the steps here:
https://github.com/andyzhangx/Demo/tree/master/debug#q-how-to-get-k8s-kubelet-logs-on-linux-agent

@andyzhangx
Contributor

Also, please collect the pod & PVC info:

kubectl describe po POD-NAME
kubectl describe pvc PVC-NAME
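
If the exact pod and PVC names are not handy, here is a hedged sketch for listing them and pulling the related mount events (the default namespace is an assumption here):

# list pods and PVCs in the namespace
kubectl get po,pvc -n default

# recent events usually show the FailedMount / FailedAttachVolume reason directly
kubectl get events -n default --sort-by=.metadata.creationTimestamp | grep -i -E 'mount|attach'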

@adam7
Author

adam7 commented Feb 6, 2018

@andyzhangx I've just deleted the cluster so I can try with B1MS nodes. If I see the same behaviour I'll collect the logs; if not, I'll fire up another B1s cluster later and grab the logs from that.

@adam7
Author

adam7 commented Feb 6, 2018

@andyzhangx Trying to get the kubelet logs using this command

docker ps -a | grep "hyperkube kubelet" | awk -F ' ' '{print $1}'

fails with

Cannot connect to the Docker daemon. Is the docker daemon running on this host?
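
A hedged aside: the original command may simply need root to reach the Docker socket, and on node images where kubelet runs as a systemd service rather than in a container, journalctl holds the kubelet logs instead. A minimal sketch:

# retry the Docker-based lookup with root
sudo docker ps -a | grep "hyperkube kubelet" | awk -F ' ' '{print $1}'

# if kubelet is a systemd service on this node, read its logs via journalctl
sudo journalctl -u kubelet --no-pager | tail -n 200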

@adam7
Author

adam7 commented Feb 6, 2018

kubectl describe po

Gives me:

Name: kasper-the-friendly-mariadb-7958b49759-dfpdn
Namespace: default
Node: aks-nodepool1-22287053-0/10.240.0.4
Start Time: Tue, 06 Feb 2018 08:38:55 +0100
Labels: app=kasper-the-friendly-mariadb
pod-template-hash=3514605315
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"kasper-the-friendly-mariadb-7958b49759","uid":"56b7351d-0a90-11e...
Status: Running
IP: 10.244.2.4
Controlled By: ReplicaSet/kasper-the-friendly-mariadb-7958b49759
Containers:
kasper-the-friendly-mariadb:
Container ID: docker://967aa99f12f799720a81f49ca1947976067a98cda03189204dc4aa10148d5423
Image: bitnami/mariadb:10.1.23-r2
Image ID: docker-pullable://bitnami/mariadb@sha256:e71475dc460dcc07a377633b677ceb95f3cb45e38c5f1d9ebadcf6132979ed4d
Port: 3306/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Tue, 06 Feb 2018 16:07:12 +0100
Finished: Tue, 06 Feb 2018 16:08:35 +0100
Ready: False
Restart Count: 115
Requests:
cpu: 250m
memory: 256Mi
Liveness: exec [mysqladmin ping] delay=30s timeout=5s period=10s #success=1 #failure=3
Readiness: exec [mysqladmin ping] delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
MARIADB_ROOT_PASSWORD: <set to the key 'mariadb-root-password' in secret 'kasper-the-friendly-mariadb'> Optional: false
MARIADB_USER: bn_ghost
MARIADB_PASSWORD: <set to the key 'mariadb-password' in secret 'kasper-the-friendly-mariadb'> Optional: false
MARIADB_DATABASE: bitnami_ghost
ALLOW_EMPTY_PASSWORD: yes
Mounts:
/bitnami/mariadb from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-fzvf2 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kasper-the-friendly-mariadb
Optional: false
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: kasper-the-friendly-mariadb
ReadOnly: false
default-token-fzvf2:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-fzvf2
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Warning Unhealthy 10m (x844 over 7h) kubelet, aks-nodepool1-22287053-0 Readiness probe failed: mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2 "No such file or directory")'
Check that mysqld is running and that the socket: '/opt/bitnami/mariadb/tmp/mysql.sock' exists!
Warning FailedSync 5m (x1353 over 7h) kubelet, aks-nodepool1-22287053-0 Error syncing pod

@andyzhangx
Contributor

Hi @adam7, I have tried with a Standard_B1s agent VM. The disk mount timeout happens because Standard_B1s is too slow: formatting the disk takes a lot more time than expected. Standard_B1s has only 1 core and 1 GB of RAM; if you log on to it, you will find it is very slow. So I would recommend not using such a low-spec VM.

azureuser@aks-nodepool1-15012548-0:~$ top
top - 03:26:05 up 29 min,  1 user,  load average: 5.35, 7.11, 5.80
Tasks: 183 total,   1 running, 181 sleeping,   0 stopped,   1 zombie
%Cpu(s):  2.3 us,  1.0 sy,  0.3 ni,  0.0 id, 96.0 wa,  0.0 hi,  0.3 si,  0.0 st
KiB Mem :   929852 total,    66440 free,   483200 used,   380212 buff/cache
KiB Swap:        0 total,        0 free,        0 used.   190124 avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
  8869 root      20   0  734404  83732  24072 S  1.7  9.0   0:16.03 hyperkube
  7828 root      20   0 1055440  53612   5172 S  0.7  5.8   0:39.23 dockerd
     1 root      20   0  185416   5256   3256 S  0.3  0.6   0:11.04 systemd
  2015 root      20   0  219456  18756   4592 D  0.3  2.0   0:08.08 python3
 10022 root      20   0       0      0      0 S  0.3  0.0   0:00.24 kworker/0:1

And in my environment, the disk mount finally succeeded after the timeout:

Events:
  FirstSeen     LastSeen        Count   From                                    SubObjectPath                           Type            Reason                  Message
  ---------     --------        -----   ----                                    -------------                           --------        ------                  -------
  8m            8m              1       default-scheduler                                                               Normal          Scheduled               Successfully assigned nginx-azuredisk to aks-nodepool1-15012548-0
  7m            7m              1       kubelet, aks-nodepool1-15012548-0                                               Normal          SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-mrw8h"
  5m            5m              1       kubelet, aks-nodepool1-15012548-0                                               Warning         FailedMount             Unable to mount volumes for pod "nginx-azuredisk_default(4eb22bb2-0bb5-11e8-8
d9e-0a58ac1f0a2e)": timeout expired waiting for volumes to attach/mount for pod "default"/"nginx-azuredisk". list of unattached/unmounted volumes=[disk01]
  5m            5m              1       kubelet, aks-nodepool1-15012548-0                                               Warning         FailedSync              Error syncing pod
  4m            4m              1       kubelet, aks-nodepool1-15012548-0                                               Normal          SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "pvc-20240841-0bb5-11e8-8d9e-0a58ac1f0
a2e"
  4m            4m              1       kubelet, aks-nodepool1-15012548-0       spec.containers{nginx-azuredisk}        Normal          Pulling                 pulling image "nginx"
  3m            3m              1       kubelet, aks-nodepool1-15012548-0       spec.containers{nginx-azuredisk}        Normal          Pulled                  Successfully pulled image "nginx"
  3m            3m              1       kubelet, aks-nodepool1-15012548-0       spec.containers{nginx-azuredisk}        Normal          Created                 Created container
  2m            2m              1       kubelet, aks-nodepool1-15012548-0       spec.containers{nginx-azuredisk}        Normal          Started                 Started container

@andyzhangx
Contributor

It takes 78s to format a disk in my Standard_B1s environment, while on a Standard_D1_v2 it takes only 9s; that's the reason why it fails on Standard_B1s:

I0207 03:18:49.626269    8869 azure_common_linux.go:148] azureDisk - Disk "/dev/sdc" appears to be unformatted, attempting to format as type: "ext4" with options: [-E lazy_itable_init=0,lazy_journal_init=0 -F /dev/sdc]

I0207 03:20:07.531020    8869 azure_common_linux.go:153] azureDisk - Disk successfully formatted (mkfs): ext4 - /dev/sdc tt
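
A hedged sketch of how that duration can be read straight out of the kubelet log, assuming the azureDisk log lines shown above (journalctl as the log source is an assumption):

# the timestamp gap between the "unformatted" and "successfully formatted" lines
# is the format time (03:18:49 -> 03:20:07, i.e. the ~78s above)
sudo journalctl -u kubelet --no-pager | grep -E 'azureDisk - Disk.*(unformatted|successfully formatted)'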

@adam7
Author

adam7 commented Feb 7, 2018

Thanks for checking @andyzhangx, that confirms what I suspected. What is the minimum VM size to use if I want to use a persistent volume with my cluster?

@andyzhangx
Contributor

I don't know exactly; I usually use Standard_DS2_v2. 2 CPU cores is the minimum I would go with.

@adam7
Author

adam7 commented Feb 7, 2018

@andyzhangx any suggestion for how I might find out, other than testing every VM size?

Thanks

@andyzhangx
Contributor

I think 2 CPU cores and 6GB RAM should be the minimum for an agent node; I usually use Standard_DS2_v2. Just paste your findings here if you have any other practice to share.
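
For reference, the node VM size is chosen at cluster creation time; a minimal sketch with the Azure CLI (resource group and cluster names are placeholders, and flags may vary across CLI versions):

# create an AKS cluster with 2-core nodes instead of a 1-core size
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 1 \
  --node-vm-size Standard_DS2_v2 \
  --generate-ssh-keys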

@slack added the question label Feb 14, 2018
@slack closed this as completed Feb 14, 2018
@adam7
Author

adam7 commented Feb 14, 2018

@slack I don't think this should be closed unless the minimum requirement for an AKS node really is Standard_DS2_v2.
I can run a cluster of Standard_B1s nodes with a PVC perfectly happily once everything is deployed, but I have to scale the VMs up and then back down again for the install. They're obviously no use for heavy commercial workloads, but they're a great gateway to using K8s on Azure. Considering a Standard_DS2_v2 is 10x the price of a B1s, that gateway is way too expensive for a huge number of users.

@ghost

ghost commented Apr 15, 2018

I'm running into this same issue in Canada Central. It doesn't matter what the VM size is, I always get a mount timeout. Any recommendations?

@andyzhangx
Contributor

@devkws please provide the following info:

For the small VM size issue, please run top in the VM; if the load average is larger than 2, the system is running very slowly.
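
A hedged sketch for checking the load average non-interactively on the node:

# 1-, 5- and 15-minute load averages; sustained values above ~2 on a 1-core VM
# indicate the node is heavily overloaded
uptime
cat /proc/loadavg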

@mrdfuse

mrdfuse commented May 16, 2018

Edit: I completely missed that this is the AKS repo and not the ACS one... please feel free to disregard this message.

We are also constantly running into this; sometimes we need to redeploy the same pod multiple times hoping it will work. Sometimes it works after 15 minutes, sometimes never.

beta.kubernetes.io/instance-type: Standard_D2_v2

k8s version:

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5", GitCommit:"f01a2bf98249a4db383560443a59bed0c13575df", GitTreeState:"clean", BuildDate:"2018-03-19T15:59:24Z", GoVersion:"go1.9.3", Compil
er:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.7", GitCommit:"b30876a5539f09684ff9fde266fda10b37738c9c", GitTreeState:"clean", BuildDate:"2018-01-16T21:52:38Z", GoVersion:"go1.8.3", Compil
er:"gc", Platform:"linux/amd64"}
kubectl describe po opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz -n ci
Name:           opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz
Namespace:      ci
Node:           k8s-agentpool1-31958223-2/10.240.0.4
Start Time:     Wed, 16 May 2018 16:18:59 +0200
Labels:         app=jenkins-static-slave
                pod-template-hash=1845773401
                release=opinionated-cricket
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"ci","name":"opinionated-cricket-jenkins-static-slave-5d89cc7845","uid":"13
557959-...
Status:         Running
IP:             10.240.0.26
Controlled By:  ReplicaSet/opinionated-cricket-jenkins-static-slave-5d89cc7845
Containers:
  jenkins-static-slave:
    Container ID:  docker://e02b06432eafeb2867f902e94a4a5f7b96b7a8189a77f37de296482d344bf10f
    Image:         dela-docker.jfrog.io/dela-jenkins-static-slave:1.0.6
    Image ID:      docker-pullable://dela-docker.jfrog.io/dela-jenkins-static-slave@sha256:8fd435f6833ec7cac93b0feaaf47e7e1e5c6509c905c1d68b0b0a76dab35f00b
    Port:          <none>
    Args:
      c25f1aa880602a2caa20e601053db59891d5d9b0cfa40ad23291deae2514b430
      jenkins-slave-2
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Wed, 16 May 2018 16:32:21 +0200
      Finished:     Wed, 16 May 2018 16:32:22 +0200
    Ready:          False
    Restart Count:  7
    Environment:
      JENKINS_URL:        http://pilfering-wildebeest-jenkins-svc.ci
    Mounts:
      /home/jenkins/.gradle from jenkins-slave-gradle-pv (rw)
      /home/jenkins/.npm from jenkins-slave-npm-pv (rw)
      /var/run/docker.sock from docker-socket (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-p9vc8 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  docker-socket:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/docker.sock
    HostPathType:
  jenkins-slave-gradle-pv:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  jenkins-slave-gradle-pvc-2
    ReadOnly:   false
  jenkins-slave-npm-pv:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  jenkins-slave-npm-pvc-2
    ReadOnly:   false
  default-token-p9vc8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-p9vc8
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kind=slave
Tolerations:     <none>
Events:
  Type     Reason                 Age                From                                Message
  ----     ------                 ----               ----                                -------
  Normal   Scheduled              13m                default-scheduler                   Successfully assigned opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz to k8s-agentpool1-31958223-2
  Normal   SuccessfulMountVolume  13m                kubelet, k8s-agentpool1-31958223-2  MountVolume.SetUp succeeded for volume "docker-socket"
  Normal   SuccessfulMountVolume  13m                kubelet, k8s-agentpool1-31958223-2  MountVolume.SetUp succeeded for volume "default-token-p9vc8"
  Normal   SuccessfulMountVolume  12m                kubelet, k8s-agentpool1-31958223-2  MountVolume.SetUp succeeded for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1"
  Warning  FailedMount            11m                kubelet, k8s-agentpool1-31958223-2  Unable to mount volumes for pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz_ci(135aaf8b-5914-11e8-ad8e-00
0d3a3880c1)": timeout expired waiting for volumes to attach/mount for pod "ci"/"opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz". list of unattached/unmounted volumes=[jenkins-slave-npm-pv]
  Normal   SuccessfulMountVolume  11m                kubelet, k8s-agentpool1-31958223-2  MountVolume.SetUp succeeded for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1"
  Normal   Pulling                11m (x3 over 11m)  kubelet, k8s-agentpool1-31958223-2  pulling image "dela-docker.jfrog.io/dela-jenkins-static-slave:1.0.6"
  Normal   Pulled                 11m (x3 over 11m)  kubelet, k8s-agentpool1-31958223-2  Successfully pulled image "dela-docker.jfrog.io/dela-jenkins-static-slave:1.0.6"
  Normal   Created                11m (x3 over 11m)  kubelet, k8s-agentpool1-31958223-2  Created container
  Normal   Started                11m (x3 over 11m)  kubelet, k8s-agentpool1-31958223-2  Started container
  Warning  FailedSync             11m (x4 over 11m)  kubelet, k8s-agentpool1-31958223-2  Error syncing pod
  Warning  BackOff                3m (x35 over 11m)  kubelet, k8s-agentpool1-31958223-2  Back-off restarting failed container
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-storage-class
  namespace: ci
  labels:
    kubernetes.io/cluster-service: "true"
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Shared
  location: westeurope
  cachingmode: None  
  fsType: ext4
reclaimPolicy: Retain

kubelet log

I0516 14:18:59.411834    2317 kubelet.go:1837] SyncLoop (ADD, "api"): "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz_ci(135aaf8b-5914-11e8-ad8e-000d3a3880c1)"
I0516 14:18:59.527794    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:18:59.528079    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "docker-socket" (UniqueName: "kubernetes.io/host-path/135aaf8b-5914-11e8-ad8e-000d3a3880c1-docker-socket") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:18:59.528170    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-p9vc8" (UniqueName: "kubernetes.io/secret/135aaf8b-5914-11e8-ad8e-000d3a3880c1-default-token-p9vc8") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:18:59.528265    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:18:59.528390    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:19:00.028346928 +0000 UTC (durationBeforeRetry 500ms). Error: Volume has not been added to the list of VolumesInUse in the node's volume status for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (Unique
Name: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:18:59.530483    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:19:00.030450635 +0000 UTC (durationBeforeRetry 500ms). Error: Volume has not been added to the list of VolumesInUse in the node's volume status for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (Unique
Name: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:18:59.628625    2317 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-p9vc8" (UniqueName: "kubernetes.io/secret/135aaf8b-5914-11e8-ad8e-000d3a3880c1-default-token-p9vc8") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:18:59.628722    2317 reconciler.go:257] operationExecutor.MountVolume started for volume "docker-socket" (UniqueName: "kubernetes.io/host-path/135aaf8b-5914-11e8-ad8e-000d3a3880c1-docker-socket") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:18:59.628788    2317 operation_generator.go:484] MountVolume.SetUp succeeded for volume "docker-socket" (UniqueName: "kubernetes.io/host-path/135aaf8b-5914-11e8-ad8e-000d3a3880c1-docker-socket") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:18:59.640336    2317 operation_generator.go:484] MountVolume.SetUp succeeded for volume "default-token-p9vc8" (UniqueName: "kubernetes.io/secret/135aaf8b-5914-11e8-ad8e-000d3a3880c1-default-token-p9vc8") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:18:59.784028    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:19:00.029745    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:19:00.033826    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:19:01.033804393 +0000 UTC (durationBeforeRetry 1s). Error: Volume not attached according to node status for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://
pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:19:00.130470    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:19:00.134210    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:19:01.134184764 +0000 UTC (durationBeforeRetry 1s). Error: Volume not attached according to node status for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://
pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:19:01.133794    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:19:01.134672    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:19:01.138843    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:19:03.138821772 +0000 UTC (durationBeforeRetry 2s). Error: Volume not attached according to node status for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://
pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:19:01.139568    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:19:03.139543575 +0000 UTC (durationBeforeRetry 2s). Error: Volume not attached according to node status for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://
pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:19:03.142183    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:19:03.142343    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:19:03.147411    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:19:07.147380986 +0000 UTC (durationBeforeRetry 4s). Error: Volume not attached according to node status for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://
pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:19:03.149687    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:19:07.149669195 +0000 UTC (durationBeforeRetry 4s). Error: Volume not attached according to node status for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://
pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:19:05.192594    2317 server.go:779] GET /stats/summary/: (183.909179ms) 200 [[Go-http-client/1.1] 10.240.0.76:48542]
I0516 14:19:07.161392    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:19:07.161476    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:19:07.166421    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:19:15.166395321 +0000 UTC (durationBeforeRetry 8s). Error: Volume not attached according to node status for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://
pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:19:07.166528    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:19:15.166509221 +0000 UTC (durationBeforeRetry 8s). Error: Volume not attached according to node status for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://
pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:19:09.807498    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:19:15.194321    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:19:15.194390    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:19:15.198622    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:19:31.198532167 +0000 UTC (durationBeforeRetry 16s). Error: Volume not attached according to node status for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https:/
/pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:19:15.199660    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:19:31.199642571 +0000 UTC (durationBeforeRetry 16s). Error: Volume not attached according to node status for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https:/
/pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:19:19.842443    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:19:29.858621    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:19:31.251152    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:19:31.251313    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:19:31.255058    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:20:03.255017125 +0000 UTC (durationBeforeRetry 32s). Error: Volume not attached according to node status for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https:/
/pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:19:31.255525    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:20:03.255507627 +0000 UTC (durationBeforeRetry 32s). Error: Volume not attached according to node status for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https:/
/pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:19:39.880715    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:19:49.901755    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:19:59.941446    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:20:03.324514    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:20:03.324594    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
E0516 14:20:03.328377    2317 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd\"" failed. No retries permitted until 2018-05-16 14:21:07.328211776 +0000 UTC (durationBeforeRetry 1m4s). Error: Volume not attached according to node status for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https:
//pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:20:03.329087    2317 operation_generator.go:684] Controller attach succeeded for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1") device path: "2"
I0516 14:20:03.424940    2317 reconciler.go:257] operationExecutor.MountVolume started for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:20:03.425007    2317 operation_generator.go:416] MountVolume.WaitForAttach entering for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1") DevicePath "2"
I0516 14:20:05.315878    2317 server.go:779] GET /stats/summary/: (284.369149ms) 200 [[Go-http-client/1.1] 10.240.0.76:48542]
I0516 14:20:06.363237    2317 azure_common_linux.go:194] azureDisk - Disk "/dev/disk/azure/scsi1/lun2" appears to be unformatted, attempting to format as type: "ext4" with options: [-E lazy_itable_init=0,lazy_journal_init=0 -F /dev/disk/azure/scsi1/lun2]
I0516 14:20:07.375716    2317 azure_common_linux.go:199] azureDisk - Disk successfully formatted (mkfs): ext4 - /dev/disk/azure/scsi1/lun2 tt
W0516 14:20:07.375746    2317 azure_common_linux.go:201] azureDisk - format of disk "/dev/disk/azure/scsi1/lun2" failed: type:("ext4") target:("tt") options:("o")error:(<nil>)
I0516 14:20:07.375782    2317 operation_generator.go:425] MountVolume.WaitForAttach succeeded for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:20:07.596862    2317 operation_generator.go:446] MountVolume.MountDevice succeeded for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1") device mount path "/var/lib/kubelet/plugins/kubernetes.io/azure-
disk/mounts/b3835638752"
I0516 14:20:07.615058    2317 azure_mounter.go:162] azureDisk - successfully mounted disk dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1 on /var/lib/kubelet/pods/135aaf8b-5914-11e8-ad8e-000d3a3880c1/volumes/kubernetes.io~azure-disk/pvc-1342af71-5914-11e8-ad8e-000d3a3880c1
I0516 14:20:07.615841    2317 operation_generator.go:484] MountVolume.SetUp succeeded for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:20:09.957677    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:20:19.978595    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:20:29.995711    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:20:39.141668    2317 reconciler.go:407] Reconciler sync states: could not find pod information in desired or actual states or pending operation, update it in both states: &{volumeName:kubernetes.io/azure-disk/b3835638752 podName:135aaf8b-5914-11e8-ad8e-000d3a3880c1 volumeSpec:0xc420f3c140 outerVolumeSpecName:pvc-1342af71-5914-11e8-ad8e-000d3a3880c1 pod:0xc42275b180 pluginIsAttachable:true volumeGidValue: devicePath: reportedInUse:false mounter:0xc4234bd000}
I0516 14:20:39.148329    2317 reconciler.go:547] Volume: kubernetes.io/azure-disk/b3835638752 is mounted
I0516 14:20:40.012031    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:20:50.029116    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:21:00.057488    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
E0516 14:21:02.413357    2317 kubelet.go:1628] Unable to mount volumes for pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz_ci(135aaf8b-5914-11e8-ad8e-000d3a3880c1)": timeout expired waiting for volumes to attach/mount for pod "ci"/"opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz". list of unattached/unmounted volumes=[jenkins-slave-npm-pv]; skipping pod
E0516 14:21:02.413387    2317 pod_workers.go:182] Error syncing pod 135aaf8b-5914-11e8-ad8e-000d3a3880c1 ("opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz_ci(135aaf8b-5914-11e8-ad8e-000d3a3880c1)"), skipping: timeout expired waiting for volumes to attach/mount for pod "ci"/"opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz". list of unattached/unmounted volumes=[jenkins-slave-npm-pv]
I0516 14:21:05.182104    2317 server.go:779] GET /stats/summary/: (161.103794ms) 200 [[Go-http-client/1.1] 10.240.0.76:48542]
I0516 14:21:07.339977    2317 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:21:07.343924    2317 operation_generator.go:684] Controller attach succeeded for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1") device path: "3"
I0516 14:21:07.440489    2317 reconciler.go:257] operationExecutor.MountVolume started for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:21:07.440578    2317 operation_generator.go:416] MountVolume.WaitForAttach entering for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1") DevicePath "3"
I0516 14:21:10.075000    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:21:10.352227    2317 azure_common_linux.go:194] azureDisk - Disk "/dev/disk/azure/scsi1/lun3" appears to be unformatted, attempting to format as type: "ext4" with options: [-E lazy_itable_init=0,lazy_journal_init=0 -F /dev/disk/azure/scsi1/lun3]
I0516 14:21:11.515228    2317 azure_common_linux.go:199] azureDisk - Disk successfully formatted (mkfs): ext4 - /dev/disk/azure/scsi1/lun3 tt
W0516 14:21:11.515300    2317 azure_common_linux.go:201] azureDisk - format of disk "/dev/disk/azure/scsi1/lun3" failed: type:("ext4") target:("tt") options:("o")error:(<nil>)
I0516 14:21:11.516461    2317 operation_generator.go:425] MountVolume.WaitForAttach succeeded for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:21:11.982201    2317 operation_generator.go:446] MountVolume.MountDevice succeeded for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1") device mount path "/var/lib/kubelet/plugins/kubernetes.io/azure-
disk/mounts/b879764857"
I0516 14:21:12.030003    2317 azure_mounter.go:162] azureDisk - successfully mounted disk dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1 on /var/lib/kubelet/pods/135aaf8b-5914-11e8-ad8e-000d3a3880c1/volumes/kubernetes.io~azure-disk/pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1
I0516 14:21:12.030347    2317 operation_generator.go:484] MountVolume.SetUp succeeded for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1")
I0516 14:21:17.704172    2317 kuberuntime_manager.go:374] No sandbox for pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz_ci(135aaf8b-5914-11e8-ad8e-000d3a3880c1)" can be found. Need to start a new one
I0516 14:21:18.563633    2317 provider.go:119] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
I0516 14:21:18.564441    2317 provider.go:119] Refreshing cache for provider: *azure.acrProvider
I0516 14:21:18.863637    2317 kubelet.go:1871] SyncLoop (PLEG): "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz_ci(135aaf8b-5914-11e8-ad8e-000d3a3880c1)", event: &pleg.PodLifecycleEvent{ID:"135aaf8b-5914-11e8-ad8e-000d3a3880c1", Type:"ContainerStarted", Data:"ae76ffbe106ea65edcd1a16f9bc9ea1994f5211b06748d40996429b714bc1943"}
I0516 14:21:19.834711    2317 kube_docker_client.go:333] Stop pulling image "dela-docker.jfrog.io/dela-jenkins-static-slave:1.0.6": "Status: Image is up to date for dela-docker.jfrog.io/dela-jenkins-static-slave:1.0.6"
I0516 14:21:20.100834    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:21:20.888560    2317 kubelet.go:1871] SyncLoop (PLEG): "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz_ci(135aaf8b-5914-11e8-ad8e-000d3a3880c1)", event: &pleg.PodLifecycleEvent{ID:"135aaf8b-5914-11e8-ad8e-000d3a3880c1", Type:"ContainerStarted", Data:"03b1787f2f27d234186cad23dbd28c33746bb93ccc033f506f3d9db6e639eb0d"}
I0516 14:21:21.904037    2317 kubelet.go:1871] SyncLoop (PLEG): "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz_ci(135aaf8b-5914-11e8-ad8e-000d3a3880c1)", event: &pleg.PodLifecycleEvent{ID:"135aaf8b-5914-11e8-ad8e-000d3a3880c1", Type:"ContainerDied", Data:"03b1787f2f27d234186cad23dbd28c33746bb93ccc033f506f3d9db6e639eb0d"}
I0516 14:21:22.207604    2317 kuberuntime_manager.go:503] Container {Name:jenkins-static-slave Image:dela-docker.jfrog.io/dela-jenkins-static-slave:1.0.6 Command:[] Args:[c25f1aa880602a2caa20e601053db59891d5d9b0cfa40ad23291deae2514b430 jenkins-slave-2] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:JENKINS_URL Value:http://pilfering-wildebeest-jenkins-svc.ci ValueFrom:nil} {Name:SONARQUBE_LOGIN Value:xxx ValueFrom:nil} {Name:SONARQUBE_HOST Value:ht
tps://sonar.delanet.be ValueFrom:nil} {Name:dockerRegistry Value:dela-docker.jfrog.io ValueFrom:nil} {Name:dockerImagePrefix Value:dela- ValueFrom:nil}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:docker-socket ReadOnly:false MountPath:/var/run/docker.sock SubPath: MountPropagation:<nil>} {Name:jenkins-slave-gradle-pv ReadOnly:false MountPath:/home/jenkins/.gradle SubPath: MountPropagation:<nil>} {Name:jenkins-slave-npm-pv ReadOnly:false MountPath:/home/jenkins/.np
m SubPath: MountPropagation:<nil>} {Name:default-token-p9vc8 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
I0516 14:21:22.207725    2317 kuberuntime_manager.go:742] checking backoff for container "jenkins-static-slave" in pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz_ci(135aaf8b-5914-11e8-ad8e-000d3a3880c1)"
I0516 14:21:22.670854    2317 kube_docker_client.go:333] Stop pulling image "dela-docker.jfrog.io/dela-jenkins-static-slave:1.0.6": "Status: Image is up to date for dela-docker.jfrog.io/dela-jenkins-static-slave:1.0.6"
I0516 14:21:23.940660    2317 kubelet.go:1871] SyncLoop (PLEG): "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz_ci(135aaf8b-5914-11e8-ad8e-000d3a3880c1)", event: &pleg.PodLifecycleEvent{ID:"135aaf8b-5914-11e8-ad8e-000d3a3880c1", Type:"ContainerStarted", Data:"82102111df23da29c5090dee099b642e56befbfdc7c0f658885e76ff3f1ab31d"}
I0516 14:21:24.960235    2317 kubelet.go:1871] SyncLoop (PLEG): "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz_ci(135aaf8b-5914-11e8-ad8e-000d3a3880c1)", event: &pleg.PodLifecycleEvent{ID:"135aaf8b-5914-11e8-ad8e-000d3a3880c1", Type:"ContainerDied", Data:"82102111df23da29c5090dee099b642e56befbfdc7c0f658885e76ff3f1ab31d"}
I0516 14:21:25.264084    2317 kuberuntime_manager.go:503] Container {Name:jenkins-static-slave Image:dela-docker.jfrog.io/dela-jenkins-static-slave:1.0.6 Command:[] Args:[c25f1aa880602a2caa20e601053db59891d5d9b0cfa40ad23291deae2514b430 jenkins-slave-2] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:JENKINS_URL Value:http://pilfering-wildebeest-jenkins-svc.ci ValueFrom:nil} {Name:SONARQUBE_LOGIN Value:xxx ValueFrom:nil} {Name:SONARQUBE_HOST Value:ht
tps://sonar.delanet.be ValueFrom:nil} {Name:dockerRegistry Value:dela-docker.jfrog.io ValueFrom:nil} {Name:dockerImagePrefix Value:dela- ValueFrom:nil}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:docker-socket ReadOnly:false MountPath:/var/run/docker.sock SubPath: MountPropagation:<nil>} {Name:jenkins-slave-gradle-pv ReadOnly:false MountPath:/home/jenkins/.gradle SubPath: MountPropagation:<nil>} {Name:jenkins-slave-npm-pv ReadOnly:false MountPath:/home/jenkins/.np

Maybe these lines are relevant?

I0516 14:21:10.352227    2317 azure_common_linux.go:194] azureDisk - Disk "/dev/disk/azure/scsi1/lun3" appears to be unformatted, attempting to format as type: "ext4" with options: [-E lazy_itable_init=0,lazy_journal_init=0 -F /dev/disk/azure/scsi1/lun3]
I0516 14:21:11.515228    2317 azure_common_linux.go:199] azureDisk - Disk successfully formatted (mkfs): ext4 - /dev/disk/azure/scsi1/lun3 tt
W0516 14:21:11.515300    2317 azure_common_linux.go:201] azureDisk - format of disk "/dev/disk/azure/scsi1/lun3" failed: type:("ext4") target:("tt") options:("o")error:(<nil>)

@andyzhangx
Contributor

@mrdfuse just loop me in if you hit such an Azure disk issue.
In the log, I can see that both of your azuredisk PVCs are eventually mounted into the pod successfully: one is mounted as /dev/disk/azure/scsi1/lun2 and the other as /dev/disk/azure/scsi1/lun3 on the agent node.

Time cost for Azure Disk PVC mount

The time cost for an Azure Disk PVC mount on a standard node size (e.g. Standard_D2_v2) is around 1 minute, and there is a PR with a caching fix that could reduce the mount time to around 30s.
In your current logs from the v1.8.7 cluster, mounting two Azure disks takes around 2 minutes, which is expected.

I pasted the useful logs below. Volume mounting that takes more than 2 minutes hits the timeout, and mounting two Azure disks can take a little more than 2 minutes, so a timeout error can occur; the operation is then retried and eventually succeeds. We have since fixed this by improving the mount time from 1 minute to about 30s; details can be found here.

I0516 14:20:03.425007    2317 operation_generator.go:416] MountVolume.WaitForAttach entering for volume "pvc-1342af71-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1342af71-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1") DevicePath "2"
I0516 14:20:05.315878    2317 server.go:779] GET /stats/summary/: (284.369149ms) 200 [[Go-http-client/1.1] 10.240.0.76:48542]
I0516 14:20:06.363237    2317 azure_common_linux.go:194] azureDisk - Disk "/dev/disk/azure/scsi1/lun2" appears to be unformatted, attempting to format as type: "ext4" with options: [-E lazy_itable_init=0,lazy_journal_init=0 -F /dev/disk/azure/scsi1/lun2]
...

I0516 14:21:07.440578    2317 operation_generator.go:416] MountVolume.WaitForAttach entering for volume "pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1" (UniqueName: "kubernetes.io/azure-disk/https://pvc1001418384002.blob.core.windows.net/1001418384/dela-acs-dev-dynamic-pvc-1351aa76-5914-11e8-ad8e-000d3a3880c1.vhd") pod "opinionated-cricket-jenkins-static-slave-5d89cc7845-bd8kz" (UID: "135aaf8b-5914-11e8-ad8e-000d3a3880c1") DevicePath "3"
I0516 14:21:10.075000    2317 kubelet_node_status.go:503] Using Node Hostname from cloudprovider: "k8s-agentpool1-31958223-2"
I0516 14:21:10.352227    2317 azure_common_linux.go:194] azureDisk - Disk "/dev/disk/azure/scsi1/lun3" appears to be unformatted, attempting to format as type: "ext4" with options: [-E lazy_itable_init=0,lazy_journal_init=0 -F /dev/disk/azure/scsi1/lun3]

And in your comments, you mention Azure disk mounts taking more than 15 minutes? What's the kubelet version? I also fixed such an issue, which only exists in v1.9.0 - v1.9.6; details are here:
https://github.com/andyzhangx/demo/blob/master/issues/azuredisk-issues.md#5-azure-disk-pvc-multi-attach-error-makes-disk-mount-very-slow-or-mount-failure-forever
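
A hedged sketch of how the per-volume mount time can be measured from a kubelet log like the one above, by pairing the WaitForAttach "entering" and "succeeded" lines for one PVC (kubelet.log stands in for however the log was captured):

# the timestamp difference between the two matching lines is the attach/mount time
grep 'MountVolume.WaitForAttach' kubelet.log | grep 'pvc-1342af71-5914-11e8-ad8e-000d3a3880c1'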

@mrdfuse

mrdfuse commented May 17, 2018

While the logs may show the disks mounted successfully in the end, Kubernetes never saw that and kept complaining about timeouts. I retried with a new pod, and then it succeeded after trying for a few hours (!).
Then I installed the same pod a few times in a row, but after normal business hours, and the disks got attached within a minute. I have noticed this before: at times it works quickly, other times I keep getting timeouts. We started using ACS/AKS 6 months ago and this issue has been there since the beginning.

I created an issue in the ACS project for this as well, since I was mistaken about the repo here.
How do I find out the version of the kubelet? The cluster was created using acs 0.13.1, if that helps.
Let me know what I can do to provide more diagnostics.

@andyzhangx
Contributor

@mrdfuse just run kubectl get no and paste the result.

@mrdfuse

mrdfuse commented May 17, 2018

NAME                        STATUS    ROLES     AGE    VERSION
k8s-agentpool1-31958223-0   Ready     agent     62d    v1.8.7
k8s-agentpool1-31958223-1   Ready     agent     62d    v1.8.7
k8s-agentpool1-31958223-2   Ready     agent     62d    v1.8.7
k8s-agentpool1-31958223-3   Ready     agent     62d    v1.8.7
k8s-master-31958223-0       Ready     master    62d    v1.8.7

@andyzhangx
Contributor

@mrdfuse So you tried with a new pod mounting the same azuredisk PVC? Was any azuredisk moved from one node to another? An azuredisk PVC can only be attached to one agent node at a time.
Please share the whole config (deployment, PVC, storage class, k8s version) so that I can help you with diagnostics. You may also file a new issue and just loop me in. Thanks.
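
A hedged sketch of commands that would gather the requested info (the ci namespace and storage class name are taken from the output above; adjust names as needed):

kubectl version
kubectl get sc azure-storage-class -o yaml
kubectl get pvc -n ci -o yaml
kubectl get deploy -n ci -o yaml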

@mrdfuse

mrdfuse commented May 17, 2018

New pod mount, new PVC (actually a new Helm chart release). No moving disks.
Maybe it's best to follow up in the issue I created on the ACS project? Azure/acs-engine#2989

@damienwebdev

Chiming in here to say the same is true with B2S in Central US. Switching to a Standard D2s v3 (2 vcpus, 8 GB memory) works just fine.

If the B-series VMs don't work, please remove them from the portal, or at the very least put a notice there that the B-series doesn't work for PVCs. PVCs, while not necessary for all containers, are rather important for 99% of clusters.

It's a bit frustrating to see this: to get excited about saving a bit of $$$ on a proof-of-concept cluster, only to have to trash the cluster and bring up another at a significantly larger cost (2x).

@andyzhangx
Contributor

@slack I filed a new issue here, "Change default VM size to 2 CPU cores at least in azure portal of creating AKS cluster page":
#541

@andyzhangx
Contributor

As I can see in the Azure portal, when creating an AKS cluster the default VM size is Standard DS1 v2 (1 vcpu, 3.5 GB memory). According to my investigation, VM sizes with 1 core have problems: the system is very slow. Could you change the default VM size to at least 2 CPU cores, e.g. Standard D2s v3 (2 vcpus, 8 GB memory)?

Moreover, I think we should remove all VM sizes with 1 CPU core from the AKS support list.

@ghost locked this issue as resolved and limited conversation to collaborators Aug 11, 2020