This repository has been archived by the owner on Jan 24, 2023. It is now read-only.

PersistentVolumeClaims changing to Read-only file system suddenly. #113

Closed
smileisak opened this issue Mar 21, 2018 · 6 comments

Comments

@smileisak

Is this a request for help?:

**This is a BUG REPORT**:

Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm)

$ ➜ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.7", GitCommit:"8e1552342355496b62754e61ad5f802a0f3f1fa7", GitTreeState:"clean", BuildDate:"2017-09-28T23:56:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

What happened:
I'm using a Kubernetes cluster in production. I have persistence issues when using
PersistentVolumeClaims that provision PersistentVolumes through the default StorageClass.

All my databases (every pod using PersistentVolumes) go into CrashLoopBackOff after a period of time. Looking at the logs, it appears the PV mount changes to read-only.

Example logs for mongodb:

mongo-0                                              0/1       CrashLoopBackOff   6          6d
$ ➜ k logs -f mongo-0 
chown: changing ownership of '/data/db': Read-only file system

This happens with all pods that use PersistentVolumeClaims.

What you expected to happen:

All Azure disks should be mounted as ReadWrite and should not change during runtime.

How to reproduce it (as minimally and precisely as possible):
Create a PersistentVolumeClaim from the default StorageClass and mount it in a pod. The time it takes to reproduce this problem is random.
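A minimal reproduction can be sketched as the manifest below. This is an illustrative sketch, not taken from the original report: the names, size, and image are assumptions; the essential part is a PVC against the cluster's default StorageClass, mounted read-write by a single pod.

```yaml
# Hypothetical minimal repro: PVC from the default StorageClass,
# mounted read-write by one pod that writes to the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: repro-pvc          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi         # illustrative size
---
apiVersion: v1
kind: Pod
metadata:
  name: repro-pod          # illustrative name
spec:
  containers:
    - name: app
      image: mongo:3.6     # any image that writes to the volume
      volumeMounts:
        - name: data
          mountPath: /data/db
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: repro-pvc
```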

Anything else we need to know:

@andyzhangx

@smileisak Could you check how many Azure disks are attached to your VM? Recently we found a host cache setting issue: when there are more than 5 Azure data disks attached, a disk may become unavailable. Details below:
https://github.com/andyzhangx/demo/blob/master/issues/README.md#2-disk-unavailable-after-attachdetach-a-data-disk-on-a-node

@peskybp

peskybp commented Apr 27, 2018

@andyzhangx Is there some way to actually prevent the cluster from scheduling more than 5 data disk mounts onto a specific VM?

In numerous cases I am seeing our clusters unfortunately schedule all the pods that need mounts onto the same machine, thus not only exceeding the 5 mounts you mention but also attempting to exceed the 8-data-disk maximum that the specific VM size itself can support.

@peskybp

peskybp commented Apr 27, 2018

@andyzhangx Also, I couldn't find anything in the docs you linked that referenced issues related to "5 data disks" or anything along those lines.

Were you referring to the caching mode issues in the section entitled "2. disk unavailable after attach/detach a data disk on a node" (where the workaround is to set cachingmode: None explicitly)?

@andyzhangx

@peskybp correct, you should set cachingmode to None. Details: https://github.com/andyzhangx/demo/blob/master/issues/azuredisk-issues.md#2-disk-unavailable-after-attachdetach-a-data-disk-on-a-node
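The workaround can be sketched as a custom azure-disk StorageClass with host caching disabled. The class name is illustrative (it is not from this thread); `cachingmode` is the azure-disk provisioner parameter being discussed:

```yaml
# Sketch of an azure-disk StorageClass applying the workaround above.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-disk-nocache    # illustrative name
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed
  cachingmode: None             # the workaround: disable host caching
```

PVCs would then reference this class via `storageClassName` instead of relying on the default StorageClass.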

Regarding maximum data disk support, there is already a design proposal in the k8s community ("Add a design proposal for dynamic volume limits"); this should be fixed in v1.11.

@gnufied

gnufied commented May 11, 2018

Even without my proposal, for the AzureDisk type it should be possible to set a limit in the latest k8s release. Can you try setting the KUBE_MAX_PD_VOLS env variable to something like 5 and restarting the scheduler?
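For a kube-scheduler running as a static pod, setting that env variable might look like the fragment below. This is a sketch under common-default assumptions: the manifest path and container name vary by setup, and only the `env` entry is the point here.

```yaml
# Fragment of the kube-scheduler static pod manifest
# (commonly /etc/kubernetes/manifests/kube-scheduler.yaml; adjust for
# your setup). The kubelet restarts the scheduler when this file changes.
spec:
  containers:
    - name: kube-scheduler
      env:
        - name: KUBE_MAX_PD_VOLS   # cap attachable data disks per node
          value: "5"
```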

Fortunately the scheduler already ships with some built-in intelligence for AzureDisks - https://github.com/kubernetes/kubernetes/blob/master/pkg/scheduler/algorithm/predicates/predicates.go#L106 - so you probably don't need to wait for the new design.

@andyzhangx

@gnufied thanks for the solution!
