diff --git a/docs/admin/authorization/abac.md b/docs/admin/authorization/abac.md
index 45bd9293e1d1e..d06b5a1ef8d3c 100644
--- a/docs/admin/authorization/abac.md
+++ b/docs/admin/authorization/abac.md
@@ -45,7 +45,7 @@ properties:
     - Wildcard:
       - `*` matches all non-resource requests.
       - `/foo/*` matches all subpaths of `/foo/`.
-  - `readonly`, type boolean, when true, means that the policy only applies to get, list, and watch operations.
+  - `readonly`, type boolean, when true, means that a resource-matching policy applies only to the get, list, and watch operations, and that a non-resource-matching policy applies only to the get operation.
 
 **NOTES:** An unset property is the same as a property set to the zero value
 for its type (e.g. empty string, 0, false). However, unset should be preferred for
diff --git a/docs/admin/kube-controller-manager.md b/docs/admin/kube-controller-manager.md
index 5a622265637e3..5d8b9eae99172 100644
--- a/docs/admin/kube-controller-manager.md
+++ b/docs/admin/kube-controller-manager.md
@@ -45,7 +45,7 @@ kube-controller-manager
       --concurrent-service-syncs int32                  The number of services that are allowed to sync concurrently. Larger number = more responsive service management, but more CPU (and network) load (default 1)
       --concurrent-serviceaccount-token-syncs int32     The number of service account token objects that are allowed to sync concurrently. Larger number = more responsive token generation, but more CPU (and network) load (default 5)
       --concurrent_rc_syncs int32                       The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load (default 5)
-      --configure-cloud-routes                          Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider. (default true)
+      --configure-cloud-routes                          Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider. If you are using a network overlay that handles routing independently of the cloud provider, set this to false. (default true)
       --contention-profiling                            Enable lock contention profiling, if profiling is enabled
       --controller-start-interval duration              Interval between starting controller managers.
       --controllers stringSlice                         A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'.
diff --git a/docs/concepts/configuration/assign-pod-node.md b/docs/concepts/configuration/assign-pod-node.md
index 2e264cd8ab3f1..10ff16022ed57 100644
--- a/docs/concepts/configuration/assign-pod-node.md
+++ b/docs/concepts/configuration/assign-pod-node.md
@@ -168,7 +168,7 @@ and an example `preferredDuringSchedulingIgnoredDuringExecution` anti-affinity w
 Inter-pod affinity is specified as field `podAffinity` of field `affinity` in the PodSpec.
 And inter-pod anti-affinity is specified as field `podAntiAffinity` of field `affinity` in the PodSpec.
 
-Here's an example of a pod that uses pod affinity:
+#### An example of a pod that uses pod affinity
 
 {% include code.html language="yaml" file="pod-with-pod-affinity.yaml" ghlink="/docs/concepts/configuration/pod-with-pod-affinity.yaml" %}
 
@@ -206,6 +206,95 @@ If defined but empty, it means "all namespaces."
 
 All `matchExpressions` associated with `requiredDuringSchedulingIgnoredDuringExecution` affinity
 and anti-affinity must be satisfied for the pod to schedule onto a node.
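+
+For instance, a pod carrying the rule below (a hedged sketch; the pod name, label keys, and values are
+invented for illustration) can only be scheduled onto a node that already runs a pod whose labels satisfy
+*both* `matchExpressions`: `app` in `web-store` and `environment` not in `canary`.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: with-two-expressions # hypothetical name
+spec:
+  affinity:
+    podAffinity:
+      requiredDuringSchedulingIgnoredDuringExecution:
+      - labelSelector:
+          matchExpressions: # ANDed: both must match the same existing pod
+          - key: app
+            operator: In
+            values:
+            - web-store
+          - key: environment
+            operator: NotIn
+            values:
+            - canary
+        topologyKey: "kubernetes.io/hostname"
+  containers:
+  - name: main
+    image: nginx:1.12-alpine # any image works for the illustration
+```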
+
+#### More practical use cases
+
+Inter-pod affinity and anti-affinity can be even more useful when they are used with higher-level
+collections such as ReplicaSets, StatefulSets, and Deployments. One can easily configure that a set of workloads should
+be co-located in the same defined topology, e.g., the same node.
+
+##### Always co-located in the same node
+
+In a three-node cluster, a web application uses an in-memory cache such as redis. We want the web servers to be co-located with the cache as much as possible.
+Here is the yaml snippet of a simple redis deployment with three replicas and the selector label `app=store`:
+
+```yaml
+apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1
+kind: Deployment
+metadata:
+  name: redis-cache
+spec:
+  replicas: 3
+  template:
+    metadata:
+      labels:
+        app: store
+    spec:
+      containers:
+      - name: redis-server
+        image: redis:3.2-alpine
+```
+
+The yaml snippet below for the web-server deployment has `podAffinity` configured. This informs the scheduler that all of its replicas are to be
+co-located with pods that have the selector label `app=store`:
+
+```yaml
+apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1
+kind: Deployment
+metadata:
+  name: web-server
+spec:
+  replicas: 3
+  template:
+    metadata:
+      labels:
+        app: web-store
+    spec:
+      affinity:
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: app
+                operator: In
+                values:
+                - store
+            topologyKey: "kubernetes.io/hostname"
+      containers:
+      - name: web-app
+        image: nginx:1.12-alpine # illustrative image; the original snippet omitted this required field
+```
+
+If we create the above two deployments, our three-node cluster could look like the following.
+
+| node-1        | node-2        | node-3        |
+|:-------------:|:-------------:|:-------------:|
+| *webserver-1* | *webserver-2* | *webserver-3* |
+| *cache-1*     | *cache-2*     | *cache-3*     |
+
+As you can see, all 3 replicas of the `web-server` are automatically co-located with the cache as expected.
+
+```
+$ kubectl get pods -o wide
+NAME                           READY     STATUS    RESTARTS   AGE       IP           NODE
+redis-cache-1450370735-6dzlj   1/1       Running   0          8m        10.192.4.2   kube-node-3
+redis-cache-1450370735-j2j96   1/1       Running   0          8m        10.192.2.2   kube-node-1
+redis-cache-1450370735-z73mh   1/1       Running   0          8m        10.192.3.1   kube-node-2
+web-server-1287567482-5d4dz    1/1       Running   0          7m        10.192.2.3   kube-node-1
+web-server-1287567482-6f7v5    1/1       Running   0          7m        10.192.4.3   kube-node-3
+web-server-1287567482-s330j    1/1       Running   0          7m        10.192.3.2   kube-node-2
+```
+
+The best practice is to configure highly available stateful workloads such as redis with anti-affinity rules for more guaranteed spreading, as we will see in the next section.
+
+##### Never co-located in the same node
+
+A highly available database StatefulSet has one master and three replicas; one may prefer none of the database instances to be co-located on the same node.
+
+| node-1      | node-2         | node-3         | node-4         |
+|:-----------:|:--------------:|:--------------:|:--------------:|
+| *DB-MASTER* | *DB-REPLICA-1* | *DB-REPLICA-2* | *DB-REPLICA-3* |
+
+[Here](https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure) is an example of a zookeeper StatefulSet configured with anti-affinity for high availability, and a minimal sketch of the same rule follows below.
+
 For more information on inter-pod affinity/anti-affinity, see the design doc
 [here](https://git.k8s.io/community/contributors/design-proposals/podaffinity.md).
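+
+As promised above, here is the minimal sketch of such an anti-affinity rule. It is hedged: the name `db`,
+the label `app=db`, and the redis image are stand-ins for illustration, not taken from the zookeeper tutorial.
+Each replica refuses to schedule onto a node (`kubernetes.io/hostname`) that already runs a pod labeled
+`app=db`, which yields the one-instance-per-node layout in the table above.
+
+```yaml
+apiVersion: apps/v1beta1 # StatefulSet API group for the 1.6-era releases this page targets
+kind: StatefulSet
+metadata:
+  name: db # hypothetical name
+spec:
+  serviceName: db
+  replicas: 4
+  template:
+    metadata:
+      labels:
+        app: db
+    spec:
+      affinity:
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: app
+                operator: In
+                values:
+                - db
+            topologyKey: "kubernetes.io/hostname" # at most one app=db pod per node
+      containers:
+      - name: db
+        image: redis:3.2-alpine # stand-in; a real database image would go here
+```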
diff --git a/docs/concepts/storage/volumes.md b/docs/concepts/storage/volumes.md
index ead665d68f71a..54713a9ffc99b 100644
--- a/docs/concepts/storage/volumes.md
+++ b/docs/concepts/storage/volumes.md
@@ -289,10 +289,10 @@ spec:
 
 ### nfs
 
-A `nfs` volume allows an existing NFS (Network File System) share to be
+An `nfs` volume allows an existing NFS (Network File System) share to be
 mounted into your pod. Unlike `emptyDir`, which is erased when a Pod is
 removed, the contents of an `nfs` volume are preserved and the volume is merely
-unmounted. This means that a NFS volume can be pre-populated with data, and
+unmounted. This means that an NFS volume can be pre-populated with data, and
 that data can be "handed off" between pods. NFS can be mounted by multiple
 writers simultaneously.
@@ -322,7 +322,7 @@ See the [iSCSI example](https://github.com/kubernetes/kubernetes/tree/{{page.git
 
 ### fc (fibre channel)
 
-A `fc` volume allows an existing fibre channel volume to be mounted into your pod.
+An `fc` volume allows an existing fibre channel volume to be mounted into your pod.
 You can specify single or multiple target World Wide Names to the parameter
 targetWWNs in your volume configuration. If multiple WWNs are specified,
 targetWWNs expects that those WWNs form multipath connection.
@@ -365,7 +365,7 @@ See the [GlusterFS example](https://github.com/kubernetes/kubernetes/tree/{{page.git
 
 ### rbd
 
-A `rbd` volume allows a [Rados Block
+An `rbd` volume allows a [Rados Block
 Device](http://ceph.com/docs/master/rbd/rbd/) volume to be mounted into your
 pod. Unlike `emptyDir`, which is erased when a Pod is removed, the contents
 of a `rbd` volume are preserved and the volume is merely unmounted. This
diff --git a/netlify.toml b/netlify.toml
index e92a6153002c3..bac7e0b5abe4c 100644
--- a/netlify.toml
+++ b/netlify.toml
@@ -4,6 +4,3 @@
 
 [context.deploy-preview]
   command = "make build-preview"
-
-[context.vnext-staging]
-  command = "make build && cp netlify_noindex_headers.txt _site/_headers"
diff --git a/netlify_noindex_headers.txt b/netlify_noindex_headers.txt
new file mode 100644
index 0000000000000..c45a1be4250c0
--- /dev/null
+++ b/netlify_noindex_headers.txt
@@ -0,0 +1,3 @@
+# Prevent bots from indexing site
+/*
+  X-Robots-Tag: noindex
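
As a quick sanity check on the new headers file (a hedged aside, not part of the patch): assuming a build
context copies it to `_site/_headers`, as the removed `vnext-staging` command did, the served header can be
verified against a deployed preview. The URL below is a placeholder, not a real deployment.

```
# -I requests only the response headers; grep filters for the tag case-insensitively
$ curl -sI https://deploy-preview-1234--example.netlify.com/ | grep -i x-robots-tag
# expected output, given the file above:
X-Robots-Tag: noindex
```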