diff --git a/BETA_LIMITATIONS.md b/BETA_LIMITATIONS.md index 3642f38a2f..758fc9e448 100644 --- a/BETA_LIMITATIONS.md +++ b/BETA_LIMITATIONS.md @@ -8,12 +8,12 @@ This is a list of beta limitations: * [Latency](#latency): GLBC is not built for performance. Creating many Ingresses at a time can overwhelm it. It won't fall over, but will take its own time to churn through the Ingress queue. * [Quota](#quota): By default, GCE projects are granted a quota of 3 Backend Services. This is insufficient for most Kubernetes clusters. * [Oauth scopes](https://cloud.google.com/compute/docs/authentication): By default GKE/GCE clusters are granted "compute/rw" permissions. If you setup a cluster without these permissions, GLBC is useless and you should delete the controller as described in the [section below](#disabling-glbc). If you don't delete the controller it will keep restarting. -* [Default backends](https://cloud.google.com/compute/docs/load-balancing/http/url-map#url_map_simplest_case): All L7 Loadbalancers created by GLBC have a default backend. If you don't specify one in your Ingress, GLBC will assign the 404 default backend mentioned above. +* [Default backends](https://cloud.google.com/compute/docs/load-balancing/http/url-map#url_map_simplest_case): All L7 loadbalancers created by GLBC have a default backend. If you don't specify one in your Ingress, GLBC will assign the 404 default backend mentioned above. * [Load Balancing Algorithms](#load-balancing-algorithms): The ingress controller doesn't support fine grained control over loadbalancing algorithms yet. * [Large clusters](#large-clusters): Ingress on GCE isn't supported on large (>1000 nodes), single-zone clusters. * [Teardown](README.md#deletion): The recommended way to tear down a cluster with active Ingresses is to either delete each Ingress, or hit the `/delete-all-and-quit` endpoint on GLBC, before invoking a cluster teardown script (eg: kube-down.sh). You will have to manually cleanup GCE resources through the [cloud console](https://cloud.google.com/compute/docs/console#access) or [gcloud CLI](https://cloud.google.com/compute/docs/gcloud-compute/) if you simply tear down the cluster with active Ingresses. * [Changing UIDs](#changing-the-cluster-uid): You can change the UID used as a suffix for all your GCE cloud resources, but this requires you to delete existing Ingresses first. -* [Cleaning up](#cleaning-up-cloud-resources): You can delete loadbalancers that older clusters might've leaked due to premature teardown through the GCE console. +* [Cleaning up](#cleaning-up-cloud-resources): You can delete loadbalancers that older clusters might have leaked due to premature teardown through the GCE console. ## Prerequisites @@ -33,7 +33,7 @@ See [GCE documentation](https://cloud.google.com/compute/docs/resource-quotas#ch ## Latency -It takes ~1m to spin up a loadbalancer (this includes acquiring the public ip), and ~5-6m before the GCE api starts healthchecking backends. So as far as latency goes, here's what to expect: +It takes ~1m to spin up a loadbalancer (this includes acquiring the public IP), and ~5-6m before the GCE API starts healthchecking backends. So as far as latency goes, here's what to expect: Assume one creates the following simple Ingress: ```yaml @@ -98,7 +98,7 @@ GCE has a concept of [ephemeral](https://cloud.google.com/compute/docs/instances ## Load Balancing Algorithms -Right now, a kube-proxy nodePort is a necessary condition for Ingress on GCP. 
This is because the cloud lb doesn't understand how to route directly to your pods. Incorporating kube-proxy and cloud lb algorithms so they cooperate toward a common goal is still a work in progress. If you really want fine grained control over the algorithm, you should deploy the nginx ingress controller. +Right now, a kube-proxy NodePort service is a necessary condition for Ingress on GCP. This is because the cloud LB doesn't understand how to route directly to your pods. Incorporating kube-proxy and cloud LB algorithms so they cooperate toward a common goal is still a work in progress. If you really want fine grained control over the algorithm, you should deploy the nginx ingress controller. ## Large clusters @@ -106,7 +106,7 @@ Ingress is not yet supported on single zone clusters of size > 1000 nodes ([issu ## Disabling GLBC -To completely stop the Ingress controller on GCE/GKE, please see [this] (/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller) faq. +To completely stop the Ingress controller on GCE/GKE, please see [this](/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller) FAQ. ## Changing the cluster UID @@ -147,7 +147,7 @@ above, and reset it to a string bereft of `--`. If you deleted a GKE/GCE cluster without first deleting the associated Ingresses, the controller would not have deleted the associated cloud resources. If you find yourself in such a situation, you can delete the resources by hand: 1. Navigate to the [cloud console](https://console.cloud.google.com/) and click on the "Networking" tab, then choose "LoadBalancing" -2. Find the loadbalancer you'd like to delete, it should have a name formatted as: k8s-um-ns-name--UUID +2. Find the loadbalancer you'd like to delete, it should have a name formatted as: `k8s-um-ns-name--UUID` 3. Delete it, check the boxes to also cascade the deletion down to associated resources (eg: backend-services) 4. Switch to the "Compute Engine" tab, then choose "Instance Groups" -5. Delete the Instance Group allocated for the leaked Ingress, it should have a name formatted as: k8s-ig-UUID +5. Delete the Instance Group allocated for the leaked Ingress, it should have a name formatted as: `k8s-ig-UUID` diff --git a/README.md b/README.md index febee8a757..d12e40272e 100644 --- a/README.md +++ b/README.md @@ -36,9 +36,9 @@ An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches th To achieve L7 loadbalancing through Kubernetes, we employ a resource called `Ingress`.
The Ingress is consumed by this loadbalancer controller, which creates the following GCE resource graph: -[Global Forwarding Rule](https://cloud.google.com/compute/docs/load-balancing/http/global-forwarding-rules) -> [TargetHttpProxy](https://cloud.google.com/compute/docs/load-balancing/http/target-proxies) -> [Url Map](https://cloud.google.com/compute/docs/load-balancing/http/url-map) -> [Backend Service](https://cloud.google.com/compute/docs/load-balancing/http/backend-service) -> [Instance Group](https://cloud.google.com/compute/docs/instance-groups/) +[Global Forwarding Rule](https://cloud.google.com/compute/docs/load-balancing/http/global-forwarding-rules) -> [TargetHttpProxy](https://cloud.google.com/compute/docs/load-balancing/http/target-proxies) -> [URL Map](https://cloud.google.com/compute/docs/load-balancing/http/url-map) -> [Backend Service](https://cloud.google.com/compute/docs/load-balancing/http/backend-service) -> [Instance Group](https://cloud.google.com/compute/docs/instance-groups/) -The controller (glbc) manages the lifecycle of each component in the graph. It uses the Kubernetes resources as a spec for the desired state, and the GCE cloud resources as the observed state, and drives the observed to the desired. If an edge is disconnected, it fixes it. Each Ingress translates to a new GCE L7, and the rules on the Ingress become paths in the GCE Url Map. This allows you to route traffic to various backend Kubernetes Services through a single public IP, which is in contrast to `Type=LoadBalancer`, which allocates a public IP *per* Kubernetes Service. For this to work, the Kubernetes Service *must* have Type=NodePort. +The controller (GLBC) manages the lifecycle of each component in the graph. It uses the Kubernetes resources as a spec for the desired state, and the GCE cloud resources as the observed state, and drives the observed to the desired. If an edge is disconnected, it fixes it. Each Ingress translates to a new GCE L7, and the rules on the Ingress become paths in the GCE URL Map. This allows you to route traffic to various backend Kubernetes Services through a single public IP, which is in contrast to `Type=LoadBalancer`, which allocates a public IP *per* Kubernetes Service. For this to work, the Kubernetes Service *must* have Type=NodePort. ### The Ingress @@ -59,26 +59,26 @@ An Ingress in Kubernetes is a REST object, similar to a Service. A minimal Ingre 12. servicePort: 80 ``` -POSTing this to the Kubernetes API server would result in glbc creating a GCE L7 that routes all traffic sent to `http://ip-of-loadbalancer/hostless` to :80 of the service named `test`. If the service doesn't exist yet, or doesn't have a nodePort, glbc will allocate an IP and wait till it does. Once the Service shows up, it will create the required path rules to route traffic to it. +POSTing this Ingress to the Kubernetes API server causes GLBC to create a GCE L7 that routes all traffic sent to `http://ip-of-loadbalancer/hostless` to :80 of the service named `test`. If the Service doesn't exist yet or isn't of type NodePort, GLBC will allocate an IP and wait. Once the Service shows up with a NodePort, GLBC will create the required path rules to route traffic to it. -__Lines 1-4__: Resource metadata used to tag GCE resources. For example, if you go to the console you would see a url map called: k8-fw-default-hostlessendpoint, where default is the namespace and hostlessendpoint is the name of the resource.
The Kubernetes API server ensures that namespace/name is unique so there will never be any collisions. +__Lines 1-4__: Resource metadata used to tag GCE resources. For example, if you go to the console you would see a URL Map called: `k8-fw-default-hostlessendpoint`, where default is the namespace and `hostlessendpoint` is the name of the resource. The Kubernetes API server ensures that namespace/name is unique so there will never be any collisions. -__Lines 5-7__: Ingress Spec has all the information needed to configure a GCE L7. Most importantly, it contains a list of `rules`. A rule can take many forms, but the only rule relevant to glbc is the `http` rule. +__Lines 5-7__: Ingress Spec has all the information needed to configure a GCE L7. Most importantly, it contains a list of `rules`. A rule can take many forms, but the only rule relevant to GLBC is the `http` rule. -__Lines 8-9__: Each http rule contains the following information: A host (eg: foo.bar.com, defaults to `*` in this example), a list of paths (eg: `/hostless`) each of which has an associated backend (`test:80`). Both the `host` and `path` must match the content of an incoming request before the L7 directs traffic to the `backend`. +__Lines 8-9__: Each HTTP rule contains the following information: A host (eg: foo.bar.com, defaults to `*` in this example), a list of paths (eg: `/hostless`) each of which has an associated backend (`test:80`). Both the `host` and `path` must match the content of an incoming request before the L7 directs traffic to the `backend`. -__Lines 10-12__: A `backend` is a service:port combination. It selects a group of pods capable of servicing traffic sent to the path specified in the parent rule. The `port` is the desired `spec.ports[*].port` from the Service Spec -- Note, though, that the L7 actually directs traffic to the corresponding `NodePort`. +__Lines 10-12__: A `backend` is a service:port combination. It selects a group of pods capable of servicing traffic sent to the path specified in the parent rule. The `port` is the desired `spec.ports[*].port` from the Service Spec -- Note, though, that the L7 actually directs traffic to the port's corresponding `NodePort`. -__Global Parameters__: For the sake of simplicity the example Ingress has no global parameters. However, one can specify a default backend (see examples below) in the absence of which requests that don't match a path in the spec are sent to the default backend of glbc. +__Global Parameters__: For the sake of simplicity the example Ingress has no global parameters. However, one can specify a default backend (see examples below) in the absence of which requests that don't match a path in the spec are sent to the default backend of GLBC. ## Load Balancer Management -You can manage a GCE L7 by creating/updating/deleting the associated Kubernetes Ingress. +You can manage a GCE L7 by creating, updating, or deleting the associated Kubernetes Ingress. ### Creation -Before you can start creating Ingress you need to start up glbc. We can use the rc.yaml in this directory: +Before you can start creating Ingress you need to start up GLBC. We can use the rc.yaml in this directory: ```shell $ kubectl create -f rc.yaml replicationcontroller "glbc" created @@ -90,13 +90,13 @@ glbc-6m6b6 2/2 Running 0 21s A couple of things to note about this controller: * It needs a service with a node port to use as the default backend. This is the backend that's used when an Ingress does not specify the default. 
-* It has an intentionally long terminationGracePeriod, this is only required with the --delete-all-on-quit flag (see [Deletion](#deletion)) +* It has an intentionally long `terminationGracePeriod`, this is only required with the --delete-all-on-quit flag (see [Deletion](#deletion)) * Don't start 2 instances of the controller in a single cluster, they will fight each other. The loadbalancer controller will watch for Services, Nodes and Ingress. Nodes already exist (the nodes in your cluster). We need to create the other 2. You can do so using the ingress-app.yaml in this directory. A couple of things to note about the Ingress: -* It creates a Replication Controller for a simple echoserver application, with 1 replica. +* It creates a Replication Controller for a simple "echoserver" application, with 1 replica. * It creates 3 services for the same application pod: echoheaders[x, y, default] * It creates an Ingress with 2 hostnames and 3 endpoints (foo.bar.com{/foo} and bar.baz.com{/foo, /bar}) that access the given service @@ -132,7 +132,7 @@ I1005 22:11:34.385161 1 utils.go:83] Syncing e2e-test-beeps-minion-ugv1 ... ``` -When it's done, it will update the status of the Ingress with the ip of the L7 it created: +When it's done, it will update the status of the Ingress with the IP of the L7 it created: ```shell $ kubectl get ing NAME RULE BACKEND ADDRESS @@ -145,11 +145,11 @@ echomap - echoheadersdefault:80 107.178.254.239 ``` Go to your GCE console and confirm that the following resources have been created through the HTTPLoadbalancing panel: -* A Global Forwarding Rule -* An UrlMap -* A TargetHTTPProxy -* BackendServices (one for each Kubernetes nodePort service) -* An Instance Group (with ports corresponding to the BackendServices) +* Global Forwarding Rule +* URL Map +* TargetHTTPProxy +* Backend Services (one for each Kubernetes NodePort service) +* An Instance Group (with ports corresponding to the Backend Services) The HTTPLoadBalancing panel will also show you if your backends have responded to the health checks, wait till they do. This can take a few minutes. If you see `Health status will display here once configuration is complete.` the L7 is still bootstrapping. Wait till you have `Healthy instances: X`. Even though the GCE L7 is driven by our controller, which notices the Kubernetes healthchecks of a pod, we still need to wait on the first GCE L7 health check to complete. Once your backends are up and healthy: @@ -182,7 +182,7 @@ You can also edit `/etc/hosts` instead of using `--resolve`. #### Updates -Say you don't want a default backend and you'd like to allow all traffic hitting your loadbalancer at /foo to reach your echoheaders backend service, not just the traffic for foo.bar.com. You can modify the Ingress Spec: +Say you don't want a default backend and you'd like to allow all traffic hitting your loadbalancer at `/foo` to reach your echoheaders backend service, not just the traffic for foo.bar.com. You can modify the Ingress Spec: ```yaml spec: @@ -218,14 +218,12 @@ for me but it might not have some of the features you want. If you would A couple of things to note about this particular update: * An Ingress without a default backend inherits the backend of the Ingress controller. -* A IngressRule without a host gets the wildcard. This is controller specific, some loadbalancer controllers do not respect anything but a DNS subdomain as the host. You *cannot* set the host to a regex. +* A IngressRule without a host gets the wildcard. 
This is controller specific: some loadbalancer controllers do not respect anything but a DNS subdomain as the host. You *cannot* set the host to a regular expression. * You never want to delete then re-create an Ingress, as it will result in the controller tearing down and recreating the loadbalancer. -__Unexpected updates__: Since glbc constantly runs a control loop it won't allow you to break links that black hole traffic. An easy link to break is the url map itself, but you can also disconnect a target proxy from the urlmap, or remove an instance from the instance group (note this is different from *deleting* the instance, the loadbalancer controller will not recreate it if you do so). Modify one of the url links in the map to point to another backend through the GCE Control Panel UI, and wait till the controller sync (this happens as frequently as you tell it to, via the --resync-period flag). The same goes for the Kubernetes side of things, the API server will validate against obviously bad updates, but if you relink an Ingress so it points to the wrong backends the controller will blindly follow. - ### Paths -Till now, our examples were simplified in that they hit an endpoint with a catch-all path regex. Most real world backends have subresources. Let's create service to test how the loadbalancer handles paths: +Until now, our examples were simplified in that they hit an endpoint with a catch-all path regular expression. Most real world backends have sub-resources. Let's create a service to test how the loadbalancer handles paths: ```yaml apiVersion: v1 kind: ReplicationController @@ -316,7 +314,7 @@ As before, wait a while for the update to take effect, and try accessing `loadba #### Deletion -Most production loadbalancers live as long as the nodes in the cluster and are torn down when the nodes are destroyed. That said, there are plenty of use cases for deleting an Ingress, deleting a loadbalancer controller, or just purging external loadbalancer resources altogether. Deleting a loadbalancer controller pod will not affect the loadbalancers themselves, this way your backends won't suffer a loss of availability if the scheduler pre-empts your controller pod. Deleting a single loadbalancer is as easy as deleting an Ingress via kubectl: +Deleting a loadbalancer controller pod will not affect the loadbalancers themselves; this way your backends won't suffer a loss of availability if the scheduler pre-empts your controller pod. Deleting a single loadbalancer is as easy as deleting an Ingress via kubectl: ```shell $ kubectl delete ing echomap $ kubectl logs --follow glbc-6m6b6 l7-lb-controller @@ -329,7 +327,7 @@ I1007 00:26:02.043188 1 backends.go:134] Deleting backend k8-be-30301 I1007 00:26:05.591140 1 backends.go:134] Deleting backend k8-be-30284 I1007 00:26:09.159016 1 controller.go:232] Finished syncing default/echomap ``` -Note that it takes ~30 seconds to purge cloud resources, the API calls to create and delete are a onetime cost. GCE BackendServices are ref-counted and deleted by the controller as you delete Kubernetes Ingress'. This is not sufficient for cleanup, because you might have deleted the Ingress while glbc was down, in which case it would leak cloud resources. You can delete the glbc and purge cloud resources in 2 more ways: +Note that it takes ~30 seconds per ingress to purge cloud resources. This may not be a sufficient cleanup because you might have deleted the Ingress while GLBC was down, in which case it would leak cloud resources.
You can delete the GLBC and purge cloud resources in two more ways: __The dev/test way__: If you want to delete everything in the cloud when the loadbalancer controller pod dies, start it with the --delete-all-on-quit flag. When a pod is killed it's first sent a SIGTERM, followed by a grace period (set to 10minutes for loadbalancer controllers), followed by a SIGKILL. The controller pod uses this time to delete cloud resources. Be careful with --delete-all-on-quit, because if you're running a production glbc and the scheduler re-schedules your pod for some reason, it will result in a loss of availability. You can do this because your rc.yaml has: ```yaml @@ -378,16 +376,16 @@ You just instructed the loadbalancer controller to quit, however if it had done Currently, all service backends must satisfy *either* of the following requirements to pass the HTTP(S) health checks sent to it from the GCE loadbalancer: 1. Respond with a 200 on '/'. The content does not matter. -2. Expose an arbitrary url as a `readiness` probe on the pods backing the Service. +2. Expose an arbitrary URL as a `readiness` probe on the pods backing the Service. The Ingress controller looks for a compatible readiness probe first, if it finds one, it adopts it as the GCE loadbalancer's HTTP(S) health check. If there's no readiness probe, or the readiness probe requires special HTTP headers, the Ingress controller points the GCE loadbalancer's HTTP health check at '/'. [This is an example](/examples/health-checks/README.md) of an Ingress that adopts the readiness probe from the endpoints as its health check. ## Frontend HTTPS -For encrypted communication between the client to the load balancer, you need to specify a TLS private key and certitificate to be used by the ingress controller. +For encrypted communication between the client and the load balancer, you need to specify a TLS private key and certificate to be used by the ingress controller. Ingress controller can read the private key and certificate from 2 sources: -* kubernetes [secret](http://kubernetes.io/docs/user-guide/secrets). +* Kubernetes [secret](http://kubernetes.io/docs/user-guide/secrets). * [GCP SSL certificate](https://cloud.google.com/compute/docs/load-balancing/http/ssl-certificates). @@ -396,7 +394,7 @@ Currently the Ingress only supports a single TLS port, 443, and assumes TLS term ### Secret For the ingress controller to use the certificate and private key stored in a -kubernetes secret, user needs to specify the secret name in the TLS configuration section +Kubernetes secret, user needs to specify the secret name in the TLS configuration section of their ingress spec. The secret is assumed to exist in the same namespace as the ingress. This controller does not support SNI, so it will ignore all but the first cert in the TLS configuration section. @@ -430,12 +428,12 @@ spec: servicePort: 80 ``` -This creates 2 GCE forwarding rules that use a single static ip. Both `:80` and `:443` will direct traffic to your backend, which serves HTTP requests on the target port mentioned in the Service associated with the Ingress. +This creates 2 GCE forwarding rules that use a single static IP. Both `:80` and `:443` will direct traffic to your backend, which serves HTTP requests on the target port mentioned in the Service associated with the Ingress. ### GCP SSL Cert For the ingress controller to use the certificate and private key stored in a -GCP SSL cert, user needs to specify the ssl cert name using the `ingress.gcp.kubernetes.io/pre-shared-cert` annotation.
+GCP SSL cert, user needs to specify the SSL cert name using the `ingress.gcp.kubernetes.io/pre-shared-cert` annotation. The certificate in this case is managed by the user and it is their responsibility to create/delete it. The Ingress controller assigns the SSL certificate with this name to the target proxies of the Ingress. @@ -594,7 +592,7 @@ Note that the GCLB health checks *do not* get the `301` because they don't inclu #### Blocking HTTP -You can block traffic on `:80` through an annotation. You might want to do this if all your clients are only going to hit the loadbalancer through https and you don't want to waste the extra GCE forwarding rule, eg: +You can block traffic on `:80` through an annotation. You might want to do this if all your clients are only going to hit the loadbalancer through HTTPS and you don't want to waste the extra GCE forwarding rule, eg: ```yaml apiVersion: extensions/v1beta1 kind: Ingress @@ -694,7 +692,7 @@ CLIENT VALUES: client_address=('10.240.29.196', 56401) (10.240.29.196) ``` -Then head over to the GCE node with internal ip 10.240.29.196 and check that the [Service is functioning](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/debugging-services.md) as expected. Remember that the GCE L7 is routing you through the NodePort service, and try to trace back. +Then head over to the GCE node with internal IP 10.240.29.196 and check that the [Service is functioning](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/debugging-services.md) as expected. Remember that the GCE L7 is routing you through the NodePort service, and try to trace back. * Check if you can access the backend service directly via nodeip:nodeport * Check the GCE console @@ -732,12 +730,13 @@ $ kubectl get nodes | awk '{print $1}' | tail -n +2 | grep -Po 'gke-[0-9,a-z]+-[ For the curious, here is a high level overview of how the GCE LoadBalancer controller manages cloud resources. The controller manages cloud resources through a notion of pools. Each pool is the representation of the last known state of a logical cloud resource. Pools are periodically synced with the desired state, as reflected by the Kubernetes api. When you create a new Ingress, the following happens: -* Create BackendServices for each Kubernetes backend in the Ingress, through the backend pool. -* Add nodePorts for each BackendService to an Instance Group with all the instances in your cluster, through the instance pool. -* Create a UrlMap, TargetHttpProxy, Global Forwarding Rule through the loadbalancer pool. -* Update the loadbalancer's urlmap according to the Ingress. +* Updates instance groups to reflect all nodes in the cluster. +* Creates Backend Service for each Kubernetes service referenced in the ingress spec. +* Adds named-port for each Backend Service to each instance group. +* Creates a URL Map, TargetHttpProxy, and ForwardingRule. +* Updates the URL Map according to the Ingress. -Periodically, each pool checks that it has a valid connection to the next hop in the above resource graph. So for example, the backend pool will check that each backend is connected to the instance group and that the node ports match, the instance group will check that all the Kubernetes nodes are a part of the instance group, and so on. Since Backends are a limited resource, they're shared (well, everything is limited by your quota, this applies doubly to backend services). 
This means you can setup N Ingress' exposing M services through different paths and the controller will only create M backends. When all the Ingress' are deleted, the backend pool GCs the backend. +Periodically, each pool checks that it has a valid connection to the next hop in the above resource graph. So for example, the backend pool will check that each backend is connected to the instance group and that the node ports match, the instance group will check that all the Kubernetes nodes are a part of the instance group, and so on. Since Backend Services are a limited resource, they're shared (well, everything is limited by your quota, this applies doubly to Backend Services). This means you can setup N Ingress' exposing M services through different paths and the controller will only create M backends. When all the Ingress' are deleted, the backend pool GCs the backend. ## Wish list: diff --git a/docs/admin.md b/docs/admin.md index c40247bd9e..38e9339663 100644 --- a/docs/admin.md +++ b/docs/admin.md @@ -4,7 +4,7 @@ This is a guide to the different deployment styles of an Ingress controller. ## Vanillla deployments -__GCP__: On GCE/GKE, the Ingress controller runs on the +__GKE__: On GKE, the Ingress controller runs on the master. If you wish to stop this controller and run another instance on your nodes instead, you can do so by following this [example](/examples/deployment/gce). @@ -15,9 +15,6 @@ Please note that you must specify the `ingress.class` cloudprovider, or the cloudprovider controller will fight the nginx controller for the Ingress. -__AWS__: Until we have an AWS ALB Ingress controller, you can deploy the nginx -Ingress controller behind an ELB on AWS, as shows in the [next section](#stacked-deployments). - ## Stacked deployments __Behind a LoadBalancer Service__: You can deploy a generic controller behind a @@ -31,25 +28,6 @@ __Behind another Ingress__: Sometimes it is desirable to deploy a stack of Ingresses, like the GCE Ingress -> nginx Ingress -> application. You might want to do this because the GCE HTTP lb offers some features that the GCE network LB does not, like a global static IP or CDN, but doesn't offer all the -features of nginx, like url rewriting or redirects. - -TODO: Write an example - -## Daemonset - -Neither a single pod nor bank of generic controllers scale with the cluster size. -If you create a daemonset of generic Ingress controllers, every new node -automatically gets an instance of the controller listening on the specified -ports. +features of nginx, like URL rewriting or redirects. TODO: Write an example - -## Intra-cluster Ingress - -Since generic Ingress controllers run in pods, you can deploy them as intra-cluster -proxies by just not exposing them on a `hostPort` and putting them behind a -Service of `Type=ClusterIP`. - -TODO: Write an example - - diff --git a/docs/annotations.md b/docs/annotations.md index 7ae16b1b16..103403eb1e 100644 --- a/docs/annotations.md +++ b/docs/annotations.md @@ -38,7 +38,7 @@ Key: | `auth-tls-secret` | Name of secret for TLS client certification validation. | | nginx, haproxy | `auth-tls-verify-depth` | Maximum chain length of TLS client certificate. | | nginx | `auth-tls-error-page` | The page that user should be redirected in case of Auth error | | string -| `auth-satisfy` | Behaviour when more than one of `auth-type`, `auth-tls-secret` or `whitelist-source-range` are configured: `all` or `any`. 
| `all` | trafficserver | `trafficserver` +| `auth-satisfy` | Behavior when more than one of `auth-type`, `auth-tls-secret` or `whitelist-source-range` are configured: `all` or `any`. | `all` | trafficserver | `trafficserver` | `whitelist-source-range` | Comma-separate list of IP addresses to enable access to. | | nginx, haproxy, trafficserver ## URL related diff --git a/docs/catalog.md b/docs/catalog.md deleted file mode 100644 index 0d1a1a23af..0000000000 --- a/docs/catalog.md +++ /dev/null @@ -1,12 +0,0 @@ -# Ingress controller Catalog - -This is a non-comprehensive list of existing ingress controllers. - -* [Dummy controller backend](/examples/custom-controller) -* [HAProxy Ingress controller](https://github.com/jcmoraisjr/haproxy-ingress) -* [Linkerd](https://linkerd.io/config/0.9.1/linkerd/index.html#ingress-identifier) -* [traefik](https://docs.traefik.io/toml/#kubernetes-ingress-backend) -* [AWS Application Load Balancer Ingress Controller](https://github.com/coreos/alb-ingress-controller) -* [kube-ingress-aws-controller](https://github.com/zalando-incubator/kube-ingress-aws-controller) -* [Voyager: HAProxy Ingress Controller](https://github.com/appscode/voyager) -* [External Nginx Ingress Controller](https://github.com/unibet/ext_nginx) \ No newline at end of file diff --git a/docs/dev/README.md b/docs/dev/README.md index 968ffc3dad..4ab66f40bb 100644 --- a/docs/dev/README.md +++ b/docs/dev/README.md @@ -2,7 +2,7 @@ This directory is intended to be the canonical source of truth for things like writing and hacking on Ingress controllers. If you find a requirement that this -doc does not capture, please submit an issue on github. If you find other docs +doc does not capture, please submit an issue on GitHub. If you find other docs with references to requirements that are not simply links to this doc, please submit an issue. @@ -15,4 +15,3 @@ branch, but release branches of Kubernetes should not change. * [Build, test, release](getting-started.md) an existing controller * [Setup a cluster](setup-cluster.md) to hack at an existing controller * [Write your own](custom-controller.md) controller - diff --git a/docs/dev/custom-controller.md b/docs/dev/custom-controller.md deleted file mode 100644 index e3d7c94c2f..0000000000 --- a/docs/dev/custom-controller.md +++ /dev/null @@ -1,4 +0,0 @@ -# Writing Ingress controllers - -This doc outlines the basic steps needed to write an Ingress controller. -If you want the tl;dr version, skip straight to the [example](/examples/custom-controller). diff --git a/docs/dev/getting-started.md b/docs/dev/getting-started.md index 6c01d1170d..8489d734b8 100644 --- a/docs/dev/getting-started.md +++ b/docs/dev/getting-started.md @@ -7,7 +7,7 @@ It includes how to build, test, and release ingress controllers. The build uses dependencies in the `ingress/vendor` directory, which must be installed before building a binary/image. Occasionally, you -might need to update the dependencies. +might need to update the dependencies. This guide requires you to install the [godep](https://github.com/tools/godep) dependency tool. @@ -84,7 +84,7 @@ $ make docker-push TAG= PREFIX=$USER/ingress-controller ### GCE Controller -[TODO](https://github.com/kubernetes/ingress/issues/387): add instructions on building gce controller. +[TODO](https://github.com/kubernetes/ingress/issues/387): add instructions on building GCE controller. ## Deploying @@ -137,5 +137,3 @@ cherry-picked into a release branch. 
* If you're not confident about the stability of the code, [tag](https://help.github.com/articles/working-with-tags/) it as alpha or beta. Typically, a release branch should have stable code. - - diff --git a/docs/dev/setup-cluster.md b/docs/dev/setup-cluster.md index 06aa9a6301..c000d5a5ce 100644 --- a/docs/dev/setup-cluster.md +++ b/docs/dev/setup-cluster.md @@ -98,11 +98,11 @@ You can deploy an ingress controller on the cluster setup in the previous step ## Run against a remote cluster If the controller you're interested in using supports a "dry-run" flag, you can -run it on any machine that has `kubectl` access to a remote cluster. Eg: +run it on any machine that has `kubectl` access to a remote cluster. ```console $ cd $GOPATH/k8s.io/ingress/controllers/gce $ glbc --help - --running-in-cluster Optional, if this controller is running in a kubernetes cluster, use the + --running-in-cluster Optional, if this controller is running in a Kubernetes cluster, use the pod secrets for creating a Kubernetes client. (default true) $ ./glbc --running-in-cluster=false @@ -112,4 +112,3 @@ I1210 17:49:53.202149 27767 main.go:179] Starting GLBC image: glbc:0.9.2, clus Note that this is equivalent to running the ingress controller on your local machine, so if you already have an ingress controller running in the remote cluster, they will fight for the same ingress. - diff --git a/docs/faq/README.md b/docs/faq/README.md index 2c8e86bd3a..ec83244e1e 100644 --- a/docs/faq/README.md +++ b/docs/faq/README.md @@ -12,8 +12,6 @@ Table of Contents * [Are Ingress controllers namespaced?](#are-ingress-controllers-namespaced) * [How do I disable an Ingress controller?](#how-do-i-disable-an-ingress-controller) * [How do I run multiple Ingress controllers in the same cluster?](#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster) -* [How do I contribute a backend to the generic Ingress controller?](#how-do-i-contribute-a-backend-to-the-generic-ingress-controller) -* [Is there a catalog of existing Ingress controllers?](#is-there-a-catalog-of-existing-ingress-controllers) * [How are the Ingress controllers tested?](#how-are-the-ingress-controllers-tested) * [An Ingress controller E2E is failing, what should I do?](#an-ingress-controller-e2e-is-failing-what-should-i-do) * [Is there a roadmap for Ingress features?](#is-there-a-roadmap-for-ingress-features) @@ -73,25 +71,11 @@ The GCE controller will only act on Ingresses with the annotation value of "gce" The nginx controller will only act on Ingresses with the annotation value of "nginx" or empty string "" (the default value if the annotation is omitted). -To completely stop the Ingress controller on GCE/GKE, please see [this](gce.md#how-do-i-disable-the-gce-ingress-controller) faq. +To completely stop the Ingress controller on GCE/GKE, please see [this](gce.md#how-do-i-disable-the-gce-ingress-controller) FAQ. ## How do I run multiple Ingress controllers in the same cluster? -Multiple Ingress controllers can co-exist and key off the `ingress.class` -annotation, as shown in this faq, as well as in [this](/examples/daemonset/nginx) example. - -## How do I contribute a backend to the generic Ingress controller? - -First check the [catalog](#is-there-a-catalog-of-existing-ingress-controllers), to make sure you really need to write one. - -1. Write a [generic backend](/examples/custom-controller) -2. 
Keep it in your own repo, make sure it passes the [conformance suite](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/ingress_utils.go#L129) -3. Submit an example(s) in the appropriate subdirectories [here](/examples/README.md) -4. Add it to the catalog - -## Is there a catalog of existing Ingress controllers? - -Yes, a non-comprehensive [catalog](/docs/catalog.md) exists. +Multiple Ingress controllers can co-exist and key off the `ingress.class` annotation. ## How are the Ingress controllers tested? diff --git a/docs/faq/gce.md b/docs/faq/gce.md index 6f3ede8017..390d518d79 100644 --- a/docs/faq/gce.md +++ b/docs/faq/gce.md @@ -54,7 +54,7 @@ __Terminology:__ * [Global Forwarding Rule](https://cloud.google.com/compute/docs/load-balancing/http/global-forwarding-rules): Manages the Ingress VIP * [TargetHttpProxy](https://cloud.google.com/compute/docs/load-balancing/http/target-proxies): Manages SSL certs and proxies between the VIP and backend -* [Url Map](https://cloud.google.com/compute/docs/load-balancing/http/url-map): Routing rules +* [URL Map](https://cloud.google.com/compute/docs/load-balancing/http/url-map): Routing rules * [Backend Service](https://cloud.google.com/compute/docs/load-balancing/http/backend-service): Bridges various Instance Groups on a given Service NodePort * [Instance Group](https://cloud.google.com/compute/docs/instance-groups/): Collection of Kubernetes nodes @@ -66,7 +66,7 @@ Global Forwarding Rule -> TargetHTTPProxy Static IP URL Map - Backend Service(s) - Instance Group (us-central1) | / ... Global Forwarding Rule -> TargetHTTPSProxy - ssl cert + SSL cert ``` In addition to this pipeline: @@ -164,14 +164,14 @@ Yes, please see [this](/examples/static-ip) example. Yes, expect O(30s) delay. -The controller should create a second ssl certificate suffixed with `-1` and -atomically swap it with the ssl certificate in your taret proxy, then delete -the obselete ssl certificate. +The controller should create a second SSL certificate suffixed with `-1` and -atomically swap it with the SSL certificate in your target proxy, then delete +the obsolete SSL certificate. ## Can I tune the loadbalancing algorithm? -Right now, a kube-proxy nodePort is a necessary condition for Ingress on GCP. -This is because the cloud lb doesn't understand how to route directly to your +Right now, a kube-proxy NodePort service is a necessary condition for Ingress on GCP. +This is because the cloud LB doesn't understand how to route directly to your pods. Incorporating kube-proxy and cloud lb algorithms so they cooperate toward a common goal is still a work in progress. If you really want fine grained control over the algorithm, you should deploy the [nginx controller](/examples/deployment/nginx). @@ -258,13 +258,13 @@ NodePort Service * It's created when the first Ingress is created, and deleted when the last Ingress is deleted, since we don't want to waste quota if the user is not going to need L7 loadbalancing through Ingress -* It has a http health check pointing at `/healthz`, not the default `/`, because +* It has an HTTP health check pointing at `/healthz`, not the default `/`, because `/` serves a 404 by design ## How does Ingress work across 2 GCE clusters? -See federation [documentation](http://kubernetes.io/docs/user-guide/federation/federated-ingress/). +See kubemci [documentation](https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress). ## I shutdown a cluster without deleting all Ingresses, how do I manually cleanup?
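For reference, the console steps below can also be scripted with the gcloud CLI. The sketch that follows is illustrative only: the exact `k8s-*` names and the UID suffix come from your cluster, so list the resources first, and delete them roughly front-to-back because they reference each other.

```shell
# Find leaked GCE resources created by the controller (names are prefixed with k8s-).
$ gcloud compute forwarding-rules list --global --filter="name~'k8s-'"
$ gcloud compute backend-services list --filter="name~'k8s-'"

# Delete front-to-back, substituting the names you found above (these are examples).
$ gcloud compute forwarding-rules delete k8s-fw-default-echomap--UID --global
$ gcloud compute target-http-proxies delete k8s-tp-default-echomap--UID
$ gcloud compute url-maps delete k8s-um-default-echomap--UID
$ gcloud compute backend-services delete k8s-be-30301--UID --global
$ gcloud compute instance-groups unmanaged delete k8s-ig--UID --zone=us-central1-b
```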
@@ -272,10 +272,10 @@ If you kill a cluster without first deleting Ingresses, the resources will leak. If you find yourself in such a situation, you can delete the resources by hand: 1. Navigate to the [cloud console](https://console.cloud.google.com/) and click on the "Networking" tab, then choose "LoadBalancing" -2. Find the loadbalancer you'd like to delete, it should have a name formatted as: k8s-um-ns-name--UUID +2. Find the loadbalancer you'd like to delete, it should have a name formatted as: `k8s-um-ns-name--UUID` 3. Delete it, check the boxes to also cascade the deletion down to associated resources (eg: backend-services) 4. Switch to the "Compute Engine" tab, then choose "Instance Groups" -5. Delete the Instance Group allocated for the leaked Ingress, it should have a name formatted as: k8s-ig-UUID +5. Delete the Instance Group allocated for the leaked Ingress, it should have a name formatted as: `k8s-ig-UUID` We plan to fix this [soon](https://github.com/kubernetes/kubernetes/issues/16337). @@ -327,8 +327,8 @@ Shared: * Backend Services: because of low quota and high reuse. A single Service in a Kubernetes cluster has one NodePort, common throughout the cluster. GCE has -a hard limit of the number of allowed BackendServices, so if multiple Ingresses -all point to a single Service, that creates a single BackendService in GCE +a hard limit on the number of allowed Backend Services, so if multiple Ingresses +all point to a single Service, that creates a single Backend Service in GCE pointing to that Service's NodePort. * Instance Group: since an instance can only be part of a single loadbalanced @@ -336,18 +336,18 @@ Instance Group, these must be shared. There is 1 Ingress Instance Group per zone containing Kubernetes nodes. * Health Checks: currently the health checks point at the NodePort -of a BackendService. They don't *need* to be shared, but they are since -BackendServices are shared. +of a Backend Service. They don't *need* to be shared, but they are since +Backend Services are shared. -* Firewall rule: In a non-federated cluster there is a single firewall rule +* Firewall rule: There is a single firewall rule that covers health check traffic from the range of [GCE loadbalancer IPs](https://cloud.google.com/compute/docs/load-balancing/http/#troubleshooting) -to Service nodePorts. +to the entire NodePort range. Unique: -Currently, a single Ingress on GCE creates a unique IP and url map. In this +Currently, a single Ingress on GCE creates a unique IP and URL Map.
In this model the following resources cannot be shared: -* Url Map +* URL Map * Target HTTP(S) Proxies * SSL Certificates * Static-ip @@ -358,25 +358,25 @@ model the following resources cannot be shared: The most likely cause of a controller spin loop is some form of GCE validation failure, eg: -* It's trying to delete a BackendService already in use, say in a UrlMap -* It's trying to add an Instance to more than 1 loadbalanced InstanceGroups -* It's trying to flip the loadbalancing algorithm on a BackendService to RATE, -when some other BackendService is pointing at the same InstanceGroup and asking +* It's trying to delete a Backend Service already in use, say in a URL Map +* It's trying to add an Instance to more than 1 loadbalanced Instance Groups +* It's trying to flip the loadbalancing algorithm on a Backend Service to RATE, +when some other Backend Service is pointing at the same Instance Group and asking for UTILIZATION In all such cases, the work queue will put a single key (ingress namespace/name) -that's getting continuously requeued into exponential backoff. However, currently -the Informers that watch the Kubernetes api are setup to periodically resync, +that's getting continuously re-queued into exponential backoff. However, currently +the Informers that watch the Kubernetes API are setup to periodically resync, so even though a particular key is in backoff, we might end up syncing all other keys every, say, 10m, which might trigger the same validation-error-condition when syncing a shared resource. ## Creating an Internal Load Balancer without existing ingress **How the GCE ingress controller Works** -To assemble an L7 Load Balancer, the ingress controller creates an [unmanaged instance-group](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances) named `k8s-ig--{UID}` and adds every known minion node to the group. For every service specified in all ingresses, a backend service is created to point to that instance group. +To assemble an L7 Load Balancer, the ingress controller creates an [unmanaged instance-group](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances) named `k8s-ig--{UID}` and adds every known minion node to the group. For every service specified in all ingresses, a Backend Service is created to point to that instance group. **How the Internal Load Balancer Works** -K8s does not yet assemble ILB's for you, but you can manually create one via the GCP Console. The ILB is composed of a regional forwarding rule and a regional backend service. Similar to the L7 LB, the backend-service points to an unmanaged instance-group containing your K8s nodes. +K8s does not yet assemble ILB's for you, but you can manually create one via the GCP Console. The ILB is composed of a regional forwarding rule and a regional Backend Service. Similar to the L7 LB, the Backend Service points to an unmanaged instance-group containing your K8s nodes. **The Complication** GCP will only allow one load balanced unmanaged instance-group for a given instance. @@ -407,6 +407,6 @@ You can now follow the GCP Console wizard for creating an internal load balancer ## Can I use websockets? Yes! -The GCP HTTP(S) Load Balancer supports websockets. You do not need to change your http server or Kubernetes deployment. You will need to manually configure the created Backend Service's `timeout` setting. This value is the interpreted as the max connection duration. The default value of 30 seconds is probably too small for you. 
You can increase it to the supported maximum: 86400 (a day) through the GCP Console or the gcloud CLI. +The GCP HTTP(S) Load Balancer supports websockets. You do not need to change your HTTP server or Kubernetes deployment. You will need to manually configure the created Backend Service's `timeout` setting. This value is interpreted as the max connection duration. The default value of 30 seconds is probably too small for your needs. You can increase it to the supported maximum: 86400 (a day) through the GCP Console or the gcloud CLI. View the [example](/controllers/gce/examples/websocket/). diff --git a/docs/testing.md b/docs/testing.md index e7cf1615ec..77cf8c7684 100644 --- a/docs/testing.md +++ b/docs/testing.md @@ -1,29 +1,29 @@ -# GLBC E2E Testing +# GLBC E2E Testing This document briefly goes over how the e2e testing is setup for this repository. It will also go into some detail on how you can run tests against your own cluster. ## Kubernetes CI Overview -There are two groups of e2e tests that run in Kubernetes CI. +There are two groups of e2e tests that run in Kubernetes CI. The first group uses a currently released image of GLBC when the test cluster is brought up. -The second group uses an image built directly from HEAD of this repository. -Currently, we run tests against an image from HEAD in GCE only. On the other hand, -tests that run against a release image use both GCE and GKE. +The second group uses an image built directly from HEAD of this repository. +Currently, we run tests against an image from HEAD in GCE only. On the other hand, +tests that run against a release image use both GCE and GKE. -Any test that starts with ingress-gce-* is a test which runs a image of GLBC from HEAD. +Any test that starts with ingress-gce-* is a test which runs an image of GLBC from HEAD. Any other test you see runs a release image of GLBC. Check out https://k8s-testgrid.appspot.com/sig-network-gce & https://k8s-testgrid.appspot.com/sig-network-gke for to see the results for these tests. -Every time a PR is merged to ingress-gce, Kubernetes test-infra triggers +Every time a PR is merged to ingress-gce, Kubernetes test-infra triggers a job that pushes a new image of GLBC for e2e testing. The ingress-gce-* jobs then use -this image when the test cluster is brought up. You can see the results of this job +this image when the test cluster is brought up. You can see the results of this job at https://k8s-testgrid.appspot.com/sig-network-gce#ingress-gce-image-push. ## Manual Testing -If you are fixing a bug or writing a new feature and want to test your changes before they +If you are fixing a bug or writing a new feature and want to test your changes before they are run through CI, then you will most likely want to test your changes end-to-end before submitting: 1. ingress-gce-e2e: @@ -40,7 +40,7 @@ are run through CI, then you will most likely want to test your changes end-to-e **Disclaimer:** - + Note that the cluster you create should have permission to pull images from the registry -you are using to push the image to. You can either make your registry publically readable or give explicit permission -to your cluster's project service account. +you are using to push the image to. You can either make your registry publicly readable or give explicit permission +to your cluster's project service account.
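As a concrete illustration of "give explicit permission to your cluster's project service account", the sketch below assumes the image was pushed to GCR in a project `REGISTRY_PROJECT` and that the test cluster's nodes run as the default Compute Engine service account of `CLUSTER_PROJECT`; both names are placeholders, and `roles/storage.objectViewer` only works here because GCR stores images in Cloud Storage.

```shell
# Look up the cluster project's default Compute Engine service account.
$ CLUSTER_SA="$(gcloud iam service-accounts list --project CLUSTER_PROJECT \
    --filter='displayName:Compute Engine default service account' \
    --format='value(email)')"

# Grant that account read access to the registry project's image storage.
$ gcloud projects add-iam-policy-binding REGISTRY_PROJECT \
    --member="serviceAccount:${CLUSTER_SA}" \
    --role="roles/storage.objectViewer"
```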
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md index 71d9505d39..cb53d93cc8 100644 --- a/docs/troubleshooting.md +++ b/docs/troubleshooting.md @@ -27,14 +27,14 @@ Both authentications must work: __Service authentication__ -The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in two different ways: +The Ingress controller needs information from the Kubernetes API Server. Therefore, authentication is required, which can be achieved in two different ways: 1. _Service Account:_ This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details. 2. _Kubeconfig file:_ In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the `--kubeconfig` flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the `--kubeconfig` does not requires the flag `--apiserver-host`. The format of the file is identical to `~/.kube/config` which is used by kubectl to connect to the API server. See 'kubeconfig' section for details. -3. _Using the flag `--apiserver-host`:_ Using this flag `--apiserver-host=http://localhost:8080` it is possible to specify an unsecure api server or reach a remote kubernetes cluster using [kubectl proxy](https://kubernetes.io/docs/user-guide/kubectl/kubectl_proxy/). +3. _Using the flag `--apiserver-host`:_ Using this flag `--apiserver-host=http://localhost:8080` it is possible to specify an insecure API server or reach a remote Kubernetes cluster using [kubectl proxy](https://kubernetes.io/docs/user-guide/kubectl/kubectl_proxy/). Please do not use this approach in production. In the diagram below you can see the full authentication flow with all options, starting with the browser diff --git a/examples/deployment/README.md b/examples/deployment/README.md index 2294789177..c428be4d56 100644 --- a/examples/deployment/README.md +++ b/examples/deployment/README.md @@ -4,7 +4,7 @@ This example demonstrates the deployment of a GCE Ingress controller. Note: __all GCE/GKE clusters already have an Ingress controller running on the master. The only reason to deploy another GCE controller is if you want -to debug or otherwise observe its operation (eg via kubectl logs).__ +to debug or otherwise observe its operation via logs.__ __Before deploying another one in your cluster, make sure you disable the master controller.__ diff --git a/examples/tls-termination/README.md b/examples/tls-termination/README.md index bc1ea1c6be..8a711a2bf1 100644 --- a/examples/tls-termination/README.md +++ b/examples/tls-termination/README.md @@ -12,7 +12,7 @@ and that you have an ingress controller [running](/examples/deployment) in your ## Deployment The following command instructs the controller to terminate traffic using -the provided TLS cert, and forward un-encrypted HTTP traffic to the test +the provided TLS cert, and forward unencrypted HTTP traffic to the test HTTP service. ```console diff --git a/examples/websocket/README.md b/examples/websocket/README.md index 0198cf6fab..7735af94b0 100644 --- a/examples/websocket/README.md +++ b/examples/websocket/README.md @@ -1,6 +1,6 @@ # Simple Websocket Example -Any websocket server will suffice; however, for the purpose of demonstration, we'll use the gorilla/websocket package in a Go process.
+Any websocket server will suffice; however, for the purpose of demonstration, we'll use the gorilla/websocket package in a Go binary. ### Build ```shell diff --git a/pkg/tls/tls.go b/pkg/tls/tls.go index 3c503c3f5b..f7413140b7 100644 --- a/pkg/tls/tls.go +++ b/pkg/tls/tls.go @@ -86,7 +86,7 @@ func (t *TLSCertsFromSecretsLoader) Load(ing *extensions.Ingress) (*loadbalancer // TODO: Add support for file loading so we can support HTTPS default backends. -// fakeTLSSecretLoader fakes out TLS loading. +// FakeTLSSecretLoader fakes out TLS loading. type FakeTLSSecretLoader struct { noOPValidator FakeCerts map[string]*loadbalancers.TLSCerts
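The comment fix above is on the fake counterpart of `TLSCertsFromSecretsLoader`, which reads the certificate and key out of the Kubernetes secret referenced by an Ingress's `tls` section (the fake simply serves canned certs to tests). For reference, a secret of the shape the controller expects can be created as follows; the secret and file names are examples only:

```shell
# Package an existing cert/key pair as a TLS secret; the Ingress references it
# by name via spec.tls[].secretName, in the same namespace as the Ingress.
$ kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key

# Confirm the secret carries the expected tls.crt and tls.key entries.
$ kubectl describe secret my-tls-secret
```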