Provisioning TCP Cloud Load-Balancers

Introduction

EdgeLB supports exposing Kubernetes Service resources of type LoadBalancer via a cloud provider's load-balancing solution. Service resources may be annotated so that dklb instructs EdgeLB to enable this feature for the respective pools. As of this writing, only AWS ELB (and, more specifically, NLB) is supported by EdgeLB, and therefore by dklb.

Instructions

Specifying the configuration for the cloud load-balancer

dklb reads the cloud load-balancer's configuration, in raw, JSON-encoded format, from the .cloudProviderConfiguration field of the configuration object:

kubernetes.dcos.io/dklb-config: |  # NOTE: The "|" character is mandatory.
  cloudProviderConfiguration: |  # NOTE: The "|" character is mandatory.
    {
      "aws": {...}
    }

When said field is specified in the configuration object for a Service resource, dklb creates a dedicated EdgeLB pool for the Service resource. This EdgeLB pool is called cloud--<cluster-name>--<suffix>, where <suffix> is a randomly-generated five-character string.
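The dedicated pool can be inspected with the EdgeLB CLI. As a sketch, assuming the EdgeLB CLI is installed and configured (the pool name below is a placeholder for the actual generated name):

# List all EdgeLB pools; the dedicated pool is prefixed with "cloud--".
dcos edgelb list

# Inspect the dedicated pool (replace with the actual generated name).
dcos edgelb show cloud--<cluster-name>--<suffix>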

⚠️

Because dklb creates a new, dedicated EdgeLB pool, any EdgeLB pool previously used to expose the Service resource is left untouched from the moment the .cloudProviderConfiguration field is specified. The user is responsible for removing that EdgeLB pool manually using the EdgeLB CLI.
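As a sketch, assuming the Service resource was previously exposed via an EdgeLB pool named <old-pool-name> (a placeholder), the stale pool may be removed with the EdgeLB CLI:

# Remove the EdgeLB pool previously used to expose the Service resource.
dcos edgelb delete <old-pool-name>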

⚠️

When the .cloudProviderConfiguration field is specified, the .network and .frontends fields of the configuration object are ignored. The .role, .cpus, .memory and .size fields are still observed and may be tweaked as required.

Connecting to the cloud load-balancer

After the .cloudProviderConfiguration field is specified on the configuration object, dklb will instruct EdgeLB to create a cloud load-balancer according to the provided configuration. The hostname that should be used to connect to the cloud load-balancer will usually be reported shortly after in the .status field of the Service resource.
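As a minimal sketch, assuming a Service resource named redis in the default namespace, the reported hostname can be read with kubectl:

# Print the hostname of the cloud load-balancer once it has been reported.
kubectl --namespace default get service redis \
  --output jsonpath='{.status.loadBalancer.ingress[0].hostname}'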

Configuration

As mentioned above, the configuration for the cloud load-balancer must be specified via the .cloudProviderConfiguration field in raw, JSON-encoded format. The actual contents of the field depend on the cloud provider being used and are defined by EdgeLB. Please read the EdgeLB pool configuration documentation for more detailed information.

AWS NLB

To configure an AWS NLB, the value of the .cloudProviderConfiguration field must be a JSON object obeying the following structure:

{
  "aws": {
    "elbs": [{
      "type": "NLB",
      "name": "<name>",
      "internal": <internal>,
      "subnets": [
        <subnet-1>,
        <subnet-2>,
        (...),
        <subnet-N>
      ],
      "listeners": [
        {
          "port": <service-port-1>,
          "linkFrontend": "<cluster-name>:<service-namespace>:<service-name>:<service-port-1>"
        },
        {
          "port": <service-port-2>,
          "linkFrontend": "<cluster-name>:<service-namespace>:<service-name>:<service-port-2>"
        },
        (...),
        {
          "port": <service-port-M>,
          "linkFrontend": "<cluster-name>:<service-namespace>:<service-name>:<service-port-M>"
        }
      ]
    }]
  }
}

In the snippet above, placeholders must be replaced according to the following table:

Placeholder         | Meaning
<name>              | The desired name for the Network Load-Balancer.
<internal>          | Boolean value (i.e. true or false) indicating whether the NLB should be exposed internally only.
<subnet-X>          | ID of a subnet which the NLB should join.
<service-port-X>    | The service port that should be exposed via the NLB.
<cluster-name>      | The name of the MKE cluster to which the current Service resource belongs, having any forward slashes replaced by dots.
<service-namespace> | The name of the Kubernetes namespace in which the current Service resource exists.
<service-name>      | The name of the current Service resource.

Example

To expose the redis service created in the previous example using an AWS NLB, and assuming the name of the MKE cluster is dev/kubernetes01, the following value for the kubernetes.dcos.io/dklb-config annotation may be used:

kubernetes.dcos.io/dklb-config: |
  cloudProviderConfiguration: |
    {
        "aws": {
            "elb": [{
                "type": "NLB",
                "name": "redis-nlb",
                "internal": false,
                "subnets": [
                  "subnet-07a3022372ce71ad4"
                ],
                "listeners": [{
                  "port": 6379,
                  "linkFrontend": "dev.kubernetes01:default:redis:6379"
                }]
            }]
        }
    }
  cpus: 0.2
  memory: 512
  size: 3
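
For reference, the snippet below is a minimal sketch of a complete Service manifest carrying the annotation above. The selector label, subnet ID, and cluster name are illustrative and must be adjusted to match the actual environment:

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
  annotations:
    kubernetes.dcos.io/dklb-config: |
      cloudProviderConfiguration: |
        {
          "aws": {
            "elbs": [{
              "type": "NLB",
              "name": "redis-nlb",
              "internal": false,
              "subnets": [
                "subnet-07a3022372ce71ad4"
              ],
              "listeners": [{
                "port": 6379,
                "linkFrontend": "dev.kubernetes01:default:redis:6379"
              }]
            }]
          }
        }
      cpus: 0.2
      memory: 512
      size: 3
spec:
  type: LoadBalancer
  selector:
    app: redis  # assumes the redis pods carry the "app: redis" label
  ports:
  - name: redis
    protocol: TCP
    port: 6379
    targetPort: 6379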