This repository has been archived by the owner on Jan 11, 2023. It is now read-only.

Azure Load Balancer - Public IP Limitation #737

Closed

lachie83 opened this issue Jun 7, 2017 · 18 comments

Comments

@lachie83
Member

lachie83 commented Jun 7, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm)

Kubernetes

What happened:

When a Kubernetes Service with type=LoadBalancer is created, it allocates a separate public IP address on the Azure Load Balancer. We have hit the limitation that only 10 public IPs can be attached to a single Azure Load Balancer, and we're getting the error below:

Error creating load balancer (will retry): Failed to create load balancer for service
default/web-ip: network.LoadBalancersClient#CreateOrUpdate: Failure responding to request:
StatusCode=400 -- Original Error: autorest/azure: Service returned an error. Status=400
Code="LoadBalancerFrontendIPConfigurationCountLimitReached" Message="A load balancer cannot
have more than 10 FrontendIPConfigurations." Details=[]

What you expected to happen:

Kubernetes does not limit the number of services we can put online, and in a microservice architecture we believe that a cap of 10 services is too low.
What we'd like to suggest is that ACS Engine be intelligent enough to stand up another Azure Load Balancer, on which new services can be exposed, once the limit has been reached.

How to reproduce it (as minimally and precisely as possible):

Create 11 Services of type LoadBalancer.
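
For example, a manifest like the following (a sketch; the name, selector, and ports are placeholders), applied 11 times with distinct names, reproduces the 400 error above, since each such Service claims one frontend IP configuration on the load balancer:

apiVersion: v1
kind: Service
metadata:
  name: web-ip           # placeholder; vary this per Service
spec:
  type: LoadBalancer     # each such Service allocates one public frontend IP
  selector:
    app: web             # placeholder selector
  ports:
  - port: 80             # externally exposed port
    targetPort: 8080     # container port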

Anything else we need to know:

@benjsicam

+1

@anhowe
Contributor

anhowe commented Jun 23, 2017

This is an Azure platform limitation per load balancer, and you can only have one load balancer per availability set.

@Knappek

Knappek commented Jun 29, 2017

@anhowe but using only one availability set in a Kubernetes cluster does not really make sense, does it?

@anhowe
Contributor

anhowe commented Jul 3, 2017

@Knappek Can you provide more of a description of what doesn't make sense?

Maybe you are suggesting that one potential solution is for the cloud provider to understand how many agent pools exist, and check for an available slot on each agent pool. So if you add 10 agent pools, you can potentially have 10 LBs, and 100 external IPs. Note that you will need to use at least 1 VM per agent pool.
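
For context, agent pools are declared in the acs-engine api model (a JSON file), and each pool with an AvailabilitySet profile gets its own availability set. A fragment like this (a sketch; the pool names, counts, and VM sizes are placeholders) declares two pools:

{
  "properties": {
    "agentPoolProfiles": [
      { "name": "pool1", "count": 3, "vmSize": "Standard_D2_v2", "availabilityProfile": "AvailabilitySet" },
      { "name": "pool2", "count": 3, "vmSize": "Standard_D2_v2", "availabilityProfile": "AvailabilitySet" }
    ]
  }
}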

@Knappek

Knappek commented Jul 4, 2017

@anhowe yes, that is what I would expect, but it sounds non-trivial to me to have such a setting, as pods of the same kind need to be deployed in the same AS, and if a pod of a ReplicaSet stops working the replacement pod needs to be deployed in the same AS.
Is it possible to configure acs-engine to create multiple agent pools with multiple LBs? Furthermore, is it easy to replace the availability sets with scale sets? That would make more sense for the workers, wouldn't it?

@snebel29

snebel29 commented Aug 3, 2017

Hi,
Some thoughts here: when I hit the same limit on public IPs, I asked Azure support to increase it; after approval from the network team I got 10 more IPs. I'm not sure how many you could potentially ask for, but it might need to be quite a big number to run production-ready ACS Kubernetes clusters in any non-trivial environment, at least if all of those services are open to the internet and expose the same port.

For a microservice architecture where the services are consumed internally, you could use only the ClusterIP type and access them via their internal cluster DNS names.
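
For instance, a minimal ClusterIP Service looks like this (a sketch; the name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: web-internal
spec:
  type: ClusterIP        # the default; no public IP is allocated
  selector:
    app: web             # placeholder selector
  ports:
  - port: 80             # cluster-internal port
    targetPort: 8080     # container port

Other pods can then reach it at web-internal.<namespace>.svc.cluster.local without consuming any load balancer frontend.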

I agree this limitation may feel suboptimal if you're planning to build a microservice architecture with more than a few tens of services per cluster, all of them using the same external port and directly exposed to the internet.

For the near future I'm considering L7 routing using an ingress controller with something like nginx/haproxy, but once you do that you no longer have an out-of-the-box cluster and start building something more custom.
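
To illustrate, one set of ingress rules can fan many services out behind a single public IP (a sketch assuming an nginx ingress controller is already deployed; the host and backend names are placeholders):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # route through the nginx controller
spec:
  rules:
  - host: web.example.com                # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: web               # placeholder backend Service
          servicePort: 80

Only the ingress controller's own Service then needs a public frontend IP; the backends can stay ClusterIP.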

These are only personal opinions; I hope they are useful to someone.

Thanks!

@UncleTawnos
Contributor

The hard limit is 100, so asking support is a good way to start. However, I have found it useful in some scenarios to have a separate LB per agent pool.

@4c74356b41

Is this going anywhere? This is pretty ridiculous, if you ask me.

@khenidak
Contributor

Update:
Pre-1.9: Kubernetes can only utilize the primary availability set's LB, even if the cluster has more availability sets (each availability set gets its own load balancer).

1.9+: Kubernetes can utilize all the load balancers (from all availability sets) in the cluster. The behavior is described in https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/azure/azure_loadbalancer.md; the auto mode assigns the IP to the least utilized LB.
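
Concretely, per that document, placement is driven by a Service annotation, along these lines (a sketch; the Service name, selector, and port are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: web-ip
  annotations:
    # __auto__ picks the LB with the fewest rules; a comma-separated
    # list of agent pool names pins the IP to those pools' LBs instead
    service.beta.kubernetes.io/azure-load-balancer-mode: __auto__
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80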

@4c74356b41

Yeah, this is cool, but it's pointless. I can't assign an IP to another load balancer if I need it on this load balancer.

@khenidak
Contributor

khenidak commented Mar 15, 2018 via email

@4c74356b41

Yes, I need 11 IPs on the load balancer.

@UncleTawnos
Contributor

@4c74356b41 there is a limit on "public IPs per load balancer" in Azure; the default is 10. Go to the portal, check the "Activity Log" of the resource group your cluster is deployed to, and you will notice the error. Just write to support and they will readily increase the limit to 20, 30, or even 100 if required.

Hope that solves your problem, as it has nothing to do with ACS-Engine - it's just Azure design.

The separate case is the 1.9+ behavior, as you really should have an LB per availability set due to things like the number of active connections (below 1.9 everything is forwarded via the first agent pool).

@khenidak
Contributor

khenidak commented Apr 2, 2018

We are adding premium LB support for v1.11; @feiskyer is working on it. That should solve this.

@acesyde

acesyde commented Jul 18, 2018

Any news on this subject?

@feiskyer
Member

Standard load balancer has been supported since v1.11.0, but it's not configurable via acs-engine yet. See #3468 for tracking the progress.
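
For reference, upstream Kubernetes selects the SKU in the cloud provider config file (azure.json); a fragment like this (a sketch showing only the relevant keys) opts into the standard SKU:

{
  "cloud": "AzurePublicCloud",
  "loadBalancerSku": "standard"
}

acs-engine still has to generate that config (plus matching standard-SKU public IPs) for it to take effect, which is presumably what #3468 covers.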

@EriksonBahr

Seems like it was fixed in #3468, but I still don't see it in the az acs create command.

Was someone able to benefit from the fix?

@stale

stale bot commented Mar 9, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contribution. Note that acs-engine is deprecated--see https://github.com/Azure/aks-engine instead.

stale bot added the stale label Mar 9, 2019
stale bot closed this as completed Mar 16, 2019