
Defaults per cluster or project? #632

Closed
rodlogic opened this issue Feb 9, 2019 · 16 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@rodlogic commented Feb 9, 2019

Is there a way to provide a default TLS certificate, or set of certificates, when creating a load balancer, other than as an annotation at the Ingress or Service level?

I would like to provision one TLS certificate and one DNS name for the cluster, but I don't want to have to specify the TLS cert annotation on every ingress/service. If the ingress specifies a host that is not covered by the cert, the app owner is out of luck and should either correct the ingress or deploy it elsewhere.
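For reference, this is the kind of per-Ingress boilerplate the request is trying to eliminate: a minimal sketch using ingress-gce's pre-shared-cert annotation, with placeholder certificate and host names:

```yaml
# What every team currently has to repeat on each Ingress.
# Certificate name and host are placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: team-app
  annotations:
    # The per-Ingress knowledge this issue wants hoisted to the cluster level:
    ingress.gcp.kubernetes.io/pre-shared-cert: "my-cluster-wide-cert"
spec:
  rules:
  - host: app.example.com   # must be covered by the cluster certificate
    http:
      paths:
      - backend:
          serviceName: team-app
          servicePort: 80
```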

@rramkumar1 (Contributor)

@rodlogic Is there some Kubernetes tooling that can help you with this? I'm not too familiar with helm and terraform, but would it be worth taking a look at those?

I'm not sure there is much we can do here to solve this problem. Unless you have any other questions, I'll let you close this issue.

@bowei (Member) commented Feb 11, 2019

This kind of thing sounds like something best left to an external tool.

@rodlogic (Author)

I am trying to avoid each group/team having to know the underlying managed TLS certificate ID when deploying their own workloads. It is just unnecessary overhead.

I would like a cluster to serve workloads under a single DNS name and a single TLS certificate, provisioned alongside the cluster and picked up automatically by the ingress controller as a default whenever the Ingress DNS name matches and HTTPS is enabled.

This could be done via ingress controller command-line arguments, or even a default ConfigMap that is consulted when the Ingress doesn't provide the required annotations.
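A rough sketch of what such a defaults ConfigMap could look like; this is purely hypothetical (ingress-gce has no such ConfigMap), and the name, namespace, and keys are all invented for illustration:

```yaml
# Hypothetical cluster-wide defaults, consulted by the ingress controller
# whenever an Ingress omits the corresponding annotations.
# Not a real ingress-gce feature; everything here is illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-defaults      # hypothetical name
  namespace: kube-system
data:
  default-pre-shared-cert: "my-cluster-wide-cert"
  default-host: "app.example.com"
```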

@rodlogic (Author)

FYI: A similar issue was created on the AWS ALB Ingress Controller: kubernetes-sigs/aws-load-balancer-controller#746

@rramkumar1 (Contributor) commented Feb 13, 2019

@rodlogic This might be a terrible idea so bear with me...

What if you stored the cluster-wide TLS cert name or secret in a config map, and when a group/team creates an Ingress, have a mutating admission webhook [1] edit the Ingress object with that cert name/secret? In theory, this would free your teams from having to know about the TLS information, and you could easily swap the config by modifying the config map.

[1] https://medium.com/ibm-cloud/diving-into-kubernetes-mutatingadmissionwebhook-6ef3c5695f74
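A minimal sketch of how such a webhook could be registered, assuming a hypothetical in-cluster `ingress-defaulter` service that patches the cert annotation onto incoming Ingresses; every name below is invented for illustration:

```yaml
# Hypothetical registration for a webhook that injects the cluster-wide
# cert annotation into newly created Ingresses. The ingress-defaulter
# service and its caBundle are assumptions, not an existing component.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: ingress-defaulter
webhooks:
- name: ingress-defaulter.example.com
  rules:
  - apiGroups: ["extensions"]
    apiVersions: ["v1beta1"]
    operations: ["CREATE"]
    resources: ["ingresses"]
  clientConfig:
    service:
      name: ingress-defaulter   # hypothetical service implementing the mutation
      namespace: kube-system
      path: "/mutate"
    caBundle: "<base64-encoded CA bundle>"
  failurePolicy: Ignore         # don't block Ingress creation if the webhook is down
```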

@bowei (Member) commented Feb 13, 2019

You can use a webhook for synchronous modifications, or just have a controller watch and modify the objects asynchronously.

@rodlogic (Author)

Hi @rramkumar1, thanks for the alternative!

However, I am not sure I follow why this might be a terrible idea. I start from the assumption that this use case is more common than an outlier (at least it has also been requested of the ALB ingress controller, and was well received there). Considering the cost of a custom-built, deployed, and maintained mutating admission webhook, multiplied by the number of companies/projects with the same need, it seems to me that this feature pays for itself at a very small cost to the ingress-gce project.

I may be wrong, but I also think that infrastructure-specific defaults for other similar annotations would be very valuable.

@bowei (Member) commented Feb 15, 2019

This sounds like something that would go into an IngressClass resource. We should try to sketch out a generic design; this will not be the only common infra feature. Can you link the ALB feature request?
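For a sense of what that could look like: an IngressClass with a parameters reference did eventually land in networking.k8s.io/v1, but the defaults object it points at below (`IngressDefaults`) is purely hypothetical:

```yaml
# Sketch only: an IngressClass whose parameters reference a custom
# resource carrying cluster-wide defaults such as the TLS certificate.
# The IngressDefaults kind does not exist; it is illustrative.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: gce-default
spec:
  controller: k8s.io/ingress-gce   # controller string is illustrative
  parameters:
    apiGroup: example.com          # hypothetical API group
    kind: IngressDefaults          # hypothetical CRD
    name: cluster-defaults
---
apiVersion: example.com/v1alpha1
kind: IngressDefaults              # hypothetical
metadata:
  name: cluster-defaults
defaults:
  preSharedCert: "my-cluster-wide-cert"
```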

@rodlogic (Author)

This discussion is also interesting.

More specifically, this proposal and what it enables:

> Same. We have multiple environments (namespaces) in the same cluster and auto-detection via tags is how we assign DNS names for the services in each environment (via Route53/ExternalDNS). It would be great if something similar exists to support automatically assigning certs from ACM to the ALB so that we don't have to maintain a deployment spec with a unique cert ARN for each environment.

@rodlogic (Author) commented Feb 18, 2019

Hi @bowei, this is the similar ALB feature request, aside from the additional one I posted just now.

@bowei (Member) commented Feb 19, 2019

@rodlogic -- your link doesn't seem to work?

@rodlogic (Author)

@bowei I just fixed it, sorry.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on May 21, 2019.
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jun 20, 2019.
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
