[GLBC] Expose GCE backend parameters in Ingress object API #243
This will need to be a per Service configuration since currently ingresses share the backend for a given nodeport, so it makes more sense to specify it as an annotation on the Service. Basically it would be nice if the Service author could publish some timeouts for their Service, and any/all loadbalancers fronting the Service will respect these settings. |
The reason for it to be an annotation is that we're not ready to add it to every implementation of Services, yet. Maybe never. I would suggest something like |
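The exact annotation key suggested above is not preserved in this thread. Purely as an illustration of the idea — a Service author publishing a timeout that any fronting load balancer would respect — a sketch might look like this (the annotation key and all values are hypothetical):

```yaml
# Hypothetical sketch only: the annotation key below is invented for
# illustration and is not an actual Kubernetes or GLBC annotation.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Published by the Service author; any/all load balancers fronting
    # this Service would be expected to honor it.
    service.alpha.kubernetes.io/backend-timeout-seconds: "600"
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```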
I think I understand the reasoning for an annotation, thanks. Attempting to tackle this, I got this far.
|
Following up on the discussion on the CL, I'd appreciate more thoughts from more contributors on the subject of a generic timeout annotation vs. a GCLB-specific one. The issue is whether to take the generic path or the GCLB-specific one. WDYT? @thockin feel free to tag specific contributors :-) |
What about using the |
Oops, forgot that liveness/readiness probes don't live in the service. We would have to look up a pod under the service selector and check, similar to what we do for getting the health check request path. |
Many folks have expressed interest on this, so I'd like to keep the ball rolling. After reading the discussion on the CL, I'm also inclined to go with a generic annotation on Service. As I mentioned above, an easy option would be to look at a probe's configuration. |
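Concretely, "looking at a probe's configuration" would mean reading timing fields off a pod matched by the Service selector, similar to how the health check request path is already discovered. A sketch of the fields involved (image name and values are illustrative):

```yaml
# Sketch: a readiness probe already carries timing hints that a
# controller could read from a pod matched by the Service selector.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
  - name: app
    image: example/app:latest
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      # A controller could use fields like this as a backend-timeout hint.
      timeoutSeconds: 5
      periodSeconds: 10
```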
Are we happy with a single timeout, or do we need one per host/path (Service)? |
If we went with a single timeout, a user is bound to come along with a use case for multiple. Their argument will be that GCP supports different timeouts - the controller should support that feature too. |
Is it generic, then? Or can things like nginx reasonably implement this per host/path? I forget - did we consider a field? |
Pinging @aledbf for thoughts on having this for proxy_read_timeout. Tim, you mentioned possibly going straight to a field, but the question of being generic had to be answered first. |
yeah - if it is generic, let's run with fields, if we can. |
In nginx we have 3 settings related to timeouts, with predefined defaults, and annotations in the ingress that allow custom values (these settings are used to send the request to a different endpoint if there's more than one). From what I've seen, these defaults are OK unless you are running something like a docker registry or exposing a service for file upload. |
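For reference, the nginx timeout annotations referred to above look roughly like this in the nginx ingress controller (the annotation prefix has varied across controller versions, and the host/service names here are placeholders; values are in seconds):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: registry
  annotations:
    # Per-ingress overrides of nginx's predefined timeout defaults.
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
spec:
  rules:
  - host: registry.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: registry
          servicePort: 5000
```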
My use case is a phpmyadmin, for which the default 30s GCLB timeout is not enough; we'd like something on the order of 10 minutes. Re: "which object should be annotated", I'm leaning towards "I don't really care as long as it works ASAP", since my current need is so unsophisticated :-) With my sysadmin hat on, it feels like it should be on the ingress, since there are a bunch of possible timeout parameters (as the nginx example shows) and it's just more natural to think about some of them in the context of the ingress (being an abstraction of an LB). I also imagine that in a larger organization one team might own the deployment+service(s), and another might own the ingress(es). Since a single service might be fronted by different ingresses, with different needs and therefore timeouts (e.g. one for internal use and another exposed to external users), it also would make sense to specify the timeouts on the ingress rather than the service. As such, the fact that ingresses share backends seems like an artificial restriction. But these are just my conjectures; I don't know how large organizations in fact use k8s. |
For clarity, there are two questions being discussed before proceeding...
Hybrid solution?: Support description on service with an optional override-per-service annotation on ingress. (possibly overkill) |
My answers to the 2 questions formulated by @nicksardo:
1. service-specific.
2. specified on the Service object.
also, in relation to:
it's another topic that got me confused - since a service selector can match multiple pods, not all having the same health check specification necessarily, the existing behavior looks odd. it would seem more reasonable to have another health check specification at the service level, no? (off topic?) back to the timeout definition:
WDYT? where do we stand about this issue? |
@itamaro thanks for providing your feedback. Regarding the first question, I agree that timeouts should be backend-service specific. We should put this question to rest. For the second question, I like your response to the problem of having a service with varied expectations of timeouts. I agree that the owner of a service should be the most qualified to reason about what timeouts are most appropriate. However, I have a hard time getting past the use case of having a file-upload feature in a web service. From the standpoint of the service owner, they will say the timeout "depends" on what path you're talking about. Or they might give an umbrella "X seconds" with X being the longest expected timeout. This Ingress-Service dilemma has existed for a while and doesn't seem to have a right answer. Two proposals, "Better Ingress" and "Composite Services", would seem to help this situation if one of them were implemented. Since we don't currently have a way to express HTTP paths/attributes on a service, I'm leaning towards noting the timeout on the ingress object where we do have paths defined. Since nginx has an ingress-wide timeout setting, I also believe that this annotation should be GCP specific. Thoughts/comments? |
I think I agree with Nick's assessment. |
well, not sure I completely follow all the reasoning. naturally, a service with multiple endpoints has variance in expected latency. as a service owner myself, specifically of services that have sub-second endpoints alongside ~minute endpoints, I am familiar with the associated pains. I think that even if I had a way to express timeouts per path, I'd still rather specify the maximal timeout for the entire service, for KISS considerations. another possible argument - why stop at the "per path" level? even on a given path, different requests (authenticated / anonymous), with different types (POST / GET / HEAD / OPTIONS), with different request headers & parameters - can have widely different timeouts. why not define a protocol between the load balancer and the service that determines the request-specific timeout given the request metadata? anyway, I digress... I trust that you've seen more diverse use-cases, and you can choose the best tradeoff for this. I prefer an implemented good-enough solution over a theoretically perfect one :-) |
My feeling with timeouts is that they are part of the "how to use" rather than the "what it is". Reasonable minds can disagree; I agree that a working solution trumps all. |
For the time being, is there any workaround where I can manually change the timeout in the Google console and not have the controller revert the change? |
This is exactly what I've done and it seems to work? I have a test where I check it periodically. It seems to have kept its settings for at least a month now. I am a bit scared about the next time I make any change to the ingress of course :) |
@evanj what is it you did? |
@tsloughter I edited the load balancer's backend timeout through the Google Cloud Console web UI. I changed the timeout from 30s to 10m and it is still working, about 1 month later. |
@evanj oh, hm, ok, I assumed you meant something else because I had tried that but the timeout seemed to revert to 30s after a short period of time. I'll try again, thanks! |
@tsloughter The ingress controller does not update the timeout value. However, if you change ports of your service and a new nodeport is generated, the backend service will be regenerated. |
Hi guys and gals -- what's the current status here? Do we expect this feature will make it into a future release? Any idea on time frame? Cheers |
That syntax looks great @nicksardo. I wonder if it would be possible to include default settings that would be applied to every service in the ingress? For example, the following could be really useful on an ingress that exposes only IAP-backed services as a way to ensure that services aren't accidentally exposed without IAP: "iap": { "enabled": true } Our typical use case would be that of an ingress where IAP is enabled with the same OAuth 2 ID for every service other than the default service. Having settings automatically applied would go a long way to keep the annotation from becoming overly verbose. The naming could get confusing because of the default backend; perhaps a secondary annotation would make sense? |
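To make the "default settings" idea above concrete, a sketch of what such an annotation value could look like (the syntax here is entirely hypothetical and invented for illustration; it is not the syntax proposed earlier in the thread):

```yaml
# Hypothetical annotation value: a "default" block applied to every
# backend service in the ingress, with per-service overrides.
# All key names here are invented for illustration only.
default:
  iap:
    enabled: true
    oauth2ClientId: my-shared-client-id
backends:
  # Override for a single service that should remain public.
  public-service:
    iap:
      enabled: false
```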
@abstrctn: Would it be sufficient if we applied GCE backend settings per serviceName:servicePort rather than by path ("foo.bar.com/foo")? |
Comment #243 (comment) illustrated a "file-upload" use case that might benefit from path-level configuration. +1 for exposing config on the ingress object with path-level configuration; it seems flexible enough for most use cases and aligns with the ingress object definition (defining rules on hosts/paths), combined with the default config proposed by #243 (comment). |
@MrHohn: I ask from the perspective of making the annotations less brittle. It's unfortunate to have to duplicate host+path in an annotation when the setting really pertains to what's in the spec. The ingress resource really needs to be more composable. |
@nicksardo I love this idea. The two things I'm most looking forward to are |
@csbell It seems important to keep the settings configurable by path, since paths could point to entirely different services (e.g. However, if a goal is composability, would it make more sense to key settings off the serviceName? My understanding with GCP is that these settings correspond to the Backend Service created for each Type=NodePort service in kubernetes. So in some ways it wouldn't make sense to need to repeat that configuration if a service is exposed over multiple hostnames (that would also get confusing, because it wouldn't be clear which settings should take priority).
In that case, since |
Are there any workarounds for this currently? Specifically, configuring GCP HTTPS LB timeouts via Ingress configs? |
@fotinakis The workaround I am using is to manually configure the GCP timeout. It seems to persist unless you make a change to the ingress that deletes the backend, although I haven't carefully experimented to figure out exactly which changes will persist it. In my case, we extremely rarely need to change the ingress, so we just manually check that the timeout has not gone away. See #243 (comment) |
I am also setting this up manually. But any time (rare though) we had to change the ingress, we needed to modify this value manually. And yes, we have missed it a couple of times. Is there any way we can automate this? |
I see IAP is no longer in beta — we're using the bit.ly oauth proxy for now, but would love not to have to :). |
Trying to recap the current state of this issue since #243 (comment). It seems like there aren't any objections to the following (but please correct me if I'm mistaken):
Which leaves some open questions: |
I don't know if I'm exactly the most qualified person to answer any of these open questions. But I would like to see this move forward, so I will try and help any way I can. My thoughts on: 1 & 2. I like the direction the new key style is headed. I think if we let people specify defaults, we should let them specify defaults at an ingress and service level:
Default values in services should override default values at the ingress level. Instead of the Edited in seconds later:
|
Actually, the UID in the backend service name is used for the entire cluster - not per ingress. Therefore, having these settings on the service key is also "not possible" in the current implementation if multiple ingresses reference the same service(s). For either direction, changes must be made to the naming scheme, which is why this hasn't been implemented. I'm worried that if we go in the wrong direction, it'll require more naming scheme changes and another painful migration. I believe we should consider what the ideal ingress spec is and model this annotation accordingly. Assuming an ingress spec could be GCP specific, how would these settings be best expressed? Embedded into the host/path->service/port mapping?

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  backend:
    serviceName: s0
    servicePort: 80
    enableCDN: true
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
          timeout: 200
      - path: /bar/*
        backend:
          serviceName: s2
          servicePort: 80
          iap:
            enabled: true
```

Or keep the settings separate?

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  params:
    s0:
      80:
        enableCDN: true
    s1:
      80:
        timeout: 200
    s2:
      80:
        iap:
          enabled: true
  backend:
    serviceName: s0
    servicePort: 80
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
          timeout: 200
      - path: /bar/*
        backend:
          serviceName: s2
          servicePort: 80
          iap:
            enabled: true
```

To understand what works best there, I think we need to get a better idea of how people build their services. Do users usually serve everything from one service-port? Static content, file uploads, IAP-protected content, public content, etc.? Or do they split them out into separate services? It seems that the choices made here will greatly influence how services are formed. Considering that setting up an ingress comes after setting up services, first-time GCP ingress users may have to re-structure their services & deployments just to make these ingresses usable. |
@nicksardo I think your examples have the same problems as path expressions across implementations (#555) |
Right. We shouldn't keep the annotation long term. Assume GCP-ingress had its own CRD, what would we want it to look like? |
Would it be possible to agree on a common denominator and then allow extension per implementation? This would also move us away from the annotations.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  type:
    gclb: # reserved for GCLB, this is just an example
      services:
      - serviceName: s1
        servicePort: 80
        iap: true
        cdn: false
    nginx: # reserved for NGINX
  backend:
    serviceName: s0
    servicePort: 80
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
```
|
Sorry, I didn't mean to start a discussion on how GCP-specific spec is possible. I only wanted to ask how would the settings be surfaced. |
Could CRDs be the answer to customization? Like assuming one CRD per ingress (with the same namespace and name); if such a CRD exists, it contains the customization for the rule? |
Exposing even just
Within the context of Ingress, where configuration is nested under hosts and paths, putting any additional configuration into the spec creates a concept of a "backend" different from a Service. Maybe that's something we want to do as part of Ingress cross-platform? It would certainly make configuration easier if it could be added inline to the rules array. But that's a big change, and doesn't seem necessary when the majority of simple use cases probably don't need that flexibility, and it's achievable by adding additional services. What if we had a custom resource definition that was a literal representation of the GCP Backend Service within Kubernetes?

```yaml
apiVersion: cloud.google.com/v1
kind: BackendService
metadata:
  name: k8s-be-32015--69443475d3140027
spec:
  enableCDN: false
  timeoutSec: 30
  iap:
    enabled: true
    oauth2ClientId: ...
```

Or something that more closely mirrored an individual Service, with multiple potential backends:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendService
metadata:
  name: s0
spec:
  ports:
  - port: 80 # maybe just "port" or "name"
    name: http
    enableCDN: false
    timeoutSec: 30
    iap:
      enabled: true
      oauth2ClientId: ...
  - port: 81
    enableCDN: true
```
|
@bowei @nicksardo are there any docs for this API/implement IAP with |
IAP is a GCP product, so I don't think it makes sense to expose this on NGINX. You probably want to file a feature request for the NGINX-specific equivalent? |
@bowei I was just trying to ascertain what the resolution was here, as other issues discussing IAP support point to it #421 (comment). I think I understand the history now: this discussion is about the GCE Ingress API, which was moved to a separate repo on Oct 11, 2017. I was confused and thought it was about exposing such an API on Thanks for the quick response! |
When using GCE Ingress controller, the GCE Ingress controller (GLBC) provisions GCE backends with a bunch of default parameters.
It would be great if it were possible to tweak the parameters that are currently "untweakable" from the Ingress object API (a.k.a. from my YAMLs).
Specific use case: GCE backends are provisioned with a default timeout of 30 seconds, which is not sufficient for some long requests. I'd like to be able to control the timeout per-backend.