GCE health check does not pick up changes to pod readinessProbe #39
From @ConradIrwin on April 11, 2017 4:22:

Comment by @nicksardo: There's actually a TODO for deciding what to do in this case: https://github.com/kubernetes/ingress/blob/6ed6475e87588494dd29ba5d33bac167ae8acfdd/controllers/gce/healthchecks/healthchecks.go#L71. The controller doesn't want to overwrite handmade changes. We could probably update the settings if they still have the default values.

From @porridge on April 11, 2017 7:45: But updating from the default value would lead to confusing behavior where only […]. I think it's reasonable to assume that if the ingress controller knows how […].
From @nicksardo on April 11, 2017 16:23: I agree that it's reasonable and would be an easy solution. However, I worry about unsuspecting users who upgrade their master and find their load balancer throwing 502s. Would creating an event on health check update be sufficient? Thoughts from other users?

From @ConradIrwin on April 11, 2017 16:31: Is there any way that the ingress controller can track whether the value was set automatically?

From @nicksardo on April 11, 2017 16:56: The controller does not have persistent storage. One option would be to convert the health check settings to JSON and store the state in the HealthCheck's optional description field for tracking diffs, but this seems very ugly.

From @nicksardo on April 11, 2017 16:56: cc @ibawt

From @porridge on April 11, 2017 16:57: What do you mean by "creating an event on health check update"?

From @nicksardo on April 11, 2017 17:10: If the controller sees a health check with settings different from the default/readinessProbe, it updates the check and could then create a K8s event for each parent ingress. If a readiness probe exists, the event could say, "Health check for port 3XXXX updated to readinessProbe settings". If the probe doesn't exist, "Health check for port 3XXXX updated to default settings (see ingress docs for defining settings via readinessProbe)".
From @ConradIrwin on April 11, 2017 17:21: Just my 2c: as a Kubernetes user, I have no idea how to use events; they […]. It might just be a UI problem: if there was a way of just tailing the […].
From @kilianc on May 14, 2017 21:36: Is there any workaround to make it pick up the changes, even manually?

From @kilianc on May 14, 2017 21:50: To answer my own question, and for everybody landing here with the same problem: rotate the name of the service in your rule or backend, and the controller will pick up the path correctly. I assume this could be done with zero downtime by creating a new rule and deleting the old one.
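A minimal sketch of that rename workaround, with all names hypothetical (`web`, `web-v2`): since the controller keys the GCE backend and its health check off the service the ingress rule points at, repointing the rule at a newly named copy of the service forces a fresh health check that reflects the current readinessProbe.

```yaml
# Hypothetical sketch of the rename workaround: create a copy of the Service
# under a new name, repoint the Ingress rule at it, then delete the old one.
apiVersion: v1
kind: Service
metadata:
  name: web-v2              # was "web"; the new name forces a new backend + health check
spec:
  type: NodePort            # required for GCE ingress backends
  selector:
    app: web                # same pods as before
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1   # Ingress API group in use at the time of this thread
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: web-v2    # updated from "web"
          servicePort: 80
```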
From @bviolier on June 8, 2017 7:26: We actually encountered an issue yesterday in regard to readinessProbes and health checks on GCE. A while ago we had some difficulties with setting up the health-check correctly through Kubernetes; given that it doesn't pick up changes, we decided to manually update the health-check on the GCE side, and everything worked. But, reading this issue, it should never update health-checks. I cannot re-trigger an update now, so it doesn't seem that this functionality has been added. But the master upgrade definitely did trigger this; is that expected behavior?

From @bviolier on June 8, 2017 7:34: Another gotcha when using the example provided (https://github.com/kubernetes/ingress/blob/6ed6475e87588494dd29ba5d33bac167ae8acfdd/controllers/gce/examples/health_checks/health_check_app.yaml): if you remove the […]. All our reset health-checks didn't have a […].

From @nicksardo on June 8, 2017 15:34: Regarding your last comment, […]. Regarding your loss of manual health check settings: the latest version of the controller creates a newer type of health check ([…]). What settings are you trying to override with the readiness probe? Just the path?

From @bviolier on June 8, 2017 15:52: @nicksardo Thanks for the information! I was not aware that the […]. Makes sense if it created a new type of health-check; thanks for the explanation. I do think more people who manually edited their health checks will run into this issue, but manually editing health checks is also not the best thing to do ;-) The most important changes we make with […]
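For context, a sketch of the readinessProbe fields the controller reads when it first creates a GCE health check, loosely based on the health_check_app.yaml example linked above. The names, values, and field-to-setting mapping noted in the comments are illustrative assumptions, and per this issue, later edits to these fields are not synced to an existing check.

```yaml
# Illustrative only: a Deployment whose readinessProbe seeds the GCE health
# check on first creation (rough mapping shown in the comments).
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: gcr.io/example/web:1.0    # hypothetical image
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz        # roughly maps to the check's request path
            port: 8080
          periodSeconds: 15       # roughly maps to the check interval
          timeoutSeconds: 5       # roughly maps to the check timeout
          successThreshold: 1     # roughly maps to the healthy threshold
          failureThreshold: 2     # roughly maps to the unhealthy threshold
```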
From @nicksardo on June 9, 2017 20:56: @bviolier […]

I've been bitten by this issue more than any other issue regarding k8s and GKE. I think it happens due to an incorrect abstraction level. Looking at this from 30,000 ft, it seems wrong that the ingress controller looks through services into pods to see their probe configuration. What if there are different pods with different probe configurations, or even non-pod backends? A service-level health check should be defined at the service level. Ingress-level configuration would work too but would be suboptimal (services can belong to multiple ingresses, and it would be bad to duplicate). Furthermore, the ingress is actually checking whether the service is reachable via a node, not that a pod is alive; it might make sense to have totally separate health checks for these. Also, this would be useful service metadata for any service clients, not just ingress.
Relevant issue: kubernetes/kubernetes#28924

Today, when updating an ingress with a new host/service/deployment, a GCP health check was created for […]. It took a LONG while to figure out why... Bug or feature? =)

Adding to the docs that we need to deploy the pods first and then roll out the ingress would help, with a warning. Wasted quite a bit of time on this one.

Will this be solved in Kubernetes 1.9? Especially for GLBC 7 […]

It seems when I use […]

Agree with @axot. Is this expected behaviour or a bug?
2 cents as a user: I can appreciate the desire to not break things on upgrade, but the ingress not updating the health check is a real nuisance. Would annotations be a solution to this? We use them already for specifying a static IP for the load balancer (for example), so we could extend that to other resources. For upgrade compatibility, we could assign whatever backend/load balancer/health check/etc. is currently in use to said annotation. Then, in future, if the annotation isn't set, the ingress knows it has carte blanche to do whatever; if it is set, it knows the resource is human-specified. This would also be future-proof if we decide to allow specifying an existing load balancer on creation, to balance between clusters.
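A sketch of what that proposal could look like. Only `kubernetes.io/ingress.global-static-ip-name` is a real annotation; the health-check annotation below does not exist and is purely an illustration of the idea, with all names hypothetical.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
  annotations:
    # Real annotation already honored by the GCE controller:
    kubernetes.io/ingress.global-static-ip-name: web-static-ip
    # HYPOTHETICAL annotation, not a real API: if set, the controller would
    # treat the named health check as human-managed and never reconcile it;
    # if unset, the controller would be free to sync it from the readinessProbe.
    example.com/gce-health-check: web-manual-healthcheck
spec:
  backend:
    serviceName: web
    servicePort: 80
```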
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale |
I believe the path forward here is to declare the health check explicitly on the BackendConfig CRD: #42 (comment)

Agreed that BackendConfig is the way to go here. I am going to close this bug, given that we are most likely not going to support updates to readiness probes. We will happily accept community contributions for #42 :)

/close
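For readers landing here later, a sketch of what declaring the health check explicitly looks like, based on my understanding of the BackendConfig API that GKE later shipped (cloud.google.com/v1). All names and values are illustrative; check the current GKE docs for the authoritative field set.

```yaml
# Sketch: an explicit health check declared on a BackendConfig, with the
# Service opting in via the backend-config annotation (names hypothetical).
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: web-backendconfig
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz
    port: 8080
    checkIntervalSec: 15
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
---
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    cloud.google.com/backend-config: '{"default": "web-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```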
@rramkumar1: Closing this issue. In response to this: […]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
From @ConradIrwin on April 11, 2017 4:22:
Importing this issue from kubernetes/kubernetes#43773 as requested.
Kubernetes version (use kubectl version): 1.4.9

Environment (uname -a): […]

What happened:
I created an ingress with type GCE. This set up the health-check on the Google Cloud backend endpoint to call "/" as documented. Unfortunately my service doesn't return 200 on "/", and so I added a readinessProbe to the pod as suggested by the documentation.

What you expected to happen:
I expected the health check to be automatically updated.
Instead I had to delete the ingress and re-create it in order for the health-check to update.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
Copied from original issue: kubernetes/ingress-nginx#582