Control plane access review requests fail when proxy is unavailable #2407
As @siggy and I discussed, one potential quick-and-dirty (maybe temporary) solution to this is to simply add 443 to the set of outbound skip ports. We don't get any real value from forwarding the TLS stream through the proxy, and it introduces a set of annoying operational issues that are difficult to fully eliminate (until Kubernetes offers better ordering primitives).
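The skip-ports workaround described above is expressed through a Linkerd proxy-injection annotation. A minimal sketch of what this looks like on a control-plane pod (the pod name here is illustrative, not taken from the actual manifests):

```yaml
# Sketch: mark outbound port 443 (the Kubernetes API) to bypass the
# injected linkerd-proxy entirely, so API calls succeed even before
# the proxy sidecar is ready.
apiVersion: v1
kind: Pod
metadata:
  name: linkerd-controller   # illustrative name
  annotations:
    config.linkerd.io/skip-outbound-ports: "443"
```

With this annotation in place, the init container programs iptables so that traffic to port 443 never traverses the proxy, removing the startup-ordering dependency.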
siggy added a commit that referenced this issue on Feb 27, 2019:
#2349 introduced a `SelfSubjectAccessReview` check at startup, to determine whether each control-plane component should establish Kubernetes watches cluster-wide or namespace-wide. If this check occurs before the linkerd-proxy sidecar is ready, it fails, and the control-plane component restarts.

This change configures each control-plane pod to skip outbound port 443 when injecting the proxy, allowing the control plane to connect to Kubernetes regardless of the `linkerd-proxy` state.

A longer-term fix should involve a more robust control-plane startup that is resilient to failed Kubernetes API requests. An even longer-term fix could involve injecting `linkerd-proxy` as a Kubernetes "sidecar" container, when that becomes available.

Workaround for #2407

Signed-off-by: Andrew Seigner <siggy@buoyant.io>
This was fixed by #2411.
When installing the control plane from latest master, I see that some components have restarted:
Looking at the logs for a container that restarted I see this printed immediately prior to exiting:
This appears to be a result of the RBAC checks added in #2349. We need to add code that retries those requests until they succeed, rather than exiting, to account for the fact that the proxy won't be immediately available on container startup.