In #46 we're implementing a CA for generating certificates for the etcd-proxy. In the API Server namespace, we have a ConfigMap and a Secret with the serving CA bundle and the client certificate and key. The API Server uses those certificates to communicate with the etcd-proxy.
Currently, certificate regeneration is not implemented, but we're planning to add it as soon as #46 is merged.
When the Secret with the client certificate and key is updated, the certificate files in the pods are updated automatically (ensured by the kubelet). But to put the changes into effect, the API Server has to be restarted. We want to find the best way to restart the API Server while minimizing the downtime of doing so.
The first solution we considered is to simply delete the pods and let the Deployment recreate them. However, this solution brings several problems:
As mentioned above, downtime. In this case the downtime can be higher, and we want to avoid that.
If the API Server is not deployed using a Deployment or ReplicaSet, but as a standalone pod, the pod is not going to be recreated. While this is not advised, it can happen. As this is a rare case, we could require deploying with a ReplicaSet or Deployment, but I want to avoid enforcing a deployment method.
Implementation can be hard. We need to tell the EtcdProxyController which Deployment to restart, and this could be tricky. RBAC permissions are also a problem, as we need to ensure the controller's ServiceAccount can get and delete pods, as well as list Deployments.
@sttts proposed implementing a sort of graceful restart: we can add a watcher that detects when the certs change, then cut all gRPC connections and just replace the etcd certificates.
That logic would be added to k8s.io/apiserver, guarded by a flag; let's name it --restart-on-certs-change.
This change will happen in at least two stages:
Making the API Server pick up new etcd certs.
Making the API Server pick up new API Server certs. API Server cert handling is not yet implemented in the EtcdProxyController, but it is planned for the ongoing milestone.
This depends on certificate regeneration. See #50 for more details.
We may have to create this issue in kubernetes/kubernetes as well.