Add instructions for 3.2 -> 3.3 upgrade for security #2227
Comments
I'm not, given the implications if it isn't done.
By manual step I mean you have to explicitly go past this step yourself.
It looks like seccomp profiles are also implemented using annotations now? Do we need to check those in SCC, and sweep for bad annotations in existing data for that as well?
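For illustration only, a minimal sketch of what such a sweep could look like, assuming the alpha-era annotation key prefixes (`seccomp.security.alpha.kubernetes.io/` and `container.seccomp.security.alpha.kubernetes.io/`) and a logged-in `oc` client; the keys and output are assumptions, not part of any existing playbook:

```python
# Hypothetical sweep for seccomp annotations on existing pods.
# Assumes the alpha-era annotation key prefixes; adjust for your cluster.
import json
import subprocess

SECCOMP_PREFIXES = (
    "seccomp.security.alpha.kubernetes.io/",
    "container.seccomp.security.alpha.kubernetes.io/",
)

def pods_with_seccomp_annotations():
    """Return (namespace, name, key, value) for every seccomp annotation found."""
    raw = subprocess.check_output(
        ["oc", "get", "pods", "--all-namespaces", "-o", "json"]
    )
    hits = []
    for pod in json.loads(raw.decode("utf-8"))["items"]:
        meta = pod["metadata"]
        for key, value in (meta.get("annotations") or {}).items():
            if key.startswith(SECCOMP_PREFIXES):
                hits.append((meta["namespace"], meta["name"], key, value))
    return hits

if __name__ == "__main__":
    for ns, name, key, value in pods_with_seccomp_annotations():
        print("%s/%s  %s=%s" % (ns, name, key, value))
```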
Should this also be noted in https://github.com/openshift/origin/blob/master/UPGRADE.md?
For seccomp, yes.
@liggitt and @smarterclayton, final decision on automating this versus a manual step?
So we're saying that we should stop the install after upgrading the masters, have them copy/paste something, then restart the upgrade?
Upgrade is not currently a split process; we handle all masters and nodes in one invocation. Exiting after the masters to prompt for this, then resuming with the nodes, would be too much refactoring to consider at this point and an unpleasant user experience we should try to avoid if possible.

If upgrade involves an evacuation of every node, do we have to actually delete these pods, or could we just let the evac recreate them? @smarterclayton @liggitt

If it has to be done, we should probably document that it WILL happen during the upgrade and just do it automatically. I will start down this path and add the command above after all masters are updated; please let me know if anyone objects.
Also, could anyone provide a simple example of an init pod that should get cleaned up by this, for testing? We can also add an inventory variable to disable this step, and document that for admins who want to skip it. If they don't read the docs, the init pods get cleaned by default.
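In case it helps with testing, here is a hedged sketch of a bare pod that declares an init container through the 1.3-era alpha annotation; the annotation key `pod.alpha.kubernetes.io/init-containers`, the images, and all names are illustrative assumptions only:

```python
# Hypothetical bare pod (no replication controller) carrying an init container
# via the 1.3-era alpha annotation, for exercising the cleanup step.
import json
import subprocess

init_containers = [{
    "name": "wait",
    "image": "busybox",
    "command": ["sh", "-c", "sleep 5"],
}]

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "init-test",  # made-up name
        "annotations": {
            # assumed annotation key for alpha init containers
            "pod.alpha.kubernetes.io/init-containers": json.dumps(init_containers),
        },
    },
    "spec": {
        "containers": [{
            "name": "main",
            "image": "busybox",
            "command": ["sh", "-c", "sleep 3600"],
        }],
    },
}

# Feed the manifest to `oc create` in the current project.
subprocess.run(
    ["oc", "create", "-f", "-"],
    input=json.dumps(pod).encode("utf-8"),
    check=True,
)
```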
Seeing some weird behavior here: I found an example for creating an init container, but when I do an upgrade (without running my change) it's still gone. Is something else cleaning these up internally?
OK, I discovered this appears to be because it's just a bare pod: if the node it's running on is evacuated, the pod is deleted, as there's nothing to recreate it without a replication controller or replica set. Bearing in mind that during upgrade we first upgrade all the masters, then serially upgrade all the nodes, I believe all such pods will be deleted, so based on this I don't think there's any upgrade step required here. @smarterclayton can you confirm?
Spoke with Clayton on IRC. Because we evacuate all nodes today, this step is not needed for the 3.3 upgrade, and it shouldn't be run during future 3.4+ upgrades, so there is no point in integrating it into the upgrade playbooks today.

Clayton raised concerns about node evac being mandatory; I've added a note to the 3.4 Trello card to investigate this as an improvement for upgrade, since we're probably getting close to a point where we might not be able to assume it. (3.3 is probably the first release that doesn't require a Docker config change and might not require a Docker upgrade, depending on whether they went to Docker 1.10 while still running OpenShift 3.2.)

I will either file an issue or update the manual upgrade steps in the docs to outline this step.
Is there someone who can help me with the upgrade of an OpenShift Origin cluster from 1.2 to 1.3? The v3_4 upgrade? Regards, João
1.3/3.3 will add init containers, which have security implications if the user has pre-created pods with init containers that their policy does not allow. During upgrade, admins must do the following if they are concerned about this vulnerability:
1. Upgrade their masters (api)
2. Run this script to delete all pods that use init containers (a hedged sketch is shown below):
3. Upgrade the rest of the cluster
We could script this but I think this should be manual.
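For context, a minimal sketch of what the deletion script might look like, assuming cluster-admin access, a working `oc` client, and the alpha annotation key `pod.alpha.kubernetes.io/init-containers`; none of this is confirmed as the actual command referenced in this issue:

```python
# Hypothetical cleanup: delete every pod carrying the alpha init-container
# annotation. Dry-run by default; requires cluster-admin and the `oc` CLI.
import json
import subprocess

ANNOTATION = "pod.alpha.kubernetes.io/init-containers"  # assumed key

def delete_init_container_pods(dry_run=True):
    raw = subprocess.check_output(
        ["oc", "get", "pods", "--all-namespaces", "-o", "json"]
    )
    for pod in json.loads(raw.decode("utf-8"))["items"]:
        meta = pod["metadata"]
        if ANNOTATION not in (meta.get("annotations") or {}):
            continue
        ns, name = meta["namespace"], meta["name"]
        if dry_run:
            print("would delete %s/%s" % (ns, name))
        else:
            subprocess.check_call(["oc", "delete", "pod", name, "-n", ns])

if __name__ == "__main__":
    delete_init_container_pods(dry_run=True)  # set dry_run=False to delete
```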