Add instructions for 3.2 -> 3.3 upgrade for security #2227

Closed
smarterclayton opened this issue Jul 27, 2016 · 14 comments

@smarterclayton
Contributor

1.3/3.3 will add init containers, which have security implications if a user has pre-created pods with init containers that their policy does not allow. During the upgrade, admins who are concerned about this vulnerability must do the following:

  1. Upgrade their masters (api)

  2. Run this script to delete all pods that use init containers:

    oc get pods --all-namespaces --template '{{ range .items }}{{ if ne (index .metadata "annotations" "pod.alpha.kubernetes.io/init-containers" | len) 0 }}pods/{{ .metadata.name }} -n {{ .metadata.namespace }}{{ "\n" }}{{ end }}{{ end }}' | xargs -L 1 oc delete
    
  3. Upgrade the rest of the cluster

We could script this but I think this should be manual.
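For admins who want to preview what would be removed before deleting anything, a read-only sketch is the same template with the final xargs step dropped (it assumes only the alpha annotation name used above):

    # Preview only: print "pods/<name> -n <namespace>" for every pod that carries
    # the alpha init-containers annotation, without deleting anything.
    oc get pods --all-namespaces --template '{{ range .items }}{{ if ne (index .metadata "annotations" "pod.alpha.kubernetes.io/init-containers" | len) 0 }}pods/{{ .metadata.name }} -n {{ .metadata.namespace }}{{ "\n" }}{{ end }}{{ end }}'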

@smarterclayton smarterclayton added this to the 1.3/3.3 milestone Jul 27, 2016
@smarterclayton
Contributor Author

@sdodson @deads2k @detiber this is related to the security of new features enabled in the 1.2 -> 1.3 upgrade that could introduce risk. I'm ok with this being a manual step in docs if necessary.

@liggitt
Contributor

liggitt commented Jul 27, 2016

> I'm ok with this being a manual step in docs if necessary.

I'm not, given the implications if it isn't done.

@smarterclayton
Contributor Author

By manual step I mean you have to explicitly go through this step yourself.

@liggitt
Contributor

liggitt commented Jul 29, 2016

It looks like seccomp profiles are also implemented using annotations now? Do we need to check those in SCC, and sweep existing data for bad annotations there as well?
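If such a sweep turns out to be needed, a sketch along the same lines as the init-container command might look like the following; it assumes the pod-level alpha annotation key seccomp.security.alpha.kubernetes.io/pod, and the per-container keys (container.seccomp.security.alpha.kubernetes.io/<name>) would need a separate pass:

    # Sketch only: list pods that carry the pod-level alpha seccomp annotation.
    # Per-container seccomp annotations are not covered by this one-liner.
    oc get pods --all-namespaces --template '{{ range .items }}{{ if ne (index .metadata "annotations" "seccomp.security.alpha.kubernetes.io/pod" | len) 0 }}pods/{{ .metadata.name }} -n {{ .metadata.namespace }}{{ "\n" }}{{ end }}{{ end }}'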

@sdodson
Member

sdodson commented Jul 29, 2016

Should this also be noted in https://github.com/openshift/origin/blob/master/UPGRADE.md ?

@smarterclayton
Contributor Author

For seccomp, yes.

@sdodson
Member

sdodson commented Aug 16, 2016

@liggitt and @smarterclayton final decision on automating this versus manual step?

@sdodson
Member

sdodson commented Aug 16, 2016

So we're saying that we should stop the install after upgrading the masters, have them copy/paste something, then restart the upgrade?

@dgoodwin
Contributor

Upgrade is not currently a split process; we handle all masters and nodes in one invocation. Exiting after the masters to prompt for this step, then resuming with the nodes, would be too much refactoring to consider at this point, and an unpleasant user experience we should try to avoid if possible.

If upgrade involves an evacuation of every node, do we have to actually delete these pods or could we just let the evac recreate them? @smarterclayton @liggitt

If it has to be done, we should probably document that it WILL be done during your upgrade and just do it automatically. I will start on this path and add the command above after the masters are all updated; please let me know if anyone objects.

@dgoodwin
Contributor

Also, could anyone provide a simple example of an init pod that should get cleaned up by this, for testing?

We can also add an inventory variable to disable this step, and document that for admins who want to skip it. If they don't read the docs, the init pods get cleaned by default.
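For testing, a minimal bare pod carrying the alpha init-containers annotation might look like the sketch below; the pod name, image, and commands are illustrative, and only the annotation key matches what the sweep above looks for:

    # Illustrative only: a bare pod (no replication controller) whose init container
    # is declared via the alpha annotation, a JSON array of container specs.
    oc create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-test
      annotations:
        pod.alpha.kubernetes.io/init-containers: '[{"name":"wait","image":"busybox","command":["sh","-c","sleep 5"]}]'
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
    EOF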

@dgoodwin
Contributor

Seeing some weird behavior here: I found an example for creating an init container, but when I do an upgrade (without running my change) the pod is gone afterward anyway. Is something else cleaning these up internally?

@dgoodwin
Contributor

OK, I discovered this appears to be because it's just a bare pod: if the node it's running on is evacuated, the pod is deleted, as there's nothing to recreate it without a replication controller or replica set.

Bearing in mind that during upgrade we first upgrade all the masters, then serially upgrade all the nodes, I believe all such bare pods will be deleted, so I don't think there's any upgrade step required here. @smarterclayton can you confirm?
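To check this behavior by hand, the evacuation can be reproduced with the 3.x node management commands; the node name below is a placeholder, and whether --force is needed to remove pods not backed by a replication controller may depend on the release:

    # Mark the node unschedulable, then evacuate it. A bare pod running there is
    # simply deleted, since there is no replication controller to recreate it.
    oadm manage-node node1.example.com --schedulable=false
    oadm manage-node node1.example.com --evacuate --force

    # The bare test pod should no longer be listed.
    oc get pods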

@dgoodwin
Contributor

Spoke with Clayton on IRC. Because we assume all nodes are evacuated today, this step is not needed for the 3.3 upgrade, and it shouldn't be run for future 3.4+ upgrades, so there is no point in integrating it into the upgrade playbooks today.

Clayton raised concerns about node evac being mandatory; I've added a note to the 3.4 Trello card to investigate this as an upgrade improvement, since we're probably getting close to a point where we can no longer assume it. (3.3 is probably the first release that doesn't require a Docker config change and might not require a Docker upgrade, depending on whether they went to Docker 1.10 while running OpenShift 3.2.)

I will either file an issue or update the manual upgrade steps in docs to outline this step.

@JoaoJoia

Is there someone who can help me with upgrading an OpenShift Origin cluster from 1.2 to 1.3, the v3_4 upgrade? Can someone guide me? I want to bring my cluster up to date with the new OpenShift Origin version.

Regards, João
