OCP 3.3 release notes tracker #2507

Closed
adellape opened this issue Jul 15, 2016 · 6 comments

Comments

@adellape
Contributor

adellape commented Jul 15, 2016

Tracker for new features, enhancements, and bug fixes to consider for inclusion in the OCP 3.3 release notes.

@danwinship
Contributor

danwinship commented Jul 25, 2016

In 3.2 and earlier, if you were using the redhat/openshift-ovs-multitenant network plugin, and you manually created a service Endpoint pointing to a pod or service owned by another tenant, then that Endpoint would be ignored. In 3.3, it is no longer possible for ordinary users to create such an endpoint (#2443, openshift/origin#9383) and so the plugin no longer bothers to filter them out (openshift/origin#9982). However, previously-created illegal endpoints might still exist; if so, the (old, pre-upgrade) logs will show warnings like:

Service 'foo' in namespace 'bob' has an Endpoint inside the service network (172.30.99.99)
Service 'foo' in namespace 'bob' has an Endpoint pointing to non-existent pod (10.130.0.8)
Service 'foo' in namespace 'bob' has an Endpoint pointing to pod 10.130.0.4 in namespace 'alice'

indicating the illegal Endpoints object. These log messages are the simplest way to find such illegal endpoints, but if you no longer have the pre-upgrade logs, you can try commands like the following to search for them:

# Find endpoints pointing to default ServiceNetworkCIDR (172.30.0.0/16)
oc get endpoints --all-namespaces --template '{{ range .items }}{{ .metadata.namespace }}:{{ .metadata.name }} {{ range .subsets }}{{ range .addresses }}{{ .ip }} {{ end }}{{ end }}{{ "\n" }}{{ end }}' | awk '/ 172\.30\./ { print $1 }'

# Find endpoints pointing to the default ClusterNetworkCIDR (10.128.0.0/14).
# Only services without a selector are checked, since their endpoints are
# created manually rather than by the endpoints controller.
for ep in $(oc get services --all-namespaces --template '{{ range .items }}{{ range .spec.selector }}{{ else }}{{ .metadata.namespace }}:{{ .metadata.name }} {{ end }}{{ end }}'); do
    oc get endpoints --namespace $(echo $ep | sed -e 's/:.*//') $(echo $ep | sed -e 's/.*://') --template '{{ .metadata.namespace }}:{{ .metadata.name }} {{ range .subsets }}{{ range .addresses }}{{ .ip }} {{ end }}{{ end }}{{ "\n" }}' | awk '/ 10\.(12[89]|13[01])\./ { print $1 }'
done
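
One way to clean up an offending entry, once you have found it, is to delete the Endpoints object and, if it is still needed, recreate it with only legal addresses. A minimal sketch, using the hypothetical service foo in namespace bob from the log messages above:

# Remove the illegal Endpoints object; recreate it with legal addresses if it is still needed.
oc delete endpoints foo --namespace bob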

@liggitt

liggitt commented Aug 2, 2016

openshift/origin#10109

When tagging images across namespaces (e.g. oc tag ns1/image-stream-a:tag-a ns2/image-stream-b:tag-b), a user must have pull permission on the source image stream. This means they need get access on the imagestreams/layers resource in the source namespace. The admin, edit, and system:image-puller roles all grant this permission.
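
For example, a minimal sketch using a hypothetical user alice and the ns1/ns2 names from above (the role grant is standard oc policy usage, not something specific to this PR):

# Grant alice pull access on image streams in the source namespace ns1.
oc policy add-role-to-user system:image-puller alice -n ns1
# alice can now tag across namespaces (assuming she can edit image streams in ns2).
oc tag ns1/image-stream-a:tag-a ns2/image-stream-b:tag-b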

@smarterclayton
Contributor

openshift/origin#9972

OpenShift 1.3 / 3.3 has altered the DNS records returned by SRV requests for services in order to be compatible with Kubernetes 1.3 and to support PetSets. The primary change is that the SRV records for a name no longer enumerate all available ports; instead, if you want to find a port named http over protocol tcp, you must specifically ask for that SRV record (see the example queries after the list below).

  1. The SRV records returned for the service name (SVC.NAMESPACE.svc.cluster.local) have changed.

    Previously, we would return one SRV record per service port, but to be compatible with Kube we now return SRV records representing endpoints (ENDPOINT.SVC.NAMESPACE.svc.cluster.local) without port info (a port of 0).

    A clustered service (type ClusterIP) will have one record pointing to a generated name (e.g. 340982409.SVC.NAMESPACE.svc.cluster.local) and an associated A record pointing to the cluster IP.

    A headless service (with clusterIP=None) returns one record per address field in the Endpoints record (typically one per pod). The endpoint name is either the hostname field in the endpoint (read from an annotation on the pod) or a hash of the endpoint address, and has an associated A record pointing to the address matching that name.

  2. The SRV records returned for an endpoint name (ENDPOINT.SVC.NAMESPACE.svc.cluster.local) have changed - a single SRV record is returned if the endpoint exists (the name matches the generated endpoint name described above) or no record if the endpoint does not exist.

  3. The SRV records for a given port - _PORTNAME._PROTOCOL.SVC.NAMESPACE.svc.cluster.local - behave as they did before, returning port info.
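
Example queries, a sketch using hypothetical service and namespace names (myservice in myproject), run from somewhere that can reach the cluster DNS:

# Per-port SRV lookup behaves as before and still returns port info:
dig +short _http._tcp.myservice.myproject.svc.cluster.local SRV
# Service-name SRV lookup now returns one record per endpoint, with a port of 0:
dig +short myservice.myproject.svc.cluster.local SRV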

@smarterclayton
Contributor

smarterclayton commented Aug 2, 2016

From openshift/openshift-ansible#2227

1.3/3.3 will add init containers, which have security implications if a user has precreated pods with init containers that their policy does not allow. A user could precreate pods, prior to an upgrade, whose pod.alpha.kubernetes.io/init-containers annotation includes privileged init containers, allowing that user to escape their security policy.

During upgrade, admins must do the following to address this vulnerability:

  1. Upgrade their masters (api)

  2. Run this script to delete all pods that use init containers:

    oc get pods --all-namespaces --template '{{ range .items }}{{ if index .metadata "annotations" "pod.alpha.kubernetes.io/init-containers" }}pods/{{ .metadata.name }} -n {{ .metadata.namespace }}{{ "\n" }}{{ end }}{{ end }}' | xargs -L 1 oc delete
    
  3. Upgrade the rest of the cluster

This ensures that any pods with init containers that were created before the policy was enforced have to pass through the security mechanisms again when they are recreated.
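
As a sanity check before moving on to step 3, re-running just the query half of the script above (everything before the pipe to xargs) should produce no output:

oc get pods --all-namespaces --template '{{ range .items }}{{ if index .metadata "annotations" "pod.alpha.kubernetes.io/init-containers" }}pods/{{ .metadata.name }} -n {{ .metadata.namespace }}{{ "\n" }}{{ end }}{{ end }}'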

@rajatchopra

https://bugzilla.redhat.com/show_bug.cgi?id=1371826

We broke backward compatibility of the routing template structures. We have a document explaining what happened, how you could be affected, and what to do when upgrading from 3.2 to 3.3.

Can this doc (https://github.com/rajatchopra/routing_data_structure_changes/blob/master/README.md) be put into the docs in a suitable place? Please guide me to the correct place for it and I will put up the PR for openshift-docs 3.3.
attn: @adellape
cc: @eparis @danmcp

@adellape
Contributor Author

https://bugzilla.redhat.com/show_bug.cgi?id=1371826

We broke backward compatibility of the routing template structures. We have a document explaining what happened, how you could be affected, and what to do when upgrading from 3.2 to 3.3.

^ Was handled via #2935.

@adellape adellape modified the milestones: OCP 3.3 GA, Future Release Oct 19, 2016