TELCODOCS-368: Deploying an OCP cluster manually without Assisted Installer (regardless of use case) #38068

Closed
6 changes: 3 additions & 3 deletions _topic_map.yml
@@ -209,13 +209,13 @@ Topics:
File: installing-bare-metal-network-customizations
- Name: Installing a user-provisioned bare metal cluster on a restricted network
File: installing-restricted-networks-bare-metal
- Name: Installing on a single node
- Name: Installing Single Node OpenShift
Dir: installing_sno
Distros: openshift-origin,openshift-enterprise
Topics:
- Name: Preparing to install on a single node
- Name: Preparing to install SNO
File: install-sno-preparing-to-install-sno
- Name: Installing on a single node
- Name: Installing SNO
File: install-sno-installing-sno
- Name: Deploying installer-provisioned clusters on bare metal
Dir: installing_bare_metal_ipi
12 changes: 9 additions & 3 deletions installing/installing_sno/install-sno-installing-sno.adoc
@@ -1,10 +1,16 @@
[id="installing-sno"]
= Installing on a single node
:context: install-sno-installing-sno
= Installing SNO
:context: install-sno-installing-sno-with-the-assisted-installer
include::modules/common-attributes.adoc[]

toc::[]

include::modules/install-sno-generate-the-discovery-iso.adoc[leveloffset=+1]
include::modules/install-sno-generate-the-discovery-iso-with-the-assisted-installer.adoc[leveloffset=+1]

include::modules/install-sno-generate-the-discovery-iso-manually.adoc[leveloffset=+1]

include::modules/install-sno-installing-with-usb-media.adoc[leveloffset=+1]

include::modules/install-sno-monitoring-the-installation-with-the-assisted-installer.adoc[leveloffset=+1]

include::modules/install-sno-monitoring-the-installation-manually.adoc[leveloffset=+1]
installing/installing_sno/install-sno-preparing-to-install-sno.adoc
@@ -5,6 +5,6 @@ include::modules/common-attributes.adoc[]

toc::[]

include::modules/install-sno-about-installing-on-a-single-node.adoc[leveloffset=+1]
include::modules/install-sno-about-installing-single-node-openshift.adoc[leveloffset=+1]

include::modules/install-sno-requirements-for-installing-on-a-single-node.adoc[leveloffset=+1]
10 changes: 0 additions & 10 deletions modules/install-sno-about-installing-on-a-single-node.adoc

This file was deleted.

modules/install-sno-about-installing-single-node-openshift.adoc
@@ -0,0 +1,8 @@
// This is included in the following assemblies:
//
// installing_sno/install-sno-preparing-to-install-sno.adoc

[id="install-sno-about-installing-single-node-openshift_{context}"]
= About installing Single Node OpenShift

You can create a single-node cluster with standard installation methods. Single Node OpenShift (SNO) is a specialized installation that requires the creation of a dedicated ignition configuration ISO. The primary use case is edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff of installing on a single node is the lack of high availability.
152 changes: 152 additions & 0 deletions modules/install-sno-generate-the-discovery-iso-manually.adoc
@@ -0,0 +1,152 @@
// This is included in the following assemblies:
//
// installing_sno/install-sno-installing-sno.adoc

[id="generate-the-discovery-iso-manually_{context}"]
= Generate the discovery ISO manually

Installing {product-title} on a single node requires a discovery ISO, which you can generate with the following procedure.

.Procedure

. Download the {product-title} client (`oc`) and make it available for use by entering the following commands:
+
[source,terminal]
----
$ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz > oc.tar.gz
----
+
[source,terminal]
----
$ tar zxf oc.tar.gz
----
+
[source,terminal]
----
$ chmod +x oc
----

. Set the {product-title} version:
+
[source,terminal]
----
$ OCP_VERSION=<ocp_version> <1>
----
+
<1> Replace `<ocp_version>` with the current version, for example, `latest-{product-version}`.

. Download the {product-title} installer and make it available for use:
+
[source,terminal]
----
$ curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$OCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz
----
+
[source,terminal]
----
$ tar zxvf openshift-install-linux.tar.gz
----
+
[source,terminal]
----
$ chmod +x openshift-install
----

. Retrieve the {op-system-first} ISO:
+
[source,terminal]
----
$ ISO_URL=$(openshift-install coreos print-stream-json | \
  jq -r .architectures.x86_64.artifacts.metal.formats.iso.disk.location)
----

Contributor: We have guidance restricting our use of jq commands: "Do not use jq in commands (unless it is truly required), because this requires users to install the jq tool. Oftentimes, the same or similar result can be accomplished using jsonpath for oc commands." Is there an alternate command that would work here?

Contributor Author: I left it in for now. Need to hear from QE and engineering.
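
A jq-free alternative, in the spirit of the review comment above (a sketch; it assumes the `x86_64` metal ISO location is the only quoted URL in the stream JSON that matches all three filters):

[source,terminal]
----
$ ISO_URL=$(openshift-install coreos print-stream-json | \
  grep location | grep x86_64 | grep iso | cut -d\" -f4)
----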
+
[source,terminal]
----
$ curl $ISO_URL > rhcos-live.x86_64.iso
----

. Prepare the `install-config.yaml` file:
+
[source,yaml]
----
apiVersion: v1
baseDomain: <domain> <1>
compute:
- name: worker
  replicas: 0 <2>
controlPlane:
  name: master
  replicas: 1 <3>
metadata:
  name: <name> <4>
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: <IP_address>/<prefix> <5>
    hostPrefix: <prefix> <6>
  serviceNetwork:
  - <IP_address>/<prefix> <7>
platform:
  none: {}
bootstrapInPlace:
  installationDisk: <path_to_install_drive> <8>
pullSecret: '<pull_secret>' <9>
sshKey: |
  <ssh_key> <10>
----
+
<1> Add the cluster domain name.
+
<2> Set the `compute` replicas to `0`. This makes the control plane node schedulable.
+
<3> Set the `controlPlane` replicas to `1`. In conjunction with the previous `compute` setting, this setting ensures the cluster runs on a single node.
+
<4> Set the `metadata` name to the cluster name.
+
<5> Set the `clusterNetwork` CIDR.
+
<6> Set the `clusterNetwork` host prefix. Pods receive their IP addresses from this pool.
+
<7> Set the `serviceNetwork` CIDR. Services receive their IP addresses from this pool.
+
<8> Set the path to the installation disk drive.
+
<9> Copy the pull secret from link:https://console.redhat.com/openshift/install/pull-secret[Red Hat Hybrid Cloud Console]. In Step 1, click *Download pull secret* and add the contents to this configuration setting.
+
<10> Add the public `ssh` key from the administration host so that you can log in to the cluster after installation.
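+
For illustration, a hypothetical completed `install-config.yaml` (every value below is a placeholder chosen for this sketch, not a recommendation; the network CIDRs shown are the usual {product-title} defaults):
+
[source,yaml]
----
apiVersion: v1
baseDomain: example.com
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 1
metadata:
  name: sno-cluster
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/sda
pullSecret: '<pull_secret>'
sshKey: |
  <ssh_key>
----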

. Generate {product-title} assets:
+
[source,terminal]
----
$ mkdir ocp
----
+
[source,terminal]
----
$ cp install-config.yaml ocp
----
+
[source,terminal]
----
$ openshift-install --dir=ocp create single-node-ignition-config
----

. Embed the ignition data into the {op-system} ISO:
+
[source,terminal]
----
$ alias coreos-installer='podman run --privileged --rm \
-v /dev:/dev -v /run/udev:/run/udev -v $PWD:/data \
-w /data quay.io/coreos/coreos-installer:release'
----
+
[source,terminal]
----
$ cp ocp/bootstrap-in-place-for-live-iso.ign iso.ign
----
+
[source,terminal]
----
$ coreos-installer iso ignition embed -fi iso.ign rhcos-live.x86_64.iso
----
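+
To confirm that the embed succeeded, you can print the embedded configuration back out (a sketch, assuming the same `coreos-installer` alias defined above):
+
[source,terminal]
----
$ coreos-installer iso ignition show rhcos-live.x86_64.iso
----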
modules/install-sno-generate-the-discovery-iso-with-the-assisted-installer.adoc
@@ -2,11 +2,10 @@
//
// installing_sno/install-sno-installing-sno.adoc

[id="generate-the-discovery-iso_{context}"]
[id="generate-the-discovery-iso-with-the-assisted-installer_{context}"]
= Generate the discovery ISO with the Assisted Installer

= Generate the discovery ISO

Installing {product-title} on a single node requires a discovery ISO, which the Assisted Installer (AI) generates with the cluster name, base domain, Secure Shell (SSH) public key, and pull secret.
Installing {product-title} on a single node requires a discovery ISO, which the Assisted Installer (AI) can generate with the cluster name, base domain, Secure Shell (SSH) public key, and pull secret.

.Procedure

16 changes: 0 additions & 16 deletions modules/install-sno-installing-with-usb-media.adoc
@@ -34,19 +34,3 @@ After the ISO is copied to the USB drive, you can use the USB drive to install {
. Change boot drive order to make the USB drive boot first.

. Save and exit the BIOS settings. The server will boot with the discovery image.

. On the administration node, return to the browser and refresh the page. If necessary, reload the link:https://console.redhat.com/openshift/assisted-installer/clusters[Install OpenShift with the Assisted Installer] page and select the cluster name.

. Click *Next* until you reach step 3.

. Select a subnet from the available subnets.

. Keep *Use the same host discovery SSH key* checked. You can change the SSH public key, if necessary.

. Click *Next* to step 4.

. Click *Install cluster*.

. Monitor the installation's progress. Watch the cluster events. After the installation process finishes writing the discovery image to the server's drive, the server will reboot. Remove the USB drive and reset the BIOS to boot to the server's local media rather than the USB drive.

The server will reboot several times, deploying a control plane followed by a worker.
36 changes: 36 additions & 0 deletions modules/install-sno-monitoring-the-installation-manually.adoc
@@ -0,0 +1,36 @@
// This is included in the following assemblies:
//
// installing_sno/install-sno-installing-sno.adoc

[id="monitoring-the-installation-manually_{context}"]
= Monitoring the installation manually

If you created the ISO manually, use this procedure to monitor the installation.

.Procedure

. Monitor the installation:
+
[source,terminal]
----
$ openshift-install --dir=ocp wait-for install-complete
----
+
The server will reboot several times while deploying the control plane.

. Optional: After installation completes, check the environment:
+
[source,terminal]
----
$ export KUBECONFIG=ocp/auth/kubeconfig
----
+
[source,terminal]
----
$ oc get node
----
+
[source,terminal]
----
$ oc get clusterversion
----
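+
Optionally, you can also confirm that the cluster Operators have rolled out (a sketch, assuming the same exported kubeconfig):
+
[source,terminal]
----
$ oc get clusteroperators
----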
modules/install-sno-monitoring-the-installation-with-the-assisted-installer.adoc
@@ -0,0 +1,26 @@
// This is included in the following assemblies:
//
// installing_sno/install-sno-installing-sno.adoc

[id="monitoring-the-installation-with-the-assisted-installer_{context}"]
= Monitoring the installation with the Assisted Installer

If you created the ISO using the Assisted Installer, use this procedure to monitor the installation.

.Procedure

. On the administration host, return to the browser and refresh the page. If necessary, reload the link:https://console.redhat.com/openshift/assisted-installer/clusters[Install OpenShift with the Assisted Installer] page and select the cluster name.

. Click *Next* until you reach step 3, "Networking."

. Select a subnet from the available subnets.

. Keep *Use the same host discovery SSH key* checked. You can change the SSH public key, if necessary.

. Click *Next* to *Review and Create*.

. Click *Install cluster*.

. Monitor the installation's progress. Watch the cluster events. After the installation process finishes writing the discovery image to the server's drive, the server will reboot. Remove the USB drive and reset the BIOS to boot to the server's local media rather than the USB drive.

The server will reboot several times, deploying the control plane.
modules/install-sno-requirements-for-installing-on-a-single-node.adoc
@@ -7,11 +7,11 @@

Installing {product-title} on a single node alleviates some of the requirements for high availability and large-scale clusters. However, you must address the following requirements:

* *Administration node:* You must have an administration node or laptop to access link:https://console.redhat.com/openshift/assisted-installer/clusters[Install OpenShift with the Assisted Installer], to specify the cluster name, to create the USB boot drive, and to monitor the installation.
* *Administration host:* You must have a computer to prepare the ISO, to create the USB boot drive, and to monitor the installation.

* *Production-grade server:* Installing {product-title} on a single node requires a server with sufficient resources to run {product-title} services and a production workload.
+
.Hardware requirements
.Minimum resource requirements
[options="header"]
|====
|Profile|vCPU|Memory|Storage
@@ -34,6 +34,7 @@ The server must have a Baseboard Management Controller (BMC) when booting with v
|====
|Usage|FQDN|Description
|Kubernetes API|`api.<cluster_name>.<base_domain>`| Add a DNS A/AAAA or CNAME record. This record must be resolvable by clients external to the cluster.
|Internal API|`api-int.<cluster_name>.<base_domain>`| Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster.
|Ingress route|`*.apps.<cluster_name>.<base_domain>`| Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by clients external to the cluster.
|Cluster node|`<hostname>.<cluster_name>.<base_domain>`| Add a DNS A/AAAA or CNAME record and DNS PTR record to identify the node.
|====
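
You can sanity-check these records before installing (a sketch using `dig`; the names assume a hypothetical cluster `sno` under `example.com`):

[source,terminal]
----
$ dig +short api.sno.example.com
$ dig +short test.apps.sno.example.com
----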