Deploy and configure the Red Hat OpenShift Container Platform on Google Cloud
This section describes a high-level workflow for setting up and managing OpenShift clusters in GCP and deploying stateful applications on them. It shows how Google Cloud NetApp Volumes and NetApp Cloud Volumes ONTAP storage, with the help of Trident, provide persistent volumes.
The following diagram shows the clusters deployed on GCP and connected to the data center using a VPN.
There are several ways to deploy Red Hat OpenShift Container Platform clusters in GCP. This high-level description of the setup provides documentation links for the specific method that was used. Other methods are covered in the relevant links provided in the resources section.
The setup process can be broken down into the following steps:

- Ensure that you have met all the prerequisites stated here.
- For VPN connectivity between the on-premises environment and GCP, a pfSense VM was created and configured. For instructions, see here.
  - The remote gateway address in pfSense can be configured only after you have created a VPN gateway in Google Cloud Platform.
  - The remote network IP addresses for Phase 2 can be configured only after the OpenShift cluster installation program runs and creates the infrastructure components for the cluster.
  - The VPN in Google Cloud can be configured only after the installation program creates the infrastructure components for the cluster.
- Install the OpenShift cluster on GCP.
  - Obtain the installation program and the pull secret, and deploy the cluster by following the steps provided in the documentation here.
  - The installation creates a VPC network in Google Cloud Platform. It also creates a private zone in Cloud DNS and adds A records.
  - Use the CIDR block address of the VPC network to configure pfSense and establish the VPN connection. Ensure that firewalls are set up correctly.
  - Add A records in the DNS of the on-premises environment using the IP addresses in the A records of the Google Cloud DNS.
  - When the installation of the cluster completes, it provides a kubeconfig file and a username and password to log in to the console of the cluster.
- Add Google Cloud NetApp Volumes to your projects as outlined here.
- Install a Connector in Google Cloud. Refer to the instructions here.
- Deploy a Cloud Volumes ONTAP (CVO) instance in Google Cloud using the Connector. Refer to the instructions here: https://docs.netapp.com/us-en/bluexp-cloud-volumes-ontap/task-getting-started-gcp.html
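To illustrate the cluster deployment step above, here is a minimal sketch of an `install-config.yaml` for a GCP installation. The project ID, base domain, region, and CIDR below are placeholder values, not the ones used in this setup.

```yaml
# Sketch of an install-config.yaml consumed by openshift-install on GCP.
# All identifiers (project, domain, region, CIDR) are example values.
apiVersion: v1
baseDomain: example.com          # DNS base domain; <name>.<baseDomain> becomes the cluster domain
metadata:
  name: ocp-gcp                  # cluster name
platform:
  gcp:
    projectID: my-gcp-project    # GCP project the cluster is installed into
    region: us-central1          # GCP region for the cluster nodes
networking:
  machineNetwork:
    - cidr: 10.0.0.0/16          # VPC CIDR; use this block when configuring the pfSense VPN
pullSecret: '<pull-secret-from-console.redhat.com>'
sshKey: '<ssh-public-key>'
```

Running the installation program with this configuration creates the VPC network and the Cloud DNS private zone described in the steps above.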
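Once the Cloud Volumes ONTAP instance is running, Trident can be pointed at it with a backend definition. The sketch below assumes the `ontap-nas` driver; the LIF addresses, SVM name, and secret name are placeholder values to be replaced with those from your CVO deployment.

```yaml
# Hypothetical TridentBackendConfig for a Cloud Volumes ONTAP instance.
# managementLIF, dataLIF, svm, and the secret name are example values.
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: cvo-nas-backend
  namespace: trident
spec:
  version: 1
  storageDriverName: ontap-nas   # NFS provisioning against the CVO SVM
  managementLIF: 10.0.1.10       # CVO management IP, reachable from the cluster VPC
  dataLIF: 10.0.1.11             # NFS data IP
  svm: svm_cvo
  credentials:
    name: cvo-credentials        # Kubernetes secret holding the SVM username/password
```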
Cloud providers today enable Kubernetes/OpenShift cluster administrators to spawn cluster nodes that are zone based. Nodes can be located in different availability zones within a region, or across regions. To facilitate the provisioning of volumes for workloads in a multi-zone architecture, Trident uses CSI Topology. With the CSI Topology feature, access to volumes can be limited to a subset of nodes, based on regions and availability zones. Refer here for additional details.
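A topology-aware Trident backend can be sketched as follows. The `supportedTopologies` list restricts the backend to nodes in the named region/zones; the driver name, region, and zones are example values, and other driver-specific fields (project, location, credentials) are omitted for brevity.

```yaml
# Sketch of a topology-aware Trident backend: volumes from this backend are
# provisioned only for nodes in the listed region/zones (example values).
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: gcnv-us-central1-backend
  namespace: trident
spec:
  version: 1
  storageDriverName: google-cloud-netapp-volumes
  supportedTopologies:
    - topology.kubernetes.io/region: us-central1
      topology.kubernetes.io/zone: us-central1-a
    - topology.kubernetes.io/region: us-central1
      topology.kubernetes.io/zone: us-central1-b
```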
Kubernetes supports two volume binding modes:

- When `volumeBindingMode` is set to `Immediate` (the default), Trident creates the volume without any topology awareness. Persistent Volumes are created without any dependency on the requesting pod's scheduling requirements.
- When `volumeBindingMode` is set to `WaitForFirstConsumer`, the creation and binding of a Persistent Volume for a PVC is delayed until a pod that uses the PVC is scheduled and created. This way, volumes are created to meet the scheduling constraints enforced by topology requirements.

Trident storage backends can be designed to selectively provision volumes based on availability zones (topology-aware backend). For StorageClasses that use such a backend, a volume is created only if requested by an application scheduled in a supported region/zone (topology-aware StorageClass). Refer here for additional details.
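A topology-aware StorageClass pairing with such a backend can be sketched as shown below; the StorageClass name, zone values, and `backendType` parameter are illustrative.

```yaml
# StorageClass that delays binding until a pod is scheduled, then restricts
# provisioning to the listed zones. Names and zone values are examples.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: netapp-topology-aware
provisioner: csi.trident.netapp.io
volumeBindingMode: WaitForFirstConsumer   # create the volume only after a pod using the PVC is scheduled
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - us-central1-a
          - us-central1-b
parameters:
  backendType: ontap-nas
```

With `WaitForFirstConsumer`, the scheduler places the pod first, and Trident then provisions the volume in a zone that satisfies both the pod's constraints and the `allowedTopologies` of the StorageClass.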
Demonstration Video