Architecture

Although Red Hat OpenShift and NetApp Trident backed by NetApp ONTAP do not provide isolation between workloads by default, they offer a wide range of features that can be used to configure multitenancy. To better understand how to design a multitenant solution on a Red Hat OpenShift cluster with NetApp Trident backed by NetApp ONTAP, let us consider an example with a set of requirements and outline the configuration around it.

Let us assume that an organization runs two of its workloads on a Red Hat OpenShift cluster as part of two projects that two different teams are working on. The data for these workloads resides on PVCs that are dynamically provisioned by NetApp Trident on a NetApp ONTAP NAS backend. The organization needs a multitenant design for these two workloads that isolates the resources used by each project so that security and performance are upheld, with the primary focus on the data that serves those applications.

The following figure depicts the multi-tenant solution on Red Hat OpenShift cluster with NetApp Trident backed by NetApp ONTAP.

Multi-tenancy on Red Hat OpenShift cluster with NetApp Trident backed by NetApp ONTAP

Technology requirements

  1. NetApp ONTAP storage cluster

  2. Red Hat OpenShift cluster

  3. NetApp Trident

Red Hat OpenShift – Cluster resources

From the Red Hat OpenShift cluster's point of view, the top-level resource to start with is the project. An OpenShift project can be viewed as a cluster resource that divides the whole OpenShift cluster into multiple virtual clusters. Hence, isolation at the project level provides a base for configuring multitenancy.
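As a minimal sketch, a cluster administrator could create one project per team, typically with oc new-project. The namespace manifests below are an equivalent illustration; the names project-a and project-b are placeholders used throughout this example.

```yaml
# Minimal sketch: one namespace (project) per workload/team.
# The names "project-a" and "project-b" are placeholders for this example.
apiVersion: v1
kind: Namespace
metadata:
  name: project-a
---
apiVersion: v1
kind: Namespace
metadata:
  name: project-b
```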

Next, configure RBAC in the cluster. The best practice is to have all the developers working on a single project/workload configured into a single user group in the identity provider (IdP). Red Hat OpenShift allows IdP integration and user group synchronization, so users and groups from the IdP can be imported into the cluster. This lets cluster administrators restrict access to the cluster resources dedicated to a project to the user group(s) working on that project, preventing unauthorized access to any cluster resources. To learn more about IdP integration with Red Hat OpenShift, refer to the documentation here.
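As a sketch, the following RoleBinding grants an IdP-synchronized group (assumed here to be named team-a) the built-in admin cluster role scoped to project-a only; the group and project names are placeholders for this example.

```yaml
# Minimal sketch: bind the IdP-synced group "team-a" (placeholder name)
# to the built-in "admin" cluster role, scoped to the "project-a" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-admins
  namespace: project-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: team-a
```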

NetApp ONTAP

It is important to isolate the shared storage that serves as the persistent storage provider for the Red Hat OpenShift cluster, so that the volumes created on the storage for each project appear to the hosts as if they were created on separate storage. To do this, create as many SVMs (storage virtual machines) on NetApp ONTAP as there are projects/workloads, and dedicate each SVM to a workload.

NetApp Trident

Once different SVMs have been created on NetApp ONTAP for the different projects, map each SVM to a separate Trident backend.
The backend configuration on Trident drives the allocation of persistent storage to OpenShift cluster resources; at a minimum, it requires the SVM to map to and the protocol driver for the backend. Optionally, it allows us to define how volumes are provisioned on the storage and to set limits on volume sizes, aggregate usage, and so on. Details of defining the Trident backend for NetApp ONTAP can be found here.
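As a sketch, a backend for one project's SVM could be defined with a TridentBackendConfig object similar to the following; the SVM name, management LIF address, backend name, and credentials secret are placeholders for this example.

```yaml
# Minimal sketch: one Trident backend per project, each mapped to its own SVM.
# The SVM name, management LIF, backend name, and credentials secret are placeholders.
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: backend-project-a
  namespace: trident
spec:
  version: 1
  storageDriverName: ontap-nas        # NAS protocol driver for this example
  backendName: ontap-nas-project-a
  managementLIF: 192.168.0.101        # placeholder SVM management LIF
  svm: svm-project-a                  # SVM dedicated to project-a
  credentials:
    name: backend-project-a-secret    # secret holding the SVM credentials
```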

Red Hat OpenShift – storage resources

After configuring the Trident backends, the next step is to configure StorageClasses. Configure as many storage classes as there are backends, giving each storage class access to provision volumes on only one backend. We can map a StorageClass to a particular Trident backend by using the storagePools parameter when defining the storage class. The details for defining a storage class can be found here. Thus, there is a one-to-one mapping from StorageClass to Trident backend, which points back to one SVM. This ensures that all storage claims through the StorageClass assigned to a project are served only by the SVM dedicated to that project.
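A sketch of such a storage class follows, assuming the backend name ontap-nas-project-a from the previous example and a placeholder aggregate named aggr1.

```yaml
# Minimal sketch: a StorageClass tied to a single Trident backend through storagePools.
# "ontap-nas-project-a" and "aggr1" are placeholder backend and aggregate names.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-project-a
provisioner: csi.trident.netapp.io
parameters:
  storagePools: "ontap-nas-project-a:aggr1"   # restrict provisioning to this backend's pool
allowVolumeExpansion: true
```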

But since storage classes are not namespaced resources, how do we ensure that storage claims against one project's storage class by pods in another namespace/project get rejected? The answer is to use ResourceQuotas. ResourceQuotas are objects that control the total usage of resources per project. They can limit the number as well as the total amount of resources that can be consumed by objects in the project. Almost all the resources of a project can be limited using ResourceQuotas, and using them efficiently can help organizations cut costs and outages caused by overprovisioning or overconsumption of resources. Refer to the documentation here for more information.

For this use case, we need to prevent the pods in a particular project from claiming storage from storage classes that are not dedicated to their project. To do that, we limit the persistent volume claims against the other storage classes by setting <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims to 0. In addition, a cluster administrator must ensure that the developers in a project do not have access to modify the ResourceQuotas.
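As a sketch, the following ResourceQuota in project-a blocks claims against the other project's storage class; sc-project-b is a placeholder name consistent with the earlier examples.

```yaml
# Minimal sketch: in project-a, forbid PVCs against project-b's storage class.
# "sc-project-b" is the placeholder name of the other project's StorageClass.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: deny-other-storageclass
  namespace: project-a
spec:
  hard:
    sc-project-b.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
```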