Workflows: Red Hat OpenShift Virtualization with NetApp ONTAP
This section covers how to migrate a virtual machine from VMware to an OpenShift cluster using the Red Hat Migration Toolkit for Virtualization (MTV).
Migration of VM from VMware to OpenShift Virtualization using Migration Toolkit for Virtualization
In this section, we will see how to use the Migration Toolkit for Virtualization (MTV) to migrate virtual machines from VMware to OpenShift Virtualization running on OpenShift Container Platform and integrated with NetApp ONTAP storage using Trident.
The following video shows a demonstration of the migration of a RHEL VM from VMware to OpenShift Virtualization using the ontap-san storage class for persistent storage.
The following diagram shows a high level view of the migration of a VM from VMware to Red Hat OpenShift Virtualization.
Prerequisites for the sample migration
On VMware
- A RHEL 9 VM using rhel 9.3 was installed with the following configuration:
  - CPU: 2, Memory: 20 GB, Hard disk: 20 GB
  - User credentials: root user and an admin user
- After the VM was ready, the postgresql server was installed.
- The postgresql server was started and enabled to start on boot:
  systemctl start postgresql.service
  systemctl enable postgresql.service
  This ensures that the server can start in the VM in OpenShift Virtualization after migration.
- Two databases, one table, and one row in the table were added; a sketch of these steps follows this list. Refer here for the instructions for installing the postgresql server on RHEL and creating database and table entries.
- Ensure that you start the postgresql server and enable the service to start at boot.
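The following is a minimal sketch of seeding the sample data on the source VM. The database, table, and column names are illustrative placeholders; the linked instructions remain the authoritative reference.
sudo -u postgres psql <<'EOF'
-- create two sample databases
CREATE DATABASE db1;
CREATE DATABASE db2;
-- connect to the first database, then add one table with one row
\c db1
CREATE TABLE migration_test (id INT PRIMARY KEY, note TEXT);
INSERT INTO migration_test VALUES (1, 'created before migration');
EOF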
On OpenShift Cluster
The following installations were completed before installing MTV:
- OpenShift Cluster 4.13.34
- Multipath on the cluster nodes enabled for iSCSI (for the ontap-san storage class). See the provided yaml to create a daemon set that enables iSCSI on each node in the cluster.
- Trident backend and storage class for ONTAP SAN using iSCSI. See the provided yaml files for the Trident backend and storage class.
To install iSCSI and multipath on the OpenShift Cluster nodes, use the yaml file given below.
Preparing the cluster nodes for iSCSI
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: trident
  name: trident-iscsi-init
  labels:
    name: trident-iscsi-init
spec:
  selector:
    matchLabels:
      name: trident-iscsi-init
  template:
    metadata:
      labels:
        name: trident-iscsi-init
    spec:
      hostNetwork: true
      hostPID: true
      serviceAccount: trident-node-linux
      initContainers:
      - name: init-node
        command:
          - nsenter
          - --mount=/proc/1/ns/mnt
          - --
          - sh
          - -c
        args: ["$(STARTUP_SCRIPT)"]
        image: alpine:3.7
        env:
        - name: STARTUP_SCRIPT
          value: |
            #! /bin/bash
            # nsenter runs this script in the host's mount namespace,
            # so the packages are installed on the node itself
            sudo yum install -y lsscsi iscsi-initiator-utils sg3_utils device-mapper-multipath
            rpm -q iscsi-initiator-utils
            # set iSCSI session scanning to manual, as required by Trident
            sudo sed -i 's/^\(node.session.scan\).*/\1 = manual/' /etc/iscsi/iscsid.conf
            cat /etc/iscsi/initiatorname.iscsi
            sudo mpathconf --enable --with_multipathd y --find_multipaths n
            sudo systemctl enable --now iscsid multipathd
            sudo systemctl enable --now iscsi
        securityContext:
          privileged: true
      containers:
      - name: wait
        image: k8s.gcr.io/pause:3.1
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
  updateStrategy:
    type: RollingUpdate
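Assuming the manifest above is saved as trident-iscsi-init.yaml (a placeholder file name), you can apply it and confirm that a pod is running on each node:
oc apply -f trident-iscsi-init.yaml
oc get pods -n trident -l name=trident-iscsi-init -o wide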
Use the following yaml file to create the Trident backend configuration for using ONTAP SAN storage:
Trident backend for iSCSI
apiVersion: v1
kind: Secret
metadata:
  name: backend-tbc-ontap-san-secret
type: Opaque
stringData:
  username: <username>
  password: <password>
---
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: ontap-san
spec:
  version: 1
  storageDriverName: ontap-san
  managementLIF: <management LIF>
  backendName: ontap-san
  svm: <SVM name>
  credentials:
    name: backend-tbc-ontap-san-secret
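A TridentBackendConfig is namespaced, so apply it in the namespace where Trident runs (trident in this setup; the file name below is a placeholder), then confirm that the backend reports a Bound phase:
oc apply -f backend-ontap-san.yaml -n trident
oc get tridentbackendconfig -n trident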
Use the following yaml file to create the Trident storage class configuration for using ONTAP SAN storage:
Trident storage class for iSCSI
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-san
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-san"
  media: "ssd"
  provisioningType: "thin"
  snapshots: "true"
allowVolumeExpansion: true
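Storage classes are cluster-scoped; after applying the manifest (saved here under a placeholder file name), the new class should be listed alongside any existing ones:
oc apply -f sc-ontap-san.yaml
oc get storageclass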
Install MTV
Now you can install the Migration Toolkit for Virtualization (MTV). Refer to the instructions provided here for help with the installation.
The Migration Toolkit for Virtualization (MTV) user interface is integrated into the OpenShift web console.
You can refer here to start using the user interface for various tasks.
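Once the operator is installed, a quick sanity check is to verify that the MTV pods are running; this assumes the default openshift-mtv namespace used by the installation instructions:
oc get pods -n openshift-mtv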
Create Source Provider
In order to migrate the RHEL VM from VMware to OpenShift Virtualization, you need to first create the source provider for VMware. Refer to the instructions here to create the source provider.
You need the following to create your VMware source provider:
- vCenter URL
- vCenter credentials
- vCenter server thumbprint
- VDDK image in a repository
Sample source provider creation:
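The provider is typically created through the MTV user interface. As a rough CLI equivalent, a Provider resource along the following lines can be applied; the fields below follow MTV's forklift.konveyor.io/v1beta1 API as best understood and should be verified against your installed CRDs (names and the VDDK image reference are placeholders):
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vmware-source
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://<vCenter server>/sdk
  secret:
    name: <secret with vCenter credentials and thumbprint>
    namespace: openshift-mtv
  settings:
    vddkInitImage: <registry>/vddk:<tag>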
The Migration Toolkit for Virtualization (MTV) uses the VMware Virtual Disk Development Kit (VDDK) SDK to accelerate transferring virtual disks from VMware vSphere. Therefore, creating a VDDK image, although optional, is highly recommended. To make use of this feature, you download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry.
Follow the instructions provided here to create and push the VDDK image to a registry accessible from the OpenShift Cluster.
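As a sketch of what those instructions involve (the archive name, registry, and tag are placeholders), building and pushing the VDDK image looks roughly like this:
# extract the VDDK distribution downloaded from VMware
tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
# wrap it in an image that copies the library into /opt at startup
cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
RUN mkdir -p /opt
ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
EOF
podman build . -t <registry>/vddk:<tag>
podman push <registry>/vddk:<tag>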
Create Destination Provider
The host cluster is automatically added as the OpenShift Virtualization destination provider.
Create Migration Plan
Follow the instructions provided here to create a migration plan.
While creating a plan, you need to create the following if not already created:
- A network mapping to map the source network to the target network.
- A storage mapping to map the source datastore to the target storage class. For this, you can choose the ontap-san storage class; a sketch of such a mapping follows this list.
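For reference, a storage mapping created in the UI corresponds to a StorageMap resource roughly like the one below; the fields follow MTV's forklift.konveyor.io/v1beta1 API as best understood and should be verified against your installed CRDs (provider and datastore names are placeholders):
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: vmware-to-ontap-san
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: <vmware source provider>
      namespace: openshift-mtv
    destination:
      name: host
      namespace: openshift-mtv
  map:
    - source:
        name: <source datastore>
      destination:
        storageClass: ontap-san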
Once the migration plan is created, the status of the plan should show Ready and you should now be able to Start the plan.
Clicking on Start will run through a sequence of steps to complete the migration of the VM.
When all steps are completed, you can see the migrated VMs by clicking on the virtual machines under Virtualization in the left-side navigation menu.
Instructions to access the virtual machines are provided here.
You can log into the virtual machine and verify the contents of the postgresql databases. The databases, tables, and the entries in the tables should be the same as what was created on the source VM.
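A quick way to check this from the cluster, assuming virtctl is installed (the VM, namespace, database, and table names are placeholders):
# open a console session on the migrated VM
virtctl console <vm name> -n <namespace>
# inside the guest, list the databases and inspect the sample table
sudo -u postgres psql -c '\l'
sudo -u postgres psql -d <database> -c 'SELECT * FROM <table>;'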