Kubernetes cluster requirements for NetApp Disaster Recovery
Before configuring NetApp Disaster Recovery for Kubernetes clusters, you must prepare each Kubernetes cluster. Protection typically involves a source cluster and a separate destination cluster, so you must complete these procedures on each cluster in the pair.
Prerequisites
Before you begin, ensure that your Kubernetes clusters and ONTAP clusters are deployed and accessible.
Disaster Recovery supports any version of Kubernetes currently supported by Trident Protect.
Kubernetes clusters
For each Kubernetes cluster, ensure:
- You have administrator kubectl access to each Kubernetes cluster.
- Helm 3 is available where you run install commands.
ONTAP requirements
For each ONTAP cluster, ensure you've configured the following resources:
- Management LIF - used by Trident for management API access
- Data (NFS) LIF - used for NFS traffic to volumes
- SVM name - the storage VM hosting the volumes
- Credentials - the account Trident will use (generally admin or an SVM-scoped account)
- Worker nodes must be able to reach ONTAP management and data LIFs.
- If you use autoExportPolicy with CIDR restrictions, include your node subnets.
Install NetApp Trident CSI
If you've already installed Trident, verify the installation with the command kubectl get pods -n trident. If the installation is successful, you see the Trident controller pods, the node pods (DaemonSet), and the operator pod in the Running state after a few minutes.
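For reference, a healthy installation looks similar to the following. The pod names, hash suffixes, and ready counts here are illustrative and vary by Trident version:

kubectl get pods -n trident
NAME                                  READY   STATUS    RESTARTS   AGE
trident-controller-7d466bf5c7-v4cpw   6/6     Running   0          5m
trident-node-linux-mr6zc              2/2     Running   0          5m
trident-operator-766f7b8658-ldzsv     1/1     Running   0          5m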
Configure ONTAP backend and Trident StorageClass
Create a Kubernetes secret
Create a Kubernetes secret that holds the ONTAP credentials in the namespace where you'll create the backend (typically trident):
kubectl create secret generic trident-ontap-secret -n <namespace> \
--from-literal=username=<adminOrOtherUsername> \
--from-literal=password='<YOUR_ONTAP_PASSWORD>'
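You can confirm that the secret was created:

kubectl get secret trident-ontap-secret -n <namespace>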
Create the TridentBackendConfig
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: <name>
  namespace: trident
spec:
  version: 1
  backendName: <name>
  storageDriverName: <driverName>   # for example, ontap-nas for NFS
  managementLIF: <MANAGEMENT_LIF_IP>
  dataLIF: <DATA_LIF_IP>
  svm: <SVM_NAME>
  autoExportPolicy: true
  autoExportCIDRs:
    - 0.0.0.0/0
  credentials:
    name: trident-ontap-secret
You can verify the configuration with the command kubectl get TridentBackendConfig -n trident. If the configuration was successful, the phase displays as Bound and the status displays as Success. If the status is Failed, get more details with kubectl describe TridentBackendConfig <name> -n trident, then resolve any issues such as incorrect credentials or network reachability.
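For reference, a healthy backend looks roughly like this (the UUID and exact columns are illustrative and can vary by Trident version):

kubectl get TridentBackendConfig -n trident
NAME     BACKEND NAME   BACKEND UUID                           PHASE   STATUS
<name>   <name>         3b8e4b9f-0a12-4c6e-9d51-1f2a3b4c5d6e   Bound   Success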
Configure the storage class
Create the storage class object. Use the provisioner csi.trident.netapp.io. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <name>
provisioner: csi.trident.netapp.io
parameters:
  backendType: "<type>"       # must match the backend's storageDriverName, for example ontap-nas
  storagePools: "<pool>:.*"   # optional: limits provisioning to matching backend pools
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
You can optionally mark one storage class as the cluster default. You can only designate one default per cluster.
kubectl patch storageclass <name> \
  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
For Trident Protect to replicate your application data, application PVCs must use storage provisioned through Trident-managed ONTAP volumes. You can use this storage class or another one you define, as long as it is configured to use the ONTAP backend through Trident (with the correct storagePools or selectors).
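As an illustration, a PVC that requests storage from this class might look like the following sketch; the PVC name, namespace, size, and access mode are hypothetical placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data        # hypothetical PVC name
  namespace: my-app        # hypothetical application namespace
spec:
  accessModes:
    - ReadWriteMany        # NFS-backed volumes support shared access
  resources:
    requests:
      storage: 10Gi
  storageClassName: <name> # the Trident StorageClass defined above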
Configure volume group snapshot custom resource definitions (CRDs)
Install the snapshot CRDs and the snapshot controller. These components are required for volume snapshots in Trident Protect.
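The snapshot CRDs and controller are maintained in the upstream kubernetes-csi/external-snapshotter project. One common approach is to apply its manifests with kustomize; pin a release branch appropriate for your cluster rather than tracking the default branch:

git clone https://github.com/kubernetes-csi/external-snapshotter
cd external-snapshotter
kubectl kustomize client/config/crd | kubectl create -f -
kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -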
Verify the custom resource definitions with the commands:
kubectl get crd volumesnapshots.snapshot.storage.k8s.io
kubectl get crd volumesnapshotcontents.snapshot.storage.k8s.io
kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io
Configure volume group snapshots
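The patch command below assumes a VolumeSnapshotClass named trident-snapshotclass already exists. If you haven't created one, a minimal sketch looks like this:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: trident-snapshotclass
driver: csi.trident.netapp.io
deletionPolicy: Delete   # see the note on Retain below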
You can optionally designate the volume snapshot class as the cluster default. Use deletionPolicy: Retain if you want the underlying ONTAP snapshots to remain when their Kubernetes snapshot objects are deleted.

kubectl patch volumesnapshotclasses.snapshot.storage.k8s.io trident-snapshotclass \
  -p '{"metadata": {"annotations":{"snapshot.storage.kubernetes.io/is-default-class":"true"}}}' \
  --type=merge
Verification summary
| Check | Command | Expected output |
|---|---|---|
| Trident is running | kubectl get pods -n trident | Controller, node, and operator pods are Running |
| Backend is healthy | kubectl get TridentBackendConfig -n trident | Phase Bound, status Success |
| Storage is exposed | kubectl get storageclass | Output includes your Trident class |
| Snapshot APIs | kubectl get crd and kubectl get volumesnapshotclasses | CRDs exist; lists the Trident driver |
After verifying the status of all resources, deploy or migrate your workloads using your Trident StorageClass. When you add Kubernetes clusters to a site, Disaster Recovery provides instructions to install Trident Protect on a cluster and register it with your Disaster Recovery environment.
Review the instructions to install Trident Protect.
- Create a Trident Protect namespace:
  kubectl create namespace trident-protect
- Create a Kubernetes secret using the client ID and client secret to create the OCCM authentication credentials:
  kubectl create secret generic occmauthcreds --namespace=trident-protect --from-literal=client_id=<clientID> --from-literal=client_secret=<clientSecret>
- Add or update the Helm repo:
  helm repo add --force-update netapp-trident-protect https://netapp.github.io/trident-protect-helm-chart
- Install or upgrade Trident Protect and the Trident Protect Connector:
  helm upgrade --install trident-protect netapp-trident-protect/trident-protect-console \
    --version 100.2605.0-console \
    --namespace trident-protect \
    --set clusterName=<clusterName> \
    --set trident-protect.cbs.accountID=<accountID> \
    --set trident-protect.cbs.agentID=<agentID> \
    --set trident-protect.cbs.proxySecretName=occmauthcreds \
    --set trident-protect.cbs.proxyHostIP=<IPaddress>
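After the Helm release is deployed, you can confirm that the Trident Protect pods reach the Running state (pod names vary by chart version):

kubectl get pods -n trident-protect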