Work with snapshots
You can create Kubernetes VolumeSnapshots of persistent volumes (PVs) to maintain point-in-time copies of Astra Trident volumes. Additionally, you can create a new volume, also known as a clone, from an existing volume snapshot. Volume snapshots are supported by the ontap-nas, ontap-nas-flexgroup, ontap-san, ontap-san-economy, solidfire-san, gcp-cvs, and azure-netapp-files drivers.
You must have an external snapshot controller and Custom Resource Definitions (CRDs). This is the responsibility of the Kubernetes orchestrator (for example: Kubeadm, GKE, OpenShift).
If your Kubernetes distribution does not include the snapshot controller and CRDs, refer to Deploying a volume snapshot controller.
Do not create a snapshot controller if you are creating on-demand volume snapshots in a GKE environment. GKE uses a built-in, hidden snapshot controller.
Step 1: Create a VolumeSnapshotClass
This example creates a volume snapshot class.
cat snap-sc.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
driver: csi.trident.netapp.io
deletionPolicy: Delete
The driver points to the Astra Trident CSI driver. deletionPolicy can be Delete or Retain. When set to Retain, the underlying physical snapshot on the storage cluster is retained even when the VolumeSnapshot object is deleted.
For more information, refer to VolumeSnapshotClass.
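For example, a minimal sketch of a VolumeSnapshotClass that keeps the physical snapshot on the storage cluster after the VolumeSnapshot object is deleted (the class name csi-snapclass-retain is hypothetical):
cat snap-sc-retain.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass-retain   # hypothetical name for illustration
driver: csi.trident.netapp.io
deletionPolicy: Retain         # keep the physical snapshot when the VolumeSnapshot is deleted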
Step 2: Create a snapshot of an existing PVC
This example creates a snapshot of an existing PVC.
cat snap.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pvc1-snap
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: pvc1
In this example, the snapshot is created for a PVC named pvc1, and the name of the snapshot is set to pvc1-snap.
kubectl create -f snap.yaml
volumesnapshot.snapshot.storage.k8s.io/pvc1-snap created

kubectl get volumesnapshots
NAME        AGE
pvc1-snap   50s
This created a VolumeSnapshot object. A VolumeSnapshot is analogous to a PVC and is associated with a VolumeSnapshotContent object that represents the actual snapshot.
You can identify the VolumeSnapshotContent object for the pvc1-snap VolumeSnapshot by describing it.
kubectl describe volumesnapshots pvc1-snap
Name:         pvc1-snap
Namespace:    default
...
Spec:
  Snapshot Class Name:    csi-snapclass
  Snapshot Content Name:  snapcontent-e8d8a0ca-9826-11e9-9807-525400f3f660
  Source:
    API Group:
    Kind:   PersistentVolumeClaim
    Name:   pvc1
Status:
  Creation Time:  2019-06-26T15:27:29Z
  Ready To Use:   true
  Restore Size:   3Gi
...
The Snapshot Content Name identifies the VolumeSnapshotContent object that serves this snapshot. The Ready To Use parameter indicates that the snapshot can be used to create a new PVC.
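If you want to confirm readiness from the command line, a quick check is to read the readyToUse field from the snapshot status (a sketch using kubectl jsonpath output):
kubectl get volumesnapshot pvc1-snap -o jsonpath='{.status.readyToUse}'
true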
Step 3: Create PVCs from VolumeSnapshots
This example creates a PVC using a snapshot.
cat pvc-from-snap.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-from-snap
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: golden
  resources:
    requests:
      storage: 3Gi
  dataSource:
    name: pvc1-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
dataSource shows that the PVC must be created using a VolumeSnapshot named pvc1-snap as the source of the data. This instructs Astra Trident to create a PVC from the snapshot. After the PVC is created, it can be attached to a pod and used just like any other PVC; a pod sketch follows the note below.
The PVC must be created in the same namespace as its dataSource.
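For example, a minimal sketch of a pod that mounts the cloned PVC (the pod name and container image are hypothetical):
cat pod-from-snap.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-from-snap        # hypothetical pod name
spec:
  containers:
    - name: app
      image: busybox         # hypothetical image for illustration
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-from-snap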
Deleting a PV with snapshots
When you delete a persistent volume with associated snapshots, the corresponding Trident volume is updated to a "Deleting" state. Remove the volume snapshots to delete the Astra Trident volume.
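For example, to remove the snapshot created earlier (a sketch using the object names from this page; delete every snapshot associated with the volume):
kubectl delete volumesnapshot pvc1-snap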
Deploying a volume snapshot controller
If your Kubernetes distribution does not include the snapshot controller and CRDs, you can deploy them as follows.
- Create volume snapshot CRDs.
cat snapshot-setup.sh
#!/bin/bash
# Create volume snapshot CRDs
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-6.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-6.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-6.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
- Create the snapshot controller.
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-6.1/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-6.1/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
If necessary, open deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml and update namespace to your namespace.
Recover volume data using snapshots
The snapshot directory is hidden by default to facilitate maximum compatibility of volumes provisioned using the ontap-nas and ontap-nas-economy drivers. Enable the .snapshot directory to recover data from snapshots directly.
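For example, one way to expose the snapshot directory on an existing volume is through the ONTAP CLI (a sketch; vs0 and vol3 are placeholder SVM and volume names matching the restore example below):
cluster1::*> volume modify -vserver vs0 -volume vol3 -snapdir-access true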
Use the volume snapshot restore ONTAP CLI command to restore a volume to the state recorded in a prior snapshot.
cluster1::*> volume snapshot restore -vserver vs0 -volume vol3 -snapshot vol3_snap_archive
When you restore a snapshot copy, the existing volume configuration is overwritten. Changes made to volume data after the snapshot copy was created are lost.