Work with snapshots

Kubernetes volume snapshots of Persistent Volumes (PVs) enable point-in-time copies of volumes. You can create a snapshot of a volume created using Astra Trident, import a snapshot created outside of Astra Trident, create a new volume from an existing snapshot, and recover volume data from snapshots.

Overview

Volume snapshots are supported by the ontap-nas, ontap-nas-flexgroup, ontap-san, ontap-san-economy, solidfire-san, gcp-cvs, and azure-netapp-files drivers.

Before you begin

You must have an external snapshot controller and Custom Resource Definitions (CRDs) to work with snapshots. This is the responsibility of the Kubernetes orchestrator (for example: Kubeadm, GKE, OpenShift).

If your Kubernetes distribution does not include the snapshot controller and CRDs, refer to Deploy a volume snapshot controller.

Note Don't create a snapshot controller if creating on-demand volume snapshots in a GKE environment. GKE uses a built-in, hidden snapshot controller.
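
You can check whether the snapshot CRDs and a snapshot controller are already present before installing anything. The following is a minimal check, assuming kubectl access to the cluster; the controller pod name and namespace vary by distribution.

kubectl get crd | grep snapshot.storage.k8s.io
kubectl get pods --all-namespaces | grep snapshot-controller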

Create a volume snapshot

Steps
  1. Create a VolumeSnapshotClass. For more information, refer to VolumeSnapshotClass.

    • The driver points to the Astra Trident CSI driver.

    • deletionPolicy can be Delete or Retain. When set to Retain, the underlying physical snapshot on the storage cluster is retained even when the VolumeSnapshot object is deleted.

      Example
      cat snap-sc.yaml
      apiVersion: snapshot.storage.k8s.io/v1
      kind: VolumeSnapshotClass
      metadata:
        name: csi-snapclass
      driver: csi.trident.netapp.io
      deletionPolicy: Delete
  2. Create a snapshot of an existing PVC.

    Examples
    • This example creates a snapshot of an existing PVC.

      cat snap.yaml
      apiVersion: snapshot.storage.k8s.io/v1
      kind: VolumeSnapshot
      metadata:
        name: pvc1-snap
      spec:
        volumeSnapshotClassName: csi-snapclass
        source:
          persistentVolumeClaimName: pvc1
    • Create the VolumeSnapshot object for the PVC named pvc1; the snapshot is named pvc1-snap. A VolumeSnapshot is analogous to a PVC and is associated with a VolumeSnapshotContent object that represents the actual snapshot.

      kubectl create -f snap.yaml
      volumesnapshot.snapshot.storage.k8s.io/pvc1-snap created
      
      kubectl get volumesnapshots
      NAME                   AGE
      pvc1-snap              50s
    • You can identify the VolumeSnapshotContent object for the pvc1-snap VolumeSnapshot by describing it. The Snapshot Content Name identifies the VolumeSnapshotContent object which serves this snapshot. The Ready To Use parameter indicates that the snapshot can be used to create a new PVC.

      kubectl describe volumesnapshots pvc1-snap
      Name:         pvc1-snap
      Namespace:    default
      .
      .
      .
      Spec:
        Snapshot Class Name:    csi-snapclass
        Snapshot Content Name:  snapcontent-e8d8a0ca-9826-11e9-9807-525400f3f660
        Source:
          API Group:
          Kind:       PersistentVolumeClaim
          Name:       pvc1
      Status:
        Creation Time:  2019-06-26T15:27:29Z
        Ready To Use:   true
        Restore Size:   3Gi
      .
      .
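
As a quick follow-up check, you can read the readyToUse status directly with jsonpath output. This is a sketch using the pvc1-snap name from the example above; it should print true once the backend snapshot exists.

kubectl get volumesnapshot pvc1-snap -o jsonpath='{.status.readyToUse}'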

Create a PVC from a volume snapshot

You can use dataSource to create a PVC that uses a VolumeSnapshot as the source of its data. After the PVC is created, it can be attached to a pod and used just like any other PVC.

Warning The PVC will be created in the same backend as the source volume. Refer to KB: Creating a PVC from a Trident PVC Snapshot cannot be created in an alternate backend.

The following example creates the PVC using pvc1-snap as the data source.

cat pvc-from-snap.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-from-snap
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: golden
  resources:
    requests:
      storage: 3Gi
  dataSource:
    name: pvc1-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
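
To use this manifest, create the PVC and confirm that it reaches the Bound status. A minimal sketch; the PVC name and the golden storage class match the example above, and provisioning time varies by backend.

kubectl create -f pvc-from-snap.yaml
kubectl get pvc pvc-from-snap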

Import a volume snapshot

Astra Trident supports the Kubernetes pre-provisioned snapshot process to enable the cluster administrator to create a VolumeSnapshotContent object and import snapshots created outside of Astra Trident.

Before you begin

Astra Trident must have created or imported the snapshot's parent volume.

Steps
  1. Cluster admin: Create a VolumeSnapshotContent object that references the backend snapshot. This initiates the snapshot workflow in Astra Trident.

    • Specify the name of the backend snapshot in annotations as trident.netapp.io/internalSnapshotName: <"backend-snapshot-name">.

    • Specify <name-of-parent-volume-in-trident>/<volume-snapshot-content-name> in snapshotHandle. This is the only information provided to Astra Trident by the external snapshotter in the ListSnapshots call.

      Note The <volumeSnapshotContentName> cannot always match the backend snapshot name due to CR naming constraints.
      Example

      The following example creates a VolumeSnapshotContent object that references backend snapshot snap-01.

      apiVersion: snapshot.storage.k8s.io/v1
      kind: VolumeSnapshotContent
      metadata:
        name: import-snap-content
        annotations:
          trident.netapp.io/internalSnapshotName: "snap-01"  # This is the name of the snapshot on the backend
      spec:
        deletionPolicy: Retain
        driver: csi.trident.netapp.io
        source:
          snapshotHandle: pvc-f71223b5-23b9-4235-bbfe-e269ac7b84b0/import-snap-content # <import PV name or source PV name>/<volume-snapshot-content-name>
  2. Cluster admin: Create the VolumeSnapshot CR that references the VolumeSnapshotContent object. This requests access to use the VolumeSnapshot in a given namespace.

    Example

    The following example creates a VolumeSnapshot CR named import-snap that references the VolumeSnapshotContent named import-snap-content.

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: import-snap
    spec:
      # volumeSnapshotClassName: csi-snapclass (not required for pre-provisioned or imported snapshots)
      source:
        volumeSnapshotContentName: import-snap-content
  3. Internal processing (no action required): The external snapshotter recognizes the newly created VolumeSnapshotContent and runs the ListSnapshots call. Astra Trident creates the TridentSnapshot.

    • The external snapshotter sets readyToUse to true on both the VolumeSnapshotContent and the VolumeSnapshot.

    • Trident returns readyToUse=true.

  4. Any user: Create a PersistentVolumeClaim to reference the new VolumeSnapshot, where the spec.dataSource (or spec.dataSourceRef) name is the VolumeSnapshot name.

    Example

    The following example creates a PVC referencing the VolumeSnapshot named import-snap.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-from-snap
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: simple-sc
      resources:
        requests:
          storage: 1Gi
      dataSource:
        name: import-snap
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
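
You can confirm that the imported snapshot is bound to its content object and ready before consuming it. A minimal sketch using the names from the examples above:

kubectl get volumesnapshot import-snap
kubectl get volumesnapshotcontent import-snap-content
kubectl get volumesnapshot import-snap -o jsonpath='{.status.readyToUse}'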

Recover volume data using snapshots

The snapshot directory is hidden by default to facilitate maximum compatibility of volumes provisioned using the ontap-nas and ontap-nas-economy drivers. Enable the .snapshot directory to recover data from snapshots directly.
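
One way to make the .snapshot directory visible is from the ONTAP CLI on the storage system. The following is a sketch, assuming the same vserver (vs0) and volume (vol3) used in the restore example below; once visible, snapshot contents can be read from .snapshot/<snapshot-name>/ inside the mounted volume.

cluster1::> volume modify -vserver vs0 -volume vol3 -snapdir-access true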

Use the volume snapshot restore ONTAP CLI command to restore a volume to a state recorded in a prior snapshot.

cluster1::*> volume snapshot restore -vserver vs0 -volume vol3 -snapshot vol3_snap_archive
Note When you restore a snapshot copy, the existing volume configuration is overwritten. Changes made to volume data after the snapshot copy was created are lost.
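
If you only need individual files rather than the entire volume, ONTAP also provides a single-file restore from a snapshot. A sketch, assuming the same vserver, volume, and snapshot as above; the file path shown is hypothetical, and the exact -path format is described in the ONTAP volume snapshot restore-file reference.

cluster1::> volume snapshot restore-file -vserver vs0 -volume vol3 -snapshot vol3_snap_archive -path /vol/vol3/myfile.txt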

Delete a PV with associated snapshots

When you delete a Persistent Volume with associated snapshots, the corresponding Trident volume is updated to a "Deleting" state. Remove the volume snapshots to delete the Astra Trident volume.
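
For example, if the PV backing pvc1 from the earlier examples were deleted while pvc1-snap still existed, the Trident volume would remain in the deleting state until the snapshot is removed. A minimal sketch:

# List snapshots still referencing the volume
kubectl get volumesnapshot
# Deleting the snapshot lets Astra Trident complete the volume deletion
kubectl delete volumesnapshot pvc1-snap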

Deploy a volume snapshot controller

If your Kubernetes distribution does not include the snapshot controller and CRDs, you can deploy them as follows.

Steps
  1. Create volume snapshot CRDs.

    cat snapshot-setup.sh
    #!/bin/bash
    # Create volume snapshot CRDs
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-6.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-6.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-6.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
  2. Create the snapshot controller.

    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-6.1/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-6.1/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
    Note If necessary, open deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml and update namespace to your namespace.
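
After applying the manifests, you can verify that the CRDs are registered and the controller pod is running. A minimal check; the controller typically runs in kube-system, so adjust the namespace if you changed it as noted above.

kubectl get crd | grep volumesnapshot
kubectl get pods -n kube-system | grep snapshot-controller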