Backup-to-object policy options in BlueXP backup and recovery
BlueXP backup and recovery enables you to create backup policies with a variety of settings for your on-premises ONTAP and Cloud Volumes ONTAP systems.
NOTE: These policy settings are relevant for backup-to-object storage only. None of these settings affect your snapshot or replication policies.
NOTE: To switch to and from BlueXP backup and recovery workloads, refer to Switch to different BlueXP backup and recovery workloads.
Backup schedule options
BlueXP backup and recovery enables you to create multiple backup policies with unique schedules for each working environment (cluster). You can assign different backup policies to volumes that have different recovery point objectives (RPO).
Each backup policy provides a section for Labels & Retention that you can apply to your backup files. Note that the Snapshot policy applied to the volume must be one of the policies recognized by BlueXP backup and recovery or backup files will not be created.
There are two parts to the schedule: the label and the retention value.
- The label defines how often a backup file is created (or updated) from the volume. You can select among the following types of labels:
  - You can choose one, or a combination of, hourly, daily, weekly, monthly, and yearly timeframes.
  - You can select one of the system-defined policies that provide backup and retention for 3 months, 1 year, or 7 years.
  - If you have created custom backup protection policies on the cluster using ONTAP System Manager or the ONTAP CLI, you can select one of those policies.
- The retention value defines how many backup files for each label (timeframe) are retained. Once the maximum number of backups has been reached for a label, older backups are removed so you always have the most current backups. This also saves you storage costs because obsolete backups don't continue to take up space in the cloud.
  For example, say you create a backup policy that creates 7 weekly and 12 monthly backups (as sketched below):
  - Each week and each month, a backup file is created for the volume.
  - At the 8th week, the first weekly backup is removed and the new weekly backup for the 8th week is added (keeping a maximum of 7 weekly backups).
  - At the 13th month, the first monthly backup is removed and the new monthly backup for the 13th month is added (keeping a maximum of 12 monthly backups).
Yearly backups are deleted automatically from the source system after being transferred to object storage. This default behavior can be changed in the Advanced Settings page for the Working Environment.
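The following is a minimal Python sketch of this label-based pruning behavior, using the 7-weekly/12-monthly example above. It is for illustration only and is not how the service itself is implemented.

```python
from collections import defaultdict, deque

# Retention counts per label, matching the example policy above.
RETENTION = {"weekly": 7, "monthly": 12}

backups = defaultdict(deque)   # label -> backup files, oldest first

def add_backup(label: str, name: str) -> None:
    """Add a new backup file and prune the oldest once the label's retention count is exceeded."""
    backups[label].append(name)
    while len(backups[label]) > RETENTION[label]:
        removed = backups[label].popleft()     # the oldest backup for this label is removed
        print(f"pruned {removed}")

# Week 1 through week 8: the 8th weekly backup pushes out week 1.
for week in range(1, 9):
    add_backup("weekly", f"weekly-{week}")

print(list(backups["weekly"]))                 # weekly-2 ... weekly-8 (7 files kept)
```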
DataLock and Ransomware protection options
BlueXP backup and recovery provides support for DataLock and Ransomware protection for your volume backups. These features enable you to lock your backup files and scan them to detect possible ransomware on the backup files. This is an optional setting that you can define in your backup policies when you want extra protection for your volume backups for a cluster.
Both of these features protect your backup files so that you'll always have a valid backup file to recover data from in case of a ransomware attack attempt on your backups. It's also helpful to meet certain regulatory requirements where backups need to be locked and retained for a certain period of time. When the DataLock and Ransomware Protection option is enabled, the cloud bucket that is provisioned as a part of BlueXP backup and recovery activation will have object locking and object versioning enabled.
This feature does not provide protection for your source volumes; only for the backups of those source volumes. Use some of the anti-ransomware protections provided from ONTAP to protect your source volumes.
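BlueXP backup and recovery enables object locking and object versioning on the provisioned bucket for you during activation. The following is a minimal sketch, assuming an AWS S3 destination and a hypothetical bucket name, of the equivalent bucket configuration; you do not run these calls yourself.

```python
import boto3

s3 = boto3.client("s3")
bucket = "netapp-backup-example"   # hypothetical bucket name

# Object Lock must be enabled when the bucket is created (region configuration
# omitted for brevity). Versioning is turned on automatically for Object Lock
# buckets, but is shown explicitly here for clarity.
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)
```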
What is DataLock
With this feature, you can lock the cloud snapshots replicated via SnapMirror to Cloud and also enable the feature to detect a ransomware attack and recover a consistent copy of the snapshot on the object store. This feature is supported on AWS, Azure, and StorageGRID.
DataLock protects your backup files from being modified or deleted for a certain period of time - also called immutable storage. This functionality uses technology from the object storage provider for "object locking."
Cloud providers use a Retention Until Date (RUD), which is calculated based on the Snapshot Retention Period. The Snapshot Retention Period is calculated based on the label and the retention count defined in the backup policy.
The minimum Snapshot Retention Period is 30 days. Let's look at some examples of how this works:
- If you choose the Daily label with a retention count of 20, the Snapshot Retention Period is 20 days, so the minimum of 30 days applies.
- If you choose the Weekly label with a retention count of 4, the Snapshot Retention Period is 28 days, so the minimum of 30 days applies.
- If you choose the Monthly label with a retention count of 3, the Snapshot Retention Period is 90 days.
- If you choose the Yearly label with a retention count of 1, the Snapshot Retention Period is 365 days.
What is Retention Until Date (RUD) and how is it calculated?
The Retention Until Date (RUD) is determined based on the Snapshot Retention Period. The Retention Until Date is calculated by summing the Snapshot Retention Period and a Buffer.
- The buffer is the Buffer for Transfer Time (3 days) plus the Buffer for Cost Optimization (28 days), for a total of 31 days.
- The minimum Retention Until Date is therefore 30 days + 31 days of buffer = 61 days.
Here are some examples (also illustrated in the sketch after this list):
- If you create a Monthly backup schedule with 12 retentions, your backups are locked for 12 months (plus 31 days) before they are deleted (replaced by the next backup file).
- If you create a backup policy that creates 30 daily, 7 weekly, and 12 monthly backups, there are three locked retention periods:
  - The "30 daily" backups are retained for 61 days (30 days plus the 31-day buffer).
  - The "7 weekly" backups are retained for 11 weeks (7 weeks plus 31 days).
  - The "12 monthly" backups are retained for 12 months (plus 31 days).
- If you create an Hourly backup schedule with 24 retentions, you might expect backups to be locked for 24 hours. However, because that is less than the minimum of 30 days, each backup is locked and retained for 61 days (30 days plus the 31-day buffer).
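To make the arithmetic concrete, here is a minimal Python sketch of how the Retention Until Date could be estimated from a policy's label and retention count. The day counts per label are simplifying assumptions for illustration only, not the service's exact calculation.

```python
from datetime import date, timedelta

MIN_RETENTION_DAYS = 30        # minimum Snapshot Retention Period
BUFFER_DAYS = 3 + 28           # transfer-time buffer + cost-optimization buffer = 31 days

# Approximate days per label; assumed values for illustration only.
DAYS_PER_LABEL = {"hourly": 1 / 24, "daily": 1, "weekly": 7, "monthly": 30, "yearly": 365}

def retention_until_date(label: str, retention_count: int, created: date) -> date:
    """Estimate the DataLock Retention Until Date for one backup file."""
    snapshot_retention = retention_count * DAYS_PER_LABEL[label]
    snapshot_retention = max(snapshot_retention, MIN_RETENTION_DAYS)   # floor of 30 days
    return created + timedelta(days=snapshot_retention + BUFFER_DAYS)

# Examples from the text:
print(retention_until_date("hourly", 24, date(2024, 1, 1)))    # 24 hours -> 30 + 31 = 61 days later
print(retention_until_date("daily", 30, date(2024, 1, 1)))     # 30 + 31 = 61 days later
print(retention_until_date("monthly", 12, date(2024, 1, 1)))   # ~12 months + 31 days later
```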
NOTE: Old backups are deleted after the DataLock Retention Period expires, not after the backup policy retention period.
The DataLock retention setting overrides the policy retention setting from your backup policy. This could affect your storage costs as your backup files will be saved in the object store for a longer period of time.
Enable DataLock and Ransomware protection
You can enable DataLock and Ransomware protection when you create a policy. You cannot enable, modify, or disable the setting after the policy is created.
- When you create a policy, expand the DataLock and Ransomware Protection section.
- Choose one of the following:
  - None: DataLock protection and ransomware protection are disabled.
  - Unlocked: DataLock protection and ransomware protection are enabled. Users with specific permissions can overwrite or delete protected backup files during the retention period.
  - Locked: DataLock protection and ransomware protection are enabled. No users can overwrite or delete protected backup files during the retention period. This satisfies full regulatory compliance.
What is Ransomware protection in NetApp Backup and Recovery
The Ransomware protection option in NetApp Backup and Recovery scans your backup files to look for evidence of a ransomware attack. The detection of ransomware attacks is performed using a checksum comparison. If potential ransomware is identified in a new backup file versus the previous backup file, that newer backup file is replaced by the most recent backup file that does not show any signs of a ransomware attack. (The file that was identified as having a ransomware attack is deleted 1 day after it has been replaced.)
Scans occur in these situations:
- Scans on cloud backup objects are initiated soon after they are transferred to the cloud object storage. The scan is not performed on the backup file when it is first written to cloud storage, but when the next backup file is written.
- Ransomware scans can be initiated when the backup is selected for the restore process.
- Scans can be performed on demand at any time.
How does the recovery process work?
When a ransomware attack is detected, the service uses the Active Data Connector Integrity Checker REST API to start the recovery process. The oldest version of the data objects is the source of truth and is made into the current version as part of the recovery process.
Let's see how this works:
- In the event of a ransomware attack, an attempt is made to overwrite or delete the object in the bucket.
- Because the cloud storage is versioning-enabled, it automatically creates a new version of the backup object. If an object is deleted with versioning turned on, it is marked as deleted but is still retrievable. If an object is overwritten, previous versions are stored and marked.
- When a ransomware scan is initiated, the checksums of both object versions are validated and compared. If the checksums are inconsistent, potential ransomware has been detected.
- The recovery process involves reverting to the last known good copy.
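As a simplified illustration of the comparison step, the sketch below compares the two most recent versions of a backup object and reverts to the older one when they differ. It assumes an AWS S3 bucket with versioning enabled and hypothetical bucket/object names, and uses ETags as a stand-in checksum; it is not the Active Data Connector Integrity Checker itself.

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "netapp-backup-example", "backups/vol1/backup-file"   # hypothetical names

# List all versions of the backup object, newest first.
versions = s3.list_object_versions(Bucket=bucket, Prefix=key).get("Versions", [])
versions.sort(key=lambda v: v["LastModified"], reverse=True)

# Backup objects are never rewritten legitimately, so a second version is suspicious.
if len(versions) >= 2:
    newest, previous = versions[0], versions[1]
    if newest["ETag"] != previous["ETag"]:                # checksum mismatch between versions
        print("Potential ransomware detected; reverting to the last known good copy")
        # Copying the older, known-good version over the object makes it current again.
        s3.copy_object(
            Bucket=bucket,
            Key=key,
            CopySource={"Bucket": bucket, "Key": key, "VersionId": previous["VersionId"]},
        )
```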
Supported working environments and object storage providers
You can enable DataLock and Ransomware protection on ONTAP volumes from the following working environments when using object storage in the following public and private cloud providers. Additional cloud providers will be added in future releases.
| Source Working Environment | Backup File Destination |
| --- | --- |
| Cloud Volumes ONTAP in AWS | Amazon S3 |
| Cloud Volumes ONTAP in Azure | Azure Blob |
| On-premises ONTAP system | Amazon S3, Azure Blob, StorageGRID |
Requirements
- For AWS:
  - Your clusters must be running ONTAP 9.11.1 or greater.
  - The Connector can be deployed in the cloud or on your premises.
  - The following S3 permissions must be part of the IAM role that provides the Connector with permissions. They reside in the "backupS3Policy" section for the resource "arn:aws:s3:::netapp-backup-*" (see the sketch after the permission list):
AWS S3 permissions
- s3:GetObjectVersionTagging
- s3:GetBucketObjectLockConfiguration
- s3:GetObjectVersionAcl
- s3:PutObjectTagging
- s3:DeleteObject
- s3:DeleteObjectTagging
- s3:GetObjectRetention
- s3:DeleteObjectVersionTagging
- s3:PutObject
- s3:GetObject
- s3:PutBucketObjectLockConfiguration
- s3:GetLifecycleConfiguration
- s3:GetBucketTagging
- s3:DeleteObjectVersion
- s3:ListBucketVersions
- s3:ListBucket
- s3:PutBucketTagging
- s3:GetObjectTagging
- s3:PutBucketVersioning
- s3:PutObjectVersionTagging
- s3:GetBucketVersioning
- s3:GetBucketAcl
- s3:BypassGovernanceRetention
- s3:PutObjectRetention
- s3:GetBucketLocation
- s3:GetObjectVersion
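As an illustration of how these permissions fit together, here is a sketch that assembles the statement described above into a standard IAM policy document. The Sid, actions, and resource ARN come from the list; the surrounding structure is generic IAM boilerplate rather than the exact policy BlueXP generates during deployment.

```python
import json

backup_s3_policy = {
    "Sid": "backupS3Policy",
    "Effect": "Allow",
    "Action": [
        "s3:GetObjectVersionTagging", "s3:GetBucketObjectLockConfiguration",
        "s3:GetObjectVersionAcl", "s3:PutObjectTagging", "s3:DeleteObject",
        "s3:DeleteObjectTagging", "s3:GetObjectRetention", "s3:DeleteObjectVersionTagging",
        "s3:PutObject", "s3:GetObject", "s3:PutBucketObjectLockConfiguration",
        "s3:GetLifecycleConfiguration", "s3:GetBucketTagging", "s3:DeleteObjectVersion",
        "s3:ListBucketVersions", "s3:ListBucket", "s3:PutBucketTagging",
        "s3:GetObjectTagging", "s3:PutBucketVersioning", "s3:PutObjectVersionTagging",
        "s3:GetBucketVersioning", "s3:GetBucketAcl", "s3:BypassGovernanceRetention",
        "s3:PutObjectRetention", "s3:GetBucketLocation", "s3:GetObjectVersion",
    ],
    "Resource": ["arn:aws:s3:::netapp-backup-*"],
}

# Print the statement wrapped in a minimal policy document for inspection.
print(json.dumps({"Version": "2012-10-17", "Statement": [backup_s3_policy]}, indent=2))
```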
- For Azure:
  - Your clusters must be running ONTAP 9.12.1 or greater.
  - The Connector can be deployed in the cloud or on your premises.
- For StorageGRID:
  - Your clusters must be running ONTAP 9.11.1 or greater.
  - Your StorageGRID systems must be running 11.6.0.3 or greater.
  - The Connector must be deployed on your premises (it can be installed in a site with or without internet access).
  - The following S3 permissions must be part of the IAM role that provides the Connector with permissions:
StorageGRID S3 permissions
- s3:GetObjectVersionTagging
- s3:GetBucketObjectLockConfiguration
- s3:GetObjectVersionAcl
- s3:PutObjectTagging
- s3:DeleteObject
- s3:DeleteObjectTagging
- s3:GetObjectRetention
- s3:DeleteObjectVersionTagging
- s3:PutObject
- s3:GetObject
- s3:PutBucketObjectLockConfiguration
- s3:GetLifecycleConfiguration
- s3:GetBucketTagging
- s3:DeleteObjectVersion
- s3:ListBucketVersions
- s3:ListBucket
- s3:PutBucketTagging
- s3:GetObjectTagging
- s3:PutBucketVersioning
- s3:PutObjectVersionTagging
- s3:GetBucketVersioning
- s3:GetBucketAcl
- s3:PutObjectRetention
- s3:GetBucketLocation
- s3:GetObjectVersion
Restrictions
- The DataLock and Ransomware protection feature is not available if you have configured archival storage in the backup policy.
- The DataLock option you select when activating BlueXP backup and recovery must be used for all backup policies for that cluster.
- You cannot use multiple DataLock modes on a single cluster.
- If you enable DataLock, all volume backups will be locked. You can't mix locked and non-locked volume backups for a single cluster.
- DataLock and Ransomware protection is applicable for new volume backups using a backup policy with DataLock and Ransomware protection enabled. You can later enable or disable the Ransomware scanning option using the Advanced Settings option.
- FlexGroup volumes can use DataLock and Ransomware protection only when using ONTAP 9.13.1 or greater.
Tips on how to mitigate DataLock costs
You can enable or disable the Ransomware Scan feature while keeping the DataLock feature active. To avoid extra charges, you can disable scheduled ransomware scans. This lets you customize your security settings and avoid incurring costs from the cloud provider.
Even if scheduled ransomware scans are disabled, you can still perform on-demand scans when needed.
You can choose different levels of protection:
- DataLock without ransomware scans: Provides protection for backup data in the destination storage, in either Governance or Compliance mode (see the sketch after this list):
  - Governance mode: Offers administrators the flexibility to overwrite or delete protected data.
  - Compliance mode: Provides complete indelibility until the retention period expires. This helps meet the most stringent data security requirements of highly regulated environments. The data cannot be overwritten or modified during its lifecycle, providing the strongest level of protection for your backup copies.
  Microsoft Azure uses a Lock and Unlock mode instead.
- DataLock with ransomware scans: Provides an additional layer of security for your data. This feature helps detect any attempts to change backup copies. If any attempt is made, a new version of the data is created discreetly. The scan frequency can be changed to 1, 2, 3, 4, 5, 6, or 7 days. If scans are set to every 7 days, the costs decrease significantly.
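For reference, Governance and Compliance correspond to the S3 Object Lock retention modes. The sketch below, assuming an AWS destination and hypothetical bucket/object names, shows how a retention mode and Retention Until Date are applied to a single object; BlueXP backup and recovery applies these settings to your backup files for you.

```python
from datetime import datetime, timezone
import boto3

s3 = boto3.client("s3")

# Lock one backup object until the end of its DataLock retention period.
s3.put_object_retention(
    Bucket="netapp-backup-example",            # hypothetical bucket name
    Key="backups/vol1/backup-file",            # hypothetical object key
    Retention={
        "Mode": "COMPLIANCE",                  # or "GOVERNANCE" for the more flexible mode
        "RetainUntilDate": datetime(2025, 12, 31, tzinfo=timezone.utc),
    },
)
```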
For more tips to mitigate DataLock costs, refer to https://community.netapp.com/t5/Tech-ONTAP-Blogs/Understanding-BlueXP-Backup-and-Recovery-DataLock-and-Ransomware-Feature-TCO/ba-p/453475
Additionally, you can get estimates for the cost associated with DataLock by visiting the BlueXP backup and recovery Total Cost of Ownership (TCO) calculator.
Archival storage options
When using AWS, Azure, or Google cloud storage, you can move older backup files to a less expensive archival storage class or access tier after a certain number of days. You can also choose to send your backup files to archival storage immediately, without first writing them to standard cloud storage: just enter 0 as the "Archive After Days" value to send your backup files directly to archival storage. This can be especially helpful for users who rarely need to access data from cloud backups or users who are replacing a backup-to-tape solution.
Data in archival tiers can't be accessed immediately when needed, and will require a higher retrieval cost, so you'll need to consider how often you may need to restore data from backup files before deciding to archive your backup files.
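For context, tiering of objects to an archive class is typically expressed as a lifecycle rule on the bucket. The sketch below, assuming an AWS destination, a hypothetical bucket name, and the S3 Glacier tier, shows what such a rule looks like with a transition after 0 days (that is, as soon as possible); BlueXP backup and recovery configures archival tiering for you from the "Archive After Days" setting, so you don't create this rule yourself.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="netapp-backup-example",            # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-older-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},      # apply to all backup objects in the bucket
                # Days=0 moves objects to the archive tier as soon as possible;
                # a larger value corresponds to "Archive After Days".
                "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```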
Each backup policy provides a section for Archival Policy that you can apply to your backup files.
- In AWS, backups start in the Standard storage class and transition to the Standard-Infrequent Access storage class after 30 days.
  If your cluster is using ONTAP 9.10.1 or greater, you can tier older backups to either S3 Glacier or S3 Glacier Deep Archive storage. Learn more about AWS archival storage.
  - If you select no archive tier in your first backup policy when activating BlueXP backup and recovery, then S3 Glacier will be your only archive option for future policies.
  - If you select S3 Glacier in your first backup policy, then you can change to the S3 Glacier Deep Archive tier for future backup policies for that cluster.
  - If you select S3 Glacier Deep Archive in your first backup policy, then that tier will be the only archive tier available for future backup policies for that cluster.
- In Azure, backups are associated with the Cool access tier.
  If your cluster is using ONTAP 9.10.1 or greater, you can tier older backups to Azure Archive storage. Learn more about Azure archival storage.
- In GCP, backups are associated with the Standard storage class.
  If your on-prem cluster is using ONTAP 9.12.1 or greater, you can choose to tier older backups to Archive storage in the BlueXP backup and recovery UI after a certain number of days for further cost optimization. Learn more about Google archival storage.
- In StorageGRID, backups are associated with the Standard storage class.
  If your on-prem cluster is using ONTAP 9.12.1 or greater, and your StorageGRID system is using 11.4 or greater, you can archive older backup files to public cloud archival storage:
  - For AWS, you can tier backups to AWS S3 Glacier or S3 Glacier Deep Archive storage. Learn more about AWS archival storage.
  - For Azure, you can tier older backups to Azure Archive storage. Learn more about Azure archival storage.