BlueXP tiering technical FAQ

This FAQ can help if you're just looking for a quick answer to a question.

BlueXP tiering service

The following FAQs relate to how BlueXP tiering works.

What are the benefits of using the BlueXP tiering service?

BlueXP tiering addresses the challenges that come with rapid data growth, providing you with benefits such as:

  • Effortless data center extension to the cloud, providing up to 50x more space

  • Storage optimization, yielding an average storage savings of 70%

  • Reduced total cost of ownership by 30%, on average

  • No need to refactor applications

What kind of data is useful to tier to the cloud?

Essentially, any data that is considered inactive on both primary and secondary storage systems is a good target to move to the cloud. On primary systems, such data can include snapshots, historical records, and finished projects. On secondary systems, this includes all volumes that contain copies of primary data made for DR and backup purposes.

Can I tier data from both NAS volumes and SAN volumes?

Yes, you can tier data from NAS volumes to the public cloud or to private clouds, like StorageGRID. When tiering data that is accessed by SAN protocols, NetApp recommends using private clouds because SAN protocols are more sensitive to connectivity issues than NAS.

What is the definition of inactive data or infrequently used data, and how is that controlled?

Inactive data, also referred to as cold data, is defined as volume blocks (metadata excluded) that have not been accessed for a certain amount of time. That amount of time is determined by a tiering policy attribute named cooling-days.
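
For reference, on clusters managed directly through ONTAP, the policy and its cooling period can be inspected from the CLI. A minimal sketch, assuming a volume named vol1 on SVM svm1 (placeholder names); the cooling-days attribute is exposed at the advanced privilege level as tiering-minimum-cooling-days, so verify the exact parameter name against your ONTAP release:

    set -privilege advanced
    volume show -vserver svm1 -volume vol1 -fields tiering-policy, tiering-minimum-cooling-days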

Will BlueXP tiering retain my storage efficiency savings in the cloud tier?

Yes, the ONTAP volume-level storage efficiencies such as compression, deduplication, and compaction are preserved when moving data to the cloud tier.

What is the difference between FabricPool and BlueXP tiering?

FabricPool is the ONTAP tiering technology that can be self-managed through the ONTAP CLI and System Manager, or managed as-a-service through BlueXP tiering. BlueXP tiering turns FabricPool into a managed service with advanced automation processes, both in ONTAP and in the cloud, providing greater visibility and control over tiering across hybrid and multi-cloud deployments.
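
For comparison, a self-managed FabricPool setup through the ONTAP CLI typically involves defining an object store, attaching it to an aggregate, and assigning a tiering policy to a volume. The sketch below uses placeholder names (mystore, my-tiering-bucket, aggr1, svm1, vol1) and AWS S3 as an example provider; BlueXP tiering automates these steps for you.

    Define the object store (AWS S3 shown as an example):
    storage aggregate object-store config create -object-store-name mystore -provider-type AWS_S3 -server s3.amazonaws.com -container-name my-tiering-bucket -access-key <access-key> -secret-password <secret-key>

    Attach the object store to an aggregate as its cloud tier:
    storage aggregate object-store attach -aggregate aggr1 -object-store-name mystore

    Assign a tiering policy to a volume on that aggregate:
    volume modify -vserver svm1 -volume vol1 -tiering-policy auto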

Can the data tiered to the cloud be used for disaster recovery or for backup/archive?

No. Since the volume's metadata never leaves the performance tier, the data stored in object storage cannot be accessed directly.

However, BlueXP tiering can be used to achieve cost-effective backup and DR by enabling it on secondary systems and SnapMirror destination volumes (DP volumes), to tier off all of the data (metadata excluded), thus reducing your data center footprint and TCO.

Is BlueXP tiering applied at the volume or aggregate level?

BlueXP tiering is enabled at the volume level by associating a tiering policy with each volume. Cold data identification is done at the block level.

How does BlueXP tiering determine which blocks to tier to the cloud?

The tiering policy associated with the volume is the mechanism that controls which blocks are tiered and when. The policy defines the type of data blocks (snapshots, user data, or both) and the cooling period. See Volume Tiering Policies for details.

How does BlueXP tiering affect the volume capacity?

BlueXP tiering has no effect on the volume's capacity but rather on the aggregate's performance tier usage.

Does BlueXP tiering enable Inactive Data Reporting?

Yes, BlueXP tiering enables Inactive Data Reporting (IDR) on each aggregate. IDR identifies the amount of inactive data that can be tiered to low-cost object storage.

How long does it take IDR to show information from the moment I start running it?

IDR starts showing information after the configured cooling period has passed. With ONTAP 9.7 and earlier, IDR has a non-adjustable cooling period of 31 days. Starting with ONTAP 9.8, the IDR cooling period can be configured up to 183 days.
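
On aggregates that are not yet tiered, IDR can also be enabled and checked manually from the ONTAP CLI. A minimal sketch with placeholder names (aggr1, svm1, vol1); field names may vary slightly between ONTAP releases, so verify them on your system:

    storage aggregate modify -aggregate aggr1 -is-inactive-data-reporting-enabled true
    volume show -vserver svm1 -volume vol1 -fields performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent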

Licenses and Costs

The following FAQs relate to licensing and costs to use BlueXP tiering.

How much does using BlueXP tiering cost?

When tiering cold data to the public cloud:

  • For the pay-as-you-go (PAYGO), usage-based subscription: $0.05 per GB/Month.

  • For the annual (BYOL), term-based subscription: starting from $0.033 per GB/Month.

When tiering cold data to a NetApp StorageGRID system (private cloud) there is no cost.

Can I have both a BYOL and PAYGO license for the same ONTAP cluster?

Yes. BlueXP tiering allows you to use a BYOL license, a PAYGO subscription, or a combination of both.

What happens if I have reached the BYOL capacity limit?

If you reach the BYOL capacity limit, tiering of new cold data stops. All previously tiered data remains accessible, meaning you can retrieve and use it. When retrieved, this data is moved back to the performance tier from the cloud.

However, if you have a PAYGO marketplace subscription to the BlueXP - Deploy & Manage Cloud Data Services, new cold data will continue to be tiered to object storage and you'll be charged on a per-use basis.

Does the BlueXP tiering license include the egress charges from the cloud provider?

No, it does not.

Is rehydration of on-prem systems subject to the egress cost charged by the cloud providers?

Yes. All reads from the public cloud are subject to egress fees.

How can I estimate my cloud charges? Is there a “what if” mode for BlueXP tiering?

The best way to estimate how much a cloud provider will charge for hosting your data is to use their calculators: AWS, Azure and Google Cloud.

Are there any extra charges by the cloud providers for reading/retrieving data from the object storage to the on-prem storage?

Yes. Check Amazon S3 Pricing, Block Blob Pricing, and Cloud Storage Pricing for the charges incurred when reading/retrieving data.

How can I estimate my volumes' savings and get a cold data report before I enable BlueXP tiering?

To get an estimate, simply add your ONTAP cluster to BlueXP and inspect it through the BlueXP tiering Clusters page. Click Calculate potential tiering savings for the cluster to launch the BlueXP tiering TCO calculator to see how much money you can save.

ONTAP

The following questions relate to ONTAP.

Which ONTAP versions does BlueXP tiering support?

BlueXP tiering supports ONTAP version 9.2 and higher.

What types of ONTAP systems are supported?

BlueXP tiering is supported with single-node and high-availability AFF, FAS, and ONTAP Select clusters. Clusters in FabricPool Mirror configurations and MetroCluster configurations are also supported.

Can I tier data from FAS systems with HDDs only?

Yes, starting with ONTAP 9.8 you can tier data from volumes hosted on HDD aggregates.

Can I tier data from an AFF joined to a cluster that has FAS nodes with HDDs?

Yes. BlueXP tiering can be configured to tier volumes hosted on any aggregate. The data tiering configuration is independent of the type of controller used and of whether the cluster is heterogeneous.

What about Cloud Volumes ONTAP?

If you have Cloud Volumes ONTAP systems, you'll find them in the BlueXP tiering Clusters page so you get a full view of data tiering in your hybrid cloud infrastructure. However, Cloud Volumes ONTAP systems are read-only from BlueXP tiering; you can't set up data tiering on them from BlueXP tiering. Instead, you set up tiering for Cloud Volumes ONTAP systems from the working environment in BlueXP.

What other requirements are necessary for my ONTAP clusters?

It depends on where you tier the cold data. Refer to the requirements for the object storage provider you plan to use for more details.

Object storage

The following questions relate to object storage.

Which object storage providers are supported?

BlueXP tiering supports the following object storage providers:

  • Amazon S3

  • Microsoft Azure Blob

  • Google Cloud Storage

  • NetApp StorageGRID

  • S3-compatible object storage (for example, MinIO)

  • IBM Cloud Object Storage (the FabricPool configuration must be done using System Manager or the ONTAP CLI)

Can I use my own bucket/container?

Yes, you can. When you set up data tiering, you have the choice to add a new bucket/container or to select an existing bucket/container.

Which S3 storage classes are supported?

BlueXP tiering supports data tiering to the Standard, Standard-Infrequent Access, One Zone-Infrequent Access, Intelligent Tiering, and Glacier Instant Retrieval storage classes. See Supported S3 storage classes for more details.

Why are Amazon S3 Glacier Flexible and S3 Glacier Deep Archive not supported by BlueXP tiering?

The main reason Amazon S3 Glacier Flexible and S3 Glacier Deep Archive aren't supported is that BlueXP tiering is designed as a high-performance tiering solution, so data must be continuously available and quickly accessible for retrieval. With S3 Glacier Flexible and S3 Glacier Deep Archive, data retrieval can take anywhere from a few minutes to 48 hours.

Can I use other S3-compatible object storage services, such as MinIO, with BlueXP tiering?

Yes, configuring S3-compatible object storage through the Tiering UI is supported for clusters using ONTAP 9.8 and later. See the details here.

Which Azure Blob access tiers are supported?

BlueXP tiering supports data tiering to the Hot or Cool access tiers for your inactive data. See Supported Azure Blob access tiers for more details.

Which storage classes are supported for Google Cloud Storage?

BlueXP tiering supports data tiering to the Standard, Nearline, Coldline, and Archive storage classes. See Supported Google Cloud storage classes for more details.

Does BlueXP tiering support the use of lifecycle management policies?

Yes. You can enable lifecycle management so that BlueXP tiering transitions data from the default storage class/access tier to a more cost-effective tier after a certain number of days. The lifecycle rule is applied to all objects in the selected bucket for Amazon S3 and Google Cloud storage, and to all containers in the selected storage account for Azure Blob.

Does BlueXP tiering use one object store for the entire cluster or one per aggregate?

In a typical configuration, there is one object store for the entire cluster. Starting in August 2022, you can use the Advanced Setup page to add additional object stores for a cluster, and then attach different object stores to different aggregates, or attach two object stores to an aggregate for mirroring.

Can multiple buckets be attached to the same aggregate?

It is possible to attach up to two buckets per aggregate for the purpose of mirroring, where cold data is synchronously tiered to both buckets. The buckets can be from different providers and different locations. Starting in August 2022, you can use the Advanced Setup page to attach two object stores to a single aggregate.
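
Under the hood, this corresponds to the FabricPool mirror capability in ONTAP. As an illustration only, a sketch using placeholder names (aggr1, store-a, store-b); BlueXP tiering performs the equivalent configuration when you set up mirroring from the Advanced Setup page, and command syntax should be verified against your ONTAP release:

    Attach the primary object store, then mirror it with a second one:
    storage aggregate object-store attach -aggregate aggr1 -object-store-name store-a
    storage aggregate object-store mirror -aggregate aggr1 -name store-b
    storage aggregate object-store show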

Can different buckets be attached to different aggregates in the same cluster?

Yes. The general best practice is to attach a single bucket to multiple aggregates. However, when using the public cloud, object storage services have a maximum IOPS limitation, so using multiple buckets should be considered.

What happens with the tiered data when you migrate a volume from one cluster to another?

When migrating a volume from one cluster to another, all the cold data is read from the cloud tier. The write location on the destination cluster depends on whether tiering was enabled and the type of tiering policy used on the source and destination volumes.

What happens with the tiered data when you move a volume from one node to another in the same cluster?

If the destination aggregate does not have an attached cloud tier, data is read from the cloud tier of the source aggregate and written entirely to the local tier of the destination aggregate. If the destination aggregate has an attached cloud tier, data is read from the cloud tier of the source aggregate and first written to the local tier of the destination aggregate, to facilitate quick cutover. Later, based on the tiering policy used, it is written to the cloud tier.

Starting with ONTAP 9.6, if the destination aggregate is using the same cloud tier as the source aggregate, the cold data does not move back to the local tier.

How can I bring my tiered data back on-prem to the performance tier?

Writing data back to the performance tier generally happens on reads and depends on the tiering policy type. Prior to ONTAP 9.8, the entire volume can be written back with a volume move operation. Starting with ONTAP 9.8, the Tiering UI has options to Bring back all data or Bring back active file system. See how to move data back to the performance tier.
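
For reference, the equivalent ONTAP CLI operations look like the following sketch (svm1, vol1, and aggr2 are placeholders). The first command writes a volume back as part of a volume move; the second, available starting with ONTAP 9.8 at the advanced privilege level, promotes tiered data back to the performance tier. Verify the parameters against your ONTAP release:

    volume move start -vserver svm1 -volume vol1 -destination-aggregate aggr2 -tiering-policy none

    set -privilege advanced
    volume modify -vserver svm1 -volume vol1 -tiering-policy none -cloud-retrieval-policy promote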

When replacing an existing AFF/FAS controller with a new one, would the tiered data be migrated back on-prem?

No. During the “head swap” procedure, the only thing that changes is the aggregates' ownership, which is transferred to the new controller without any data movement.

Can I use the cloud provider's console or object storage explorers to look at the data tiered to a bucket? Can I use the data stored in the object storage directly without ONTAP?

No. The objects constructed and tiered to the cloud do not contain a single file but up to 1,024 4KB blocks from multiple files. A volume's metadata always remains on the local tier.

Connectors

The following questions relate to the BlueXP Connector.

What is the Connector?

The Connector is software running on a compute instance either within your cloud account, or on-premises, that enables BlueXP to securely manage cloud resources. To use the BlueXP tiering service, you must deploy a Connector.

Where does the Connector need to be installed?

  • When tiering data to S3, the Connector can reside in an AWS VPC or on your premises.

  • When tiering data to Blob storage, the Connector can reside in an Azure VNet or on your premises.

  • When tiering data to Google Cloud Storage, the Connector must reside in a Google Cloud Platform VPC.

  • When tiering data to StorageGRID or other S3-Compatible storage providers, the Connector must reside on your premises.

Can I deploy the Connector on-premises?

Yes. The Connector software can be downloaded and manually installed on a Linux host in your network. See how to install the Connector on your premises.

Is an account with a cloud service provider required before using BlueXP tiering?

Yes. You must have an account before you can define the object storage that you want to use. An account with a cloud storage provider is also required when setting up the Connector in the cloud on a VPC or VNet.

What are the implications if the Connector fails?

If the Connector fails, only visibility into the tiered environments is affected. All data remains accessible, and newly identified cold data continues to be tiered to object storage automatically.

Tiering policies

What are the available tiering policies?

There are four tiering policies:

  • None: Classifies all data as always hot, preventing any data in the volume from being moved to object storage.

  • Cold Snapshots (Snapshot-only): Only cold snapshot blocks are moved to object storage.

  • Cold User Data and Snapshots (Auto): Both cold snapshot blocks and cold user data blocks are moved to object storage.

  • All User Data (All): Classifies all data as cold, immediately moving the entire volume to object storage.
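
These names map to the ONTAP tiering policy names none, snapshot-only, auto, and all. As an illustration, assigning a policy directly from the ONTAP CLI looks like the following sketch (svm1 and vol1 are placeholders); BlueXP tiering applies the same policies for you when you select volumes in the UI:

    volume modify -vserver svm1 -volume vol1 -tiering-policy snapshot-only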

At which point is my data considered cold?

Since data tiering is done at the block level, a data block is considered cold after it hasn't been accessed for a certain period of time, which is defined by the tiering policy's minimum-cooling-days attribute. The applicable range is 2-63 days with ONTAP 9.7 and earlier, or 2-183 days starting with ONTAP 9.8.

What is the default cooling period for data before it is tiered to the cloud tier?

The default cooling period for the Cold Snapshots policy is 2 days, while the default cooling period for the Cold User Data and Snapshots policy is 31 days. The cooling-days parameter is not applicable to the All tiering policy.
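
For reference, the cooling period can also be adjusted per volume from the ONTAP CLI at the advanced privilege level. A minimal sketch with placeholder names (svm1, vol1), setting a 14-day cooling period; verify the parameter name against your ONTAP release:

    set -privilege advanced
    volume modify -vserver svm1 -volume vol1 -tiering-minimum-cooling-days 14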

Is all the tiered data retrieved from object storage when I do a full backup?

During a full backup, all the cold data is read. Whether that data is written back depends on the tiering policy used. With the All and Cold User Data and Snapshots policies, the cold data is not written back to the performance tier. With the Cold Snapshots policy, cold blocks are retrieved only if an old snapshot is used for the backup.

Can you choose a tiering size per volume?

No. However, you can choose which volumes are eligible for tiering, the type of data to be tiered, and its cooling period. This is done by associating a tiering policy with that volume.

Is the All User Data policy the only option for data protection volumes?

No. Data protection (DP) volumes can be associated with any of the available tiering policies. The type of policy used on the source and destination (DP) volumes determines the write location of the data.

Does resetting the tiering policy of a volume to None rehydrate the cold data or just prevent future cold blocks from being moved to the cloud?

No rehydration takes place when the tiering policy is reset to None; it only prevents new cold blocks from being moved to the cloud tier.

After tiering data to the cloud, can I change the tiering policy?

Yes. The behavior after the change depends on the new associated policy.

What should I do if I want to ensure certain data is not moved to the cloud?

Do not associate a tiering policy with the volume containing that data.

Where is the metadata of the files stored?

A volume's metadata is always stored locally, on the performance tier; it is never tiered to the cloud.

Networking and security

The following questions relate to networking and security.

What are the networking requirements?

  • The ONTAP cluster initiates an HTTPS connection over port 443 to your object storage provider.

    ONTAP reads and writes data to and from object storage. The object storage never initiates a connection; it only responds.

  • For StorageGRID, the ONTAP cluster initiates an HTTPS connection over a user-specified port to StorageGRID (the port is configurable during tiering setup).

  • A Connector needs an outbound HTTPS connection over port 443 to your ONTAP clusters, to the object store, and to the BlueXP tiering service.
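
As an illustration of the connection settings described above, a sketch of defining a StorageGRID target with a custom port over HTTPS from the ONTAP CLI (all names, addresses, and the port number are placeholders; verify parameter names against your ONTAP release):

    storage aggregate object-store config create -object-store-name grid-store -provider-type SGWS -server grid.example.com -port 10443 -ssl-enabled true -container-name tier-bucket -access-key <access-key> -secret-password <secret-key>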

For more details, see the networking requirements for the object storage provider you plan to use.

What tools can I use for monitoring and reporting in order to manage cold data stored in the cloud?

Other than BlueXP tiering, Active IQ Unified Manager and BlueXP digital advisor can be used for monitoring and reporting.

What happens in the case of a network failure between the cluster and the cloud tier?

In case of a network failure, the local performance tier remains online and hot data remains accessible. However, blocks that were already moved to the cloud tier will be inaccessible, and applications will receive an error message when trying to access that data. Once connectivity is restored, all data will be seamlessly accessible.

Is there a network bandwidth recommendation?

The read latency of the underlying FabricPool tiering technology depends on connectivity to the cloud tier. Although tiering works over any bandwidth, placing intercluster LIFs on 10 Gbps ports is recommended to provide adequate performance. There are no bandwidth recommendations or limitations for the Connector.

Additionally, you can throttle the amount of network bandwidth that is used during the transfer of inactive data from the volume to object storage. The Maximum transfer rate setting is available when configuring your cluster for tiering, and afterwards from the Clusters page.

Is there any latency when a user attempts to access tiered data?

Yes. Cloud tiers cannot provide the same latency as the local tier since latency depends on the connectivity. To estimate the latency and throughput of an object store, BlueXP tiering provides a Cloud Performance Test (based on the ONTAP object store profiler) that can be used after the object store is attached and before tiering is set up.
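
The same test can be run directly from the ONTAP CLI with the object store profiler, at the advanced privilege level. A minimal sketch with placeholder names (mystore, node1):

    set -privilege advanced
    storage aggregate object-store profiler start -object-store-name mystore -node node1
    storage aggregate object-store profiler show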

How is my data secured?

AES-256-GCM encryption is maintained on both the performance and cloud tiers. TLS 1.2 encryption is used to encrypt data over the wire as it moves between tiers, and to encrypt communication between the Connector and both the ONTAP cluster and the object store.

Do I need an Ethernet port installed and configured on my AFF?

Yes. An intercluster LIF must be configured on an Ethernet port on each node within an HA pair that hosts volumes with data you plan to tier to the cloud. For more information, see the Requirements section for the cloud provider where you plan to tier data.
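
As an illustration, creating an intercluster LIF from the ONTAP CLI looks like the following sketch (the node, port, and addresses are placeholders; older ONTAP releases use -role intercluster instead of a service policy):

    network interface create -vserver <cluster_name> -lif intercluster1 -service-policy default-intercluster -home-node node1 -home-port e0c -address 10.0.0.10 -netmask 255.255.255.0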