Tiering data from on-premises ONTAP clusters to Google Cloud Storage


Free up space on your on-prem ONTAP clusters by tiering inactive data to Google Cloud Storage.

Quick start

Get started quickly by following these steps, or scroll down to the remaining sections for full details.

Step 1: Prepare to tier data to Google Cloud Storage

You need the following:

  • An on-prem ONTAP cluster that’s running ONTAP 9.6 or later and has an HTTPS connection to Google Cloud Storage. Learn how to discover a cluster.

  • A service account that has the predefined Storage Admin role and storage access keys.

  • A Connector installed in a Google Cloud Platform VPC.

  • Networking for the Connector that enables an outbound HTTPS connection to the ONTAP cluster in your data center, to Google Cloud Storage, and to the Cloud Tiering service.

Step 2: Set up tiering

In BlueXP, select an on-prem working environment, click Enable for the Tiering service, and follow the prompts to tier data to Google Cloud Storage.

Step 3: Set up licensing

After your free trial ends, pay for Cloud Tiering through a pay-as-you-go subscription, an ONTAP Cloud Tiering BYOL license, or a combination of both.


Requirements

Verify support for your ONTAP cluster, set up your networking, and prepare your object storage.

The following image shows each component and the connections that you need to prepare between them:

[Architecture diagram] The Cloud Tiering service connects to the Connector in your cloud provider, the Connector connects to your ONTAP cluster, and the ONTAP cluster connects to object storage in your cloud provider. Active data resides on the ONTAP cluster, while inactive data resides in object storage.

Note Communication between the Connector and Google Cloud Storage is for object storage setup only.

Preparing your ONTAP clusters

Your ONTAP clusters must meet the following requirements when tiering data to Google Cloud Storage.

Supported ONTAP platforms
  • When using ONTAP 9.8 and later: You can tier data from AFF systems, or FAS systems with all-SSD aggregates or all-HDD aggregates.

  • When using ONTAP 9.7 and earlier: You can tier data from AFF systems, or FAS systems with all-SSD aggregates.

Supported ONTAP versions

ONTAP 9.6 or later

Required application access parameter

The cluster admin user must have "console" application access. You can verify this with the ONTAP command security login show: "console" should appear in the Application column for the "admin" user. If it doesn't, use the security login create command to add console application access. See the "security login" commands for details.
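For example, from the ONTAP CLI (the cluster name below is a placeholder and the output is abridged; your columns may differ slightly by ONTAP version):

```
cluster1::> security login show -user-or-group-name admin

Vserver: cluster1
                             Authentication
User/Group Name  Application Method         Role Name
---------------  ----------- -------------- ---------
admin            console     password       admin
admin            http        password       admin
admin            ssh         password       admin

cluster1::> security login create -user-or-group-name admin -application console -authentication-method password -role admin
```

The create command is only needed if "console" is missing from the Application column for the admin user.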

Cluster networking requirements
  • The ONTAP cluster initiates an HTTPS connection over port 443 to Google Cloud Storage.

    ONTAP reads and writes data to and from object storage. The object storage never initiates a connection; it only responds to requests.

    A Google Cloud Interconnect between the ONTAP cluster and Google Cloud Storage isn’t required, but it is the recommended best practice because it provides better performance and lower data transfer charges.

  • An inbound connection is required from the Connector, which resides in a Google Cloud Platform VPC.

    A connection between the cluster and the Cloud Tiering service is not required.

  • An intercluster LIF is required on each ONTAP node that hosts the volumes you want to tier. The LIF must be associated with the IPspace that ONTAP should use to connect to object storage.

    When you set up data tiering, Cloud Tiering prompts you for the IPspace to use. You should choose the IPspace that each LIF is associated with. That might be the "Default" IPspace or a custom IPspace that you created. Learn more about LIFs and IPspaces.
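As a sketch, an intercluster LIF can be created and verified from the ONTAP CLI; the node, port, and address values below are placeholders for your environment:

```
cluster1::> network interface create -vserver cluster1 -lif intercluster_1 -service-policy default-intercluster -home-node cluster1-01 -home-port e0c -address 10.10.1.50 -netmask 255.255.255.0

cluster1::> network interface show -service-policy default-intercluster
```

Repeat the create command for each node that hosts volumes you want to tier, and confirm the LIFs land in the IPspace you plan to select during setup.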

Supported volumes and aggregates

The total number of volumes that Cloud Tiering can tier might be less than the number of volumes on your ONTAP system. That’s because volumes can’t be tiered from some aggregates. Refer to the ONTAP documentation for functionality or features not supported by FabricPool.

Note Cloud Tiering supports FlexGroup volumes. Setup works the same as any other volume.

Discovering an ONTAP cluster

You need to create an on-prem ONTAP working environment in BlueXP before you can start tiering cold data.

Creating or switching Connectors

A Connector is required to tier data to the cloud. When tiering data to Google Cloud Storage, a Connector must be available in a Google Cloud Platform VPC. You’ll either need to create a new Connector or make sure that the currently selected Connector resides in GCP.

Preparing networking for the Connector

Ensure that the Connector has the required networking connections.

  1. Ensure that the VPC where the Connector is installed enables the following connections:

    • An outbound internet connection to the Cloud Tiering service over port 443 (HTTPS)

    • An HTTPS connection over port 443 to Google Cloud Storage

    • An HTTPS connection over port 443 to your ONTAP cluster management LIF

  2. Optional: Enable Private Google Access on the subnet where you plan to deploy the Connector.

    Private Google Access is recommended if you have a direct connection from your ONTAP cluster to the VPC and you want communication between the Connector and Google Cloud Storage to stay in your virtual private network. Note that Private Google Access works with VM instances that have only internal (private) IP addresses (no external IP addresses).
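You can spot-check the outbound HTTPS paths from the Connector host before enabling tiering. The cluster address below is a documentation placeholder; substitute your cluster management LIF:

```shell
# Google Cloud Storage over port 443 (expects an HTTP status code back)
curl -s -o /dev/null -w '%{http_code}\n' https://storage.googleapis.com

# ONTAP cluster management LIF over port 443 (example address)
nc -zv -w 5 203.0.113.10 443
```

These checks only confirm basic reachability; firewall rules for the Cloud Tiering service itself are verified when you enable the service in BlueXP.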

Preparing Google Cloud Storage

When you set up tiering, you need to provide storage access keys for a service account that has the Storage Admin role. A service account enables Cloud Tiering to authenticate and access the Cloud Storage buckets used for data tiering. The keys are required so that Google Cloud Storage can identify who is making the request.

The Cloud Storage buckets must be in a region that supports Cloud Tiering.

Note If you plan to configure Cloud Tiering to transition your tiered data to lower-cost storage classes after a certain number of days, you must not select any life cycle rules when setting up the bucket in your GCP account. Cloud Tiering manages the life cycle transitions.
  1. Create a service account that has the predefined Storage Admin role.

  2. Go to GCP Storage Settings and create access keys for the service account:

    1. Select a project, and click Interoperability. If you haven’t already done so, click Enable interoperability access.

    2. Under Access keys for service accounts, click Create a key for a service account, select the service account that you just created, and click Create Key.

      You’ll need to enter the keys later when you set up Cloud Tiering.
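Steps 1 and 2 can also be done from the command line with the gcloud and gsutil CLIs; the project, service account, and bucket names below are examples, not values from your environment:

```shell
# Step 1: create the service account and grant the predefined Storage Admin role
gcloud iam service-accounts create tiering-sa --project=my-project
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:tiering-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.admin"

# Step 2: create an interoperability (HMAC) access key for the service account
gsutil hmac create tiering-sa@my-project.iam.gserviceaccount.com

# Optional: confirm the bucket has no life cycle rules of its own
gsutil lifecycle get gs://my-tiering-bucket
```

The hmac create command prints the access key and secret; record them, since you'll enter them when you set up Cloud Tiering.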

Tiering inactive data from your first cluster to Google Cloud Storage

After you prepare your Google Cloud environment, start tiering inactive data from your first cluster.

Steps
  1. Select an on-prem cluster.

  2. Click Enable for the Tiering service.

    If the Google Cloud Storage tiering destination exists as a working environment on the Canvas, you can drag the cluster onto the Google Cloud Storage working environment to initiate the setup wizard.


  3. Define Object Storage Name: Enter a name for this object storage. The name must be different from any other object storage you may be using with aggregates on this cluster.

  4. Select Provider: Select Google Cloud and click Continue.

  5. Complete the steps on the Create Object Storage pages:

    1. Bucket: Add a new Google Cloud Storage bucket or select an existing bucket.

    2. Storage Class Life Cycle: Cloud Tiering manages the life cycle transitions of your tiered data. Data starts in the Standard class, but you can create rules to move the data to other classes after a certain number of days.

      Select the Google Cloud storage class that you want to transition the tiered data to and the number of days before the data is moved, and click Continue. For example, you might transition tiered data from the Standard class to the Nearline class after 30 days in object storage, and then to the Coldline class after 60 days in object storage.

      If you choose Keep data in this storage class, the data remains in that storage class. See supported storage classes.

      Note that the life cycle rule is applied to all objects in the selected bucket.

    3. Credentials: Enter the storage access key and secret key for a service account that has the Storage Admin role.

    4. Cluster Network: Select the IPspace that ONTAP should use to connect to object storage.

      Selecting the correct IPspace ensures that Cloud Tiering can set up a connection from ONTAP to your cloud provider’s object storage.

  6. Click Continue to select the volumes that you want to tier.

  7. On the Tier Volumes page, select the volumes that you want to configure tiering for and launch the Tiering Policy page:

    • To select all volumes, check the box in the title row and click Configure volumes.

    • To select multiple volumes, check the box for each volume and click Configure volumes.

    • To select a single volume, click the row (or the pencil icon) for the volume.

  8. In the Tiering Policy dialog, select a tiering policy, optionally adjust the cooling days for the selected volumes, and click Apply.
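For reference, the 30-day Nearline / 60-day Coldline example above corresponds to an object life cycle configuration like the following. This is for illustration only; Cloud Tiering creates and manages these transitions itself, so don't apply life cycle rules to the bucket yourself:

```json
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30}
    },
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 60}
    }
  ]
}
```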



You’ve successfully set up data tiering from volumes on the cluster to Google Cloud object storage.

You can review information about the active and inactive data on the cluster. Learn more about managing your tiering settings.

You can also create additional object storage if you want to tier data from certain aggregates on a cluster to different object stores, or if you plan to use FabricPool Mirroring, where your tiered data is replicated to an additional object store. Learn more about managing object stores.